
Friday, June 19, 2009

Winning the Quality War: HP Customers Offer Case Studies on Managing Application Performance

Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas the week of June 15, 2009.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas. We’re here in the week of June 15, 2009 to explore the major enterprise software and solutions trends and innovations that are making news across the global HP ecology of customers, partners and developers.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and I'll be your host throughout this special series of HP Sponsored Software Universe live discussions.

Now, please join me for our latest discussion, a series of user discussions on quality assurance issues. Our first HP customer case study comes from FICO. We are joined by Matt Dixon, senior manager of tools and processes, whose department undertook a service management improvement effort that earned an award for operational efficiency and integrity. Welcome to the show, Matt.

Matt Dixon: Thanks, Dana. I’m glad to be here.

FICO's service-management approach

Gardner: Tell me a little bit about how you developed a service management portfolio approach to the remediation and changes that take place vis-à-vis your helpdesk. It sounds like an awful lot of change for a large company?

Dixon: Yes. We did go through a lot of changes, but they were changes that we definitely needed to go through to be able to do more with less, which is important in this environment.

The IT service management (ITSM) project that we undertook allows us to centralize all of our incidents, changes, and configuration items (CIs) into one centralized tool. Before, we had all these disparate tools out there, and we had to go to different tools and spreadsheets to find information about servers, network gear, or those types of things.

Now, we’ve consolidated it into one tool which helps our users and operations folks to be able to go to one spot, one source of truth, to be able to easily reassign incidents, migrate from an incident to a change, and see what’s going to be impacted through the configuration management database (CMDB).

Gardner: Perhaps you can help our listeners better understand what FICO does, and then what sort of helpdesk and operational staff structure you have?

Dixon: FICO, formerly known as Fair Isaac, is a software analytics company. We help financial institutions make decisions, and we're primarily known for FICO scores. If you apply for a loan, half the time a FICO score is involved. We're about 2,300 employees. Our IT staff is about 230. We're a global company, and our helpdesk is located in India. It's 24x7, and they are FICO employees -- so that's important to know.

Gardner: Tell me about the problem set you’re trying to address directly with your IT service management approach?

Dixon: We had two primary objectives we were trying to meet with our ITSM project. The first was to replace our antiquated tool sets. As I said before, we had disparate tools that were all over the place and were not integrated. Some were developed internally, and the development team had left. So, we were no longer able to keep up with the process maturity that we wanted, because the tools could not support the process improvements we had in mind.

In addition to that, we have a lot of sensitive data -- everything from credit data to medical data to insurance data. So, we go through a vast number of audits per year, both internal and external, and we identified some gaps with our previous ITSM solution. We undertook this project to close those gaps, so we could meet those audit requirements.

Gardner: I suppose in today’s economy, making sure your operations are efficient, making sure that these changes don’t disrupt, and maintaining the applications in terms of performance are pretty important?

Dixon: They're very important definitely in today’s economy, and through the completion of our project we've been able to consolidate tools to increase those efficiencies and to be able to do more with less.

Gardner: As you transitioned from identifying your problems to knowing what you wanted, how did you arrive at a solution?

Request for proposal

Dixon: We sent a request for proposal (RFP) to four different companies to ask them how they would help us address the gaps that our previous tool sets had identified. Throughout that process, we kept a scorecard, and HP was chosen, primarily for three reasons.

Number one, we felt that the integration capabilities within HP, both currently and the future roadmaps, were better than the other solution sets. Number two, we thought that universal configuration management database (UCMDB), through its federation, offered a more complete solution than other CMDB solutions that were identified. The third one was our partnerships and existing relationships with HP, which we relied upon during the implementation of our ITSM solution.

Gardner: And so, were there several products that you actually put in place to accomplish your goals?

Dixon: We chose two primary products from HP. One was Service Manager where we log all of our changes and incidents, and then the second one was the UCMDB, and we integrated those two products, so that the CIs flow into Service Manager and that information flows out of Service Manager back into UCMDB.
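For a feel of what that two-way flow involves, here is a minimal sketch in Python. The client objects and method names are illustrative stand-ins, not the actual Service Manager or UCMDB APIs:

    # Hypothetical sketch of a CMDB-to-service-desk sync, with
    # invented client objects standing in for the real products.
    def sync_cis(cmdb, service_desk):
        """Push CIs into the service desk, then write incident
        activity back onto each CI record."""
        # Forward flow: discovered CIs become selectable records
        # in the incident and change modules.
        for ci in cmdb.list_configuration_items():
            service_desk.upsert_ci(
                ci_id=ci["id"],
                name=ci["name"],
                ci_type=ci["type"],  # e.g., server or network gear
            )

        # Reverse flow: incidents logged against a CI flow back
        # so the CMDB reflects current activity.
        for incident in service_desk.list_open_incidents():
            cmdb.attach_activity(
                ci_id=incident["ci_id"],
                kind="incident",
                ref=incident["number"],
            )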

Gardner: How long have you had this in place, and what sort of metrics or success and/or payback have you had?

Dixon: We started our implementation last summer, in July of 2008. We went live with Incident in August. We went live with Change Management in October. And, we went live in January with Configuration Management. It was kind of a phased rollout. We started last July, and the project wrapped up in January of 2009.

From the payback perspective, we’ve seen a variety of different paybacks. Number one, now we’ve been able to meet and surpass audit requirements.

That was our number one objective -- make sure that those audits go much faster, that we can gather the information quicker, and that we can meet and surpass audit requirements. We’ve been able to do that.

Number two, we've improved efficiencies, through templates and by no longer having to double-enter data into disparate tools. Now, we have one tool, and that information tracks across all of its modules. You don't have to double-enter data.

The third one is that we've improved visibility through notifications and reporting. Our previous toolset had few reporting capabilities and no notification options. Now, we can report on first-call resolution. We can report on mean time to recover. We can report on all the important information the business is asking for.

The last one is that we have more enforcement of, and buy-in to, our processes. The number of changes logged has gone up by 21 percent over our previous toolset. It's easier to log a change, and we have different change processes and workflows that we've been able to develop. So, people buy into the process.
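To put numbers behind the reporting payback Dixon mentioned, the two headline metrics -- first-call resolution and mean time to recover -- reduce to simple calculations over closed incident records. A sketch in Python, with field names that are assumptions rather than Service Manager's actual schema:

    from datetime import datetime

    def first_call_resolution_rate(incidents):
        # Share of incidents closed without any reassignment.
        closed_first = sum(1 for i in incidents if i["reassignments"] == 0)
        return closed_first / len(incidents)

    def mean_time_to_recover_hours(incidents):
        # Average hours from open to close.
        total = sum((i["closed_at"] - i["opened_at"]).total_seconds() / 3600
                    for i in incidents)
        return total / len(incidents)

    incidents = [
        {"reassignments": 0,
         "opened_at": datetime(2009, 5, 1, 9, 0),
         "closed_at": datetime(2009, 5, 1, 10, 30)},   # 1.5 hours
        {"reassignments": 2,
         "opened_at": datetime(2009, 5, 2, 14, 0),
         "closed_at": datetime(2009, 5, 3, 14, 0)},    # 24 hours
    ]
    print(first_call_resolution_rate(incidents))   # 0.5
    print(mean_time_to_recover_hours(incidents))   # 12.75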

Gardner: You've got this information in one place, where you can analyze it and feel comfortable that all the changes are being managed, and nothing is falling between the cracks. Is there something you can now do additionally with this data in this common, managed repository that you couldn't do before? Or, were there additions or improvements from moving off a variety of different systems or approaches?

Dixon: We have a lot of plans for the future, things that we’ve identified that we can do. Some of the immediate impacts we’ve seen are our major problem channels -- which CIs have the most incidents logged against them. We identify CIs in incidents. We identify CIs in changes. Now, we can run reports and say, "Which CIs are changing the most? Which CIs are breaking the most?" And, we can work on resolving those issues.

Then, we've continually improved the process. We have a mature tool with a lot of integrations. We've been able to pull all this information together. So, we're setting up roadmaps, both internally and in partnership with HP, to continually improve our processes and tools.
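The "which CIs are breaking the most" report Dixon describes is, at bottom, a group-and-count over incidents by configuration item. A minimal sketch, again with assumed field names:

    from collections import Counter

    def noisiest_cis(incidents, top_n=10):
        # Rank configuration items by how many incidents cite them.
        counts = Counter(i["ci_id"] for i in incidents if i.get("ci_id"))
        return counts.most_common(top_n)

    # A result like [("srv-db-01", 42), ("sw-core-02", 17)] points
    # at the CIs worth a root-cause review first.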

Gardner: Well, great. We've been talking about a case study with a user, FICO, and how they've implemented an ITSM project. Thanks, Matt.

Dixon: Thanks, Dana. I appreciate it.

Gevity opts for PPM solutions

Gardner: Our second customer use case discussion today comes from Gevity, part of TriNet. We’re here to discuss how portfolio and project management (PPM) solutions have helped them. We’re here with Vito Melfi. He is the vice president of IT operations. Welcome.

Vito Melfi: Thank you.

Gardner: Tell us a little bit about how PPM solutions became important for you?

Melfi: Well, at Gevity, we had, as most other companies do, a whole portfolio of applications and a lot of resources. The desire on Gevity's part to become a very transparent IT organization was difficult to fulfill without knowing where your resources are, how you are using them, and how to re-prioritize applications against our company priorities.

The application of portfolio management became very critical, as well as strategic. Today, we have the ability to see across our resource base. We use the time-tracking system, and we can produce portfolio documents monthly. Our client base can see what’s out there as a priority, what’s in queue, and, if we have to change things, we can do so with great flexibility.

Gardner: Now, Gevity does a lot of application support for a number of companies in their HR function. So, applications are very important. Tell us more about how your company operates?

Melfi: We're a professional employer organization (PEO). We deliver payroll services, benefits and workers' comp services, and a host of other HR services. We're essentially an HR service company for hire. We believe that we can provide these capabilities better as a service provider than most companies can by trying to build this type of technology capability on their own.

Gardner: When you began looking into PPM, getting control of complexity probably was a number one concern for you?

Melfi: Absolutely. Complexity in an organization can be paramount if you don’t have good control over your resources and over your applications. At Gevity, we had a lot of people trying very hard to get control and get their arms around those things.

The technology that HP provides for PPM really is the enabler for us to figure out our whole portfolio requirement. The communication that comes back to our functional areas and to our client base has been very well received. It's something that we've found to be very valuable to us. Then, taking that through to Quality Center and Service Center, the integration of the three has been a big benefit to us.

Gardner: What would you say is the solution that this combination of products actually provides for you?

Melfi: The solution that we get out of our Service Center application is the ability to turn around the incidents that we have. We've been able to resolve incidents at a 70 to 80 percent first-call close rate. This was our ratio with 10 people a couple of years ago, and it's still our ratio with 7 people doing it. Our service level has held steady, and actually improved a bit, while our employee base has gone down. This is particularly important to us as we go forward with our parent company, TriNet, because now we're going to be merging east- and west-coast operations.

Gardner: So, it’s greater visibility and greater control. How does that translate into returns on either investment in dollars and cents or in the way you can provide service and reliability to your users?

Melfi: It translates in a couple of ways. Better internal customer service is always paramount to us. By being able to do more with less, obviously we can take our funding and look into different areas of investment. Not having to invest in adding people to scale our services creates opportunity for us elsewhere in the organization.

Gardner: Okay. I wonder if there are any lessons that you might have for other folks who are looking at PPM? And, expanding on that, what would you do differently?

Melfi: We knew this, but it really comes to bear when you're actually implementing your toolset: the key to success is having good processes. If you have those processes in place, the implementation of the toolset is a natural transition for you.

If you don't have good processes in place, the tool itself will help, but you're going to have to take a step backwards and understand how these things interact -- two, or three, or however many you're implementing. So it's not a silver bullet. It's not going to come and automate everything for you. The key is to have a really good grasp on what you do, how you do it, and what your end game is, and then use the tools to your advantage.

Gardner: We’ve been talking about the use of PPM solutions with Vito Melfi. He is the vice president of IT operations at Gevity. Thanks.

Melfi: Thank you.

JetBlue revs up test cycle

Gardner: Our third customer today comes from an HP Software & Solutions Awards of Excellence winner, JetBlue Airways. We’re here with Sagi Varghese, manager of quality assurance at JetBlue. Welcome.

Sagi Varghese: Hi. How are you?

Gardner: Good. Tell us about the problems that you faced, as you tried to make your applications the best they could be?

Varghese: About two years ago, our team picked up the testing for our online booking site, which is hosted in-house. At that time, we had various issues with the stability of the site, as well as the capability of the site. Being a value-add customer, we wanted to be able to offer our customers features beyond what came in the canned product offered by our business partner. We wanted to be able to offer additional services.

Over the last two years, we added a lot of features on top of our generic product -- integration with ancillary services like cars, hotels, and things like that -- and we did those at a very fast pace. A lot of these enhancements had to be rolled out in a very short time frame.

Almost two years ago, all of the testing was manual, and one of the first steps was to adopt a methodology, so that we could bring some structure and process around the testing techniques we were using. The next step was to partner with HP. We worked very closely with HP, not only on the functional aspects of the application, but also on the performance aspects of the application.

A typical end-to-end test cycle would take five to six people several weeks to completely test a new solution or a new release of the application. We made a business case to automate the testing effort -- the regression testing, as we call it, or the repeated testing, for want of a simpler term -- using HP's QuickTest Pro product, and we were able to complete the automation in less than four weeks. That became the starting point.

It involved using a test automation framework that worked with the QuickTest Pro product, and our testing cycles were reduced by about 70 percent. As time progressed, and we added more features to our online Website, we also became more mature in our utilization of the tool and moved more test scripts into our automated bucket, rather than the manual one. We went from 250 test cases to about 750 test cases that we run today, a lot of them overnight, in less than two days.
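The overnight run Varghese describes boils down to fanning the script inventory out across workers and collecting pass/fail results. A rough sketch of that pattern; the run-test command is a hypothetical placeholder, since real QuickTest Pro scripts are launched through the tool's own automation interface:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def run_script(script_path):
        # Hypothetical runner; pass/fail comes from the exit code.
        result = subprocess.run(["run-test", script_path])
        return script_path, result.returncode == 0

    def run_regression(scripts, workers=8):
        # Execute the whole regression bucket in parallel overnight.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(run_script, scripts))
        # Hand back just the failures for morning triage.
        return [name for name, passed in results if not passed]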

Gardner: At JetBlue, of course you’re in a very competitive field, the airline business. Therefore, all of your applications need to perform well. If your customers don’t get what they want in one or two clicks, you’re going to lose them. Tell me a little bit about the solution approach to making your applications better. Is it something that your testing did alone? What did you look for from a more holistic solutions perspective?

Varghese: One of the things that we were looking at was that customer experience. We were working with a product offered by a business partner, or a vendor, and we were allowed customizations on top of that. We were largely dependent on the business partner, because they host our reservations site. So, we're somewhat dependent on them for the performance of the application. We were able to work with them, using HP's LoadRunner product, to optimize the performance of the site.

Gardner: You mentioned a few paybacks in putting together better quality assurance. What sort of utilization did you get in some of the tools that you had in place, even though you were going from manual to a more automated approach?

Varghese: About two years ago, even though we had the tools, we made very limited use of them. We ran a few ad-hoc automated scripts every now and then. Since we adopted this framework a little over a year ago, we have had 100 percent utilization of the tool. We don't have enough licenses today; we definitely are in dire need of more.

Last year, every person on my team went to advanced training. Everybody on the team can execute the 700-plus scripts, pretty much overnight, if they had to. We could run them all in parallel. We have 100 percent utilization of the tool, and we're in need of more licenses. I wish we had that capability, and we will in the future.

Gardner: So you’ve been able to cut your testing costs. You have seen better utilization of the tools you have in place and higher demand for more. How does it translate into what you've been able to accomplish in terms of your post-production quality of applications?

Varghese: Historically, when we had manual test cases, delivering a new release or a functionality on our Website involved perhaps three to four months of effort, simply because it took us several weeks to go through one cycle of testing. Today, we are turning them around in less than two days, which means we can deliver more features to the market more often and realize the value.

As you may have heard, at JetBlue we have been offering even more legroom features. This year, we launched three or four products in the first quarter alone. We've been able to do that because of the quick turnaround time offered by the test automation capability.

Gardner: And not only do you reduce the time, what about the rate of failure?

Varghese: The rate of failure has been reduced greatly. We brought post-production failures down by about 80 percent or so. Previously, in the interest of time, we would compromise on quality and wouldn't necessarily do an end-to-end test. Today we have that -- I wouldn't say a luxury, but the ability -- to run an end-to-end test in less than two days. So, we're able to pretty much test all of the facets of an application, even if a particular module is not affected.

Gardner: Congratulations on winning the award. This is a great testament that you took this particular solution set and did very good things with it.

Varghese: Absolutely. Thank you very much. Thank you for having us.

Gardner: We've been talking with Sagi Varghese, manager of quality assurance at JetBlue, a winner today of HP Software & Solutions Awards of Excellence.

Thanks for joining us for this special BriefingsDirect podcast, coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas.

Also look for full transcripts of all of our Software Universe live podcasts on the BriefingsDirect.com blog network. Just search the web for BriefingsDirect. The conference content is also available at www.hp.com, just search on the HP site under Software Universe Live 2009.

I'm Dana Gardner, principal analyst at Interarbor Solutions, your host for this series of HP sponsored Software Universe Live Discussions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas during the week of June 15, 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, November 24, 2008

Enterprises Can Leverage Cloud Computing Benefits While Managing Risks Through Services Governance, Say HP Executives

Transcript of a BriefingsDirect podcast on cloud adoption best practices with HP executives.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on cloud computing, and how enterprises can best prepare to take advantage of this shift in IT resources use and acquisition -- but while also avoiding risks and uncertainty.

Much has been said about cloud computing in 2008, and still many knowledgeable IT people scratch their heads over what it really means. We’ll dig into the hype and opportunity for cloud computing with executives from Hewlett-Packard (HP) and EDS, an HP company. We'll discuss the pragmatic benefits -- and also the limits and areas of lingering immaturity for cloud-based delivery of mission-critical applications and data.

Here to provide the inside story on the current state of cloud computing we welcome our panel, Rebecca Lawson, Director of Service Management and Cloud Solutions at HP. Welcome to the show, Rebecca.

Rebecca Lawson: Thank you.

Gardner: Next, Scott McClellan, Vice President and Chief Technologist of Scalable Computing and Infrastructure in HP’s Technology Solutions Group (TSG). Welcome, Scott.

Scott McClellan: Thank you.

Gardner: And last, Norman Lindsey, Chief Architect for Flexible Computing Services at EDS, an HP company. Welcome, Norman.

Norman Lindsey: Thank you, sir.

Gardner: The trends and the talk around cloud have jumped around a fairly large landscape -- everything from social networking computing to Web services, video, and ... you name it. But what we are going to be talking about is primarily of interest to enterprises, and what we could continue to classify as utility or grid-type computing. First, I want to talk to Rebecca about what is changing around cloud computing, and why IT people should be taking this seriously at this time.

Lawson: Let me first say that at HP, we are really interested in just trying to articulate where we see cloud opportunities -- and how they differ from existing infrastructure, application and service environments. So the way that we define cloud at HP is that we consider it a means by which very particular types of highly scalable and elastic services can be consumed over the Internet through a low-touch, pay-per-use business model.

There is an implication with cloud that is different. It solves different problems than the ones we have been solving over the last few years, and it involves both breakthroughs in technology architecture and the confluence of those with new business models.

That’s kind of a mouthful, but we basically think that the enterprise should be aware of what’s happening at the infrastructure level, at the platform level, and at the application level -- and understand what opportunities they have to further source from the cloud certain services that will directly relate to the business outcomes that their organizations are trying to achieve.

Gardner: Do you think the interest at this time is primarily an economic story, or is it convenience? What are the drivers behind all this interest in cloud computing?

Lawson: There is an overriding notion that the cloud provides a lower-cost option for computing, and that may be true in a very few limited use cases. Really, from an enterprise point of view, when they are running mission-critical applications that need security and reliability, and are operating with service-level agreements (SLAs), etc., the cloud isn’t quite ready for prime time yet. There are both technical and business reasons why that’s the case.

As far as the idea of the cost savings, it’s good to look at why that is the case in a few certain areas, and then to think about how you can reduce the cost in your own infrastructure by using automation and virtualization technologies that are available today, and that are also used in the “cloud.” But, that doesn’t mean you have to go out to the cloud to automate and virtualize to reduce some cost in your infrastructure.

Gardner: Let's go to Norm Lindsey. There are a number of other similar, overlapping trends afoot today. There's virtualization at a number of different levels, application modernization, consolidation, next-generation data center architectures, service-oriented architecture (SOA), and an emphasis on IT service management.

Does cloud intersect with these? Is cloud a result of some of these? What is, in a sense, the relationship between some of these technology trends and these economics-driven cloud initiatives?

Lindsey: A lot of these technologies are enablers for a cloud approach to services. The cloud is an evolution of ideas that came before it -- grid computing and, before that, Web services. All these things combine to let people start thinking of this as delivering a service with a different business model, where we pay for it by the unit, or in advance, or after the fact.

Virtualization and these other approaches enable the cloud, but they aren’t necessarily the cloud. What IT departments have to do is start to think about what is it they’re trying to accomplish, what business problem they’re trying to address, as they look at cloud providers or cloud technologies to try and help solve those problems.

Gardner: It also seems that we are hearing about private clouds, or on-premises use of these architectural approaches, as well as public clouds, or third-party sourcing for either applications or infrastructure resources. Does this boil down to a service orientation, regardless of the sourcing? Perhaps you could help people better understand the difference between a private cloud and a public cloud?

Lindsey: Private cloud versus public cloud is part of this whole evolution that we've seen. We've seen people build their own private utilities versus public utilities, such as those Flexible Computing Services provides. The idea of a private utility is that, within an organization, they agree to share resources and allow the boundaries to slide back and forth to get the best utilization out of a fixed, or maybe growing, set of assets.

Nevertheless, they agree to share it to try to improve the utilization. The same idea applies in a public utility or a public cloud, except that now a third party is providing those assets and providing them as a service. That increases the concerns and considerations that you have to bring to the party. You have to think about problems that you didn't have to think about when you had a private utility.

When you go to a public space, security is paramount. What do I do with my proprietary information and service levels? How certain can I be of getting what I need when I need it? The promise of the cloud is great, but the uncertainty has caused people to come up short and decide maybe it's better to do it themselves, versus utilizing an outside service.

Gardner: Now, I think it’s fair to say that, at this point, this is all still quite new and experimental -- with developers, small companies, and some departments -- using such resources as Amazon Web Services. Clearly this is still in the very early innings, but some of the analyst firms are predicting as much as 5 percent of IT might be devoted to this in several years. While that’s a fairly large number in total, it’s still quite small in regard to the whole pie.

Let’s go to Scott McClellan. Are there really serious positive business outcomes that should entice organizations to start looking at cloud computing now?

McClellan: I definitely think there are. Basically I see the conversation happening between business and IT in two different ways, and one of them was already touched on earlier, when you were talking to Rebecca.

That has to do with the cost factor. That’s your business asking your IT department to reduce cost; CEOs put pressure on CIOs to deliver more with less.

So there are aspects of automation and virtualization that allow you to get to a more utilitized approach to delivering the services within your IT department -- to allow you to increase flexibility, reduce cost, drive up utilization, and things like that to address the cost issue. So there are real business drivers behind that, and that’s especially heightened in today’s economic climate.

In the longer term, the more overarching impact of cloud comes when your IT department can deliver value back to the business, rather than just taking cost out. Some examples of that are using aspects of social networking and other aspects of cloud computing, and the fact that cloud is delivered over ubiquitous media, the Internet, to increase share of wallet, increase market share, maybe bring higher margin to a business, and build ecosystems, and drive user communities for a business. That’s where cloud brings value to a business and that’s obviously important.

Gardner: So we have, at one level, an opportunity to take advantage of these technologies for pure efficiency’s sake for our internal IT operations. There is also this additional opportunity to use the clouds as a gateway to new or existing customers and be able to service them perhaps better through this ubiquitous medium of the Internet and perhaps at lower cost. Is that right?

McClellan: Yeah, it's absolutely true. The former, taking cost out, is the first way. The first wave of innovation from cloud computing is coming from making services consumable on a different model, more of a utilitized model, and that drives up utilization, etc. Unlocking some of the value requires innovating at the application tier, in many cases, but absolutely you can bring both benefits to a business.

Lawson: I'll give a concrete example of this. Let's choose, first, a service your business needs to have -- a credit-check service. Obviously, when you are selling a product, you want to make sure that your customer has credit, which, of course, is all the rage today.

You could think of a credit-check service as having a very specific business outcome. It may be that your company has an internally developed service that maybe you built, and it’s tied into your SAP, Ariba, or what have you.

Or, it may be that your credit-check service is hosted by an external service provider, but still designed in a traditional architectural manner. Or, it may be that there are credit-check services available through the cloud, designed in a different application architectural style that suits your purpose.

Either way, what IT is going to need to do is really think through a service-centric way of behaving and operating IT -- so that what's appropriate for that company can be arbitrated by IT, knowing that they have to take into consideration security, speed, and accuracy. For some companies, doing a credit check through a cloud service might be perfectly fine. For other companies, it may be way too risky, for whatever reason.

We need to think in terms of which services provide what level of value, based on the complexion of that particular company -- and it’s never going to be the same for all companies. Some companies can use Google Gmail as an email service. Other companies wouldn’t touch it with a 10-foot pole, maybe for reasons of security, data integrity, access rights, regulations, or what have you. So weighing the value is going to become the critical thing for IT.

Gardner: It appears that the ability to take advantage of cloud computing comes from an increased services orientation, and understanding the technologies and how to take advantage of them and exploit them -- but that the larger business decisions really are around which services should or shouldn’t be sourced in a certain way, and what level of comfort and risk aversion are acceptable.

This is probably going to be something that needs to be judged and managed company-by-company, even department-by-department.

How do companies start to get a handle around that decision process which seems critical -- not just how to take advantage of the technology but in which fashion should these services be acquired and managed?

Let’s go to Norm. How do people start managing, at a local individual level, the decision process around which services might become cloud services?

Lindsey: Start by looking at the business problem that you are trying to solve, and IT has to start looking at the requirements and dealing with it as a requirements issue, as opposed to a technical issue. They need to make sure that the requirements are clear and all stakeholders understand what you are doing.

Then you can start to look around at your internal capabilities, versus external, and make some decisions as to how you want to solve that problem, whether buying an external service or creating a service internally and delivering it to your customers with your own internal utility.

Gardner: Rebecca, this raises the question, then, of who owns this decision-making process around cloud utilization and resources. This seems to be an abstraction above IT, but you certainly need to know what IT processes are involved here.

I know we are early in this, but is there any sense of how ownership of the decision-making process around cloud is going to shake out?

Lawson: That’s a really great question, because a lot of people in the lines of business or business functions can go out to the Internet and make a decision. “Hey! We’re going to use Salesforce.com,” or what have you. Those decisions made without IT could have some really deep ripple effects that a line-of-business person might not realize.

People in the lines of business don’t think about data architecture and integrity, they don’t think about firewalls, they don’t think about disaster recovery, and they shouldn’t. That’s not their job.

So this will force IT to come closer to the people in the business and really understand what is the business objective, and then find the right service that maps to the value of that objective. Again, we can’t emphasize it enough. This should really change behavioral dynamics in IT and how they think about what their job is.

Lindsey: That’s a key point -- the IT guys become an enabler, as opposed to a gatekeeper. They know what the compliance issues are; they know what the regulatory rules are on their company to meet Sarbanes-Oxley, or whatever world they live in.

The line of business has the business problem, and they need to focus on what their problem is and let IT answer the question in terms of, "These are some possible solutions. This is what they cost. Now tell me which one you want." But these will all have to meet the myriad requirements that we have to live within.

Gardner: It appears to me that there are a couple of different levels of risk here. One risk would be that people start jumping into cloud and external-service consumption piecemeal, without it being governed or managed centrally, or with some level of oversight in a holistic sense.

The other risk might be that you are so clamped down, and you are so centralized and tightly managed, that no one takes advantage of efficiencies that become available through the cloud. You then have unfortunate costs and an inability to adapt quickly.

Let’s go to Scott McClellan. How are companies expected to manage these types of risks, that is to say, over-consumption or under-consumption of cloud services? How can companies become more rational in how they approach these issues?

McClellan: In the process of getting to a service-centric IT governance model, they’re going to have to deal with the governance model for deploying new services. Again, I think risk is partly a function of benefit. So when there is a marginal benefit or when the stakes are very high, you would want to be very conservative in terms of your risk profile.

Basically, within the spectrum of things that are cloud computing, you have everything from infrastructure as a service … all the way up through virtualized infrastructure, a platform on top of that, an application on top of that, or perhaps a completely re-architected true cloud-computing offering.

As you move up that spectrum, I think the benefits increase, but not in all cases are the application domains available in all of those environments.

There are several choice points here. What services are available through some cloud model? What model of availability, and what are the characteristics of that model? What are the requirements for that particular service -- the security, performance, continuity, integration, and compliance requirements? Those all have to be taken in holistically, and through a governance model, to make the decision whether to move from the traditional deployment model to a cloud-delivery model, and if so, which one.
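One lightweight way to make those choice points operational is a weighted scorecard per candidate service. A sketch for illustration only -- the dimensions, weights, and scores here are invented, not an HP methodology:

    # Each dimension is rated 0-10 for how well a cloud delivery
    # model meets that requirement for the service in question.
    WEIGHTS = {
        "security":    0.30,
        "performance": 0.20,
        "continuity":  0.20,
        "integration": 0.15,
        "compliance":  0.15,
    }

    def cloud_readiness(scores):
        return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

    credit_check = {"security": 6, "performance": 8, "continuity": 7,
                    "integration": 5, "compliance": 4}
    print(cloud_readiness(credit_check))  # ~6.15 -- weigh against a threshold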

Gardner: To me, this governance issue sounds an awful lot like what we’ve heard around SOA, and what you need to put in place to take advantage of that approach.

Rebecca, are we really talking about the same set of issues here? If you put in a good SOA infrastructure, management, governance, and capability set -- and if you organize your culture and your people to think about services -- does that put you in a good position to manage cloud? You can find where it's appropriate, and then be able to find that balance between these risks?

Lawson: That’s a good observation, and there is a parallel between the notions of SOA, the loose coupling of services, and what we’re talking about here. The hard part is that services come in many different flavors and architectural styles. So in reality you might be managing a service that runs on a very old architectural style, but it really delivers value. You really want to maintain it, and it’s worth it. You might also want to adopt a Web-oriented architectural approach, vis-à-vis using some cloud services in another part of the organization.

The parallel is there. People who’ve grown up through a SOA kind of model naturally gravitate to this. The service provider and consumer relationship is a big change with cloud because, all of a sudden, providers look different than they used to.

Companies that you didn’t think of as service providers are now a service provider. You never used to think of Amazon as a company you might go to to get compute from. You used to buy books there.

So what happened? All of a sudden, lots of people can become providers in startling ways, which is great. It’s a whole new burst of creativity and possibility in the area of technology-enabled services. Obviously, we have to tread carefully, because businesses have to grow, and you’ve got to choose wisely.

Gardner: I wonder if there are other precursors to organizations being better able to take advantage of cloud computing, but at low risk. I suppose one would be IT service management, treating IT as a bureau or service provider, the charge back type of system.

Any input, Norm, on some of these other precursors that organizations might think about as they start to wonder how they can best take advantage of cloud?

Lindsey: Actually, one of them is one you haven’t brought up, which is a lot of times they are out of space and out of time. They have some idea or they have some new business. They want to load it and they are out of room in their data center.

Or it’s something that just comes up really quickly, and they need to act quickly. The flexibility and the nimbleness of the cloud enable them to respond. So, as far as the drivers inside the business, that’s one of the big ones. The other one is just running out of power and space inside of their existing facility.

Gardner: I suppose that gives them the opportunity to ramp up, but without a whole lot of upfront capital expense. They can pay for this on a per-use basis, right?

Lindsey: Precisely. You rent instead of buying. The other obvious benefit is that you have minimized your risk and you can turn it off, if things don’t go the way you want them to.
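Lindsey's rent-instead-of-buy point reduces to simple arithmetic once utilization is estimated. A back-of-the-envelope comparison in Python, with every figure invented for illustration:

    def owned_cost(capex, monthly_opex, months):
        return capex + monthly_opex * months

    def rented_cost(hourly_rate, hours_per_month, months):
        return hourly_rate * hours_per_month * months

    # A spiky workload needing ~100 hours a month for a 6-month pilot:
    print(owned_cost(capex=50_000, monthly_opex=1_000, months=6))       # 56000
    print(rented_cost(hourly_rate=2.0, hours_per_month=100, months=6))  # 1200.0
    # And if the pilot is cancelled, the rented cost simply stops.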

Gardner: Let’s look at some of the things that cloud computing can’t do so well. Obviously, as they say, we are in the early innings here. Let’s go to Scott McClellan on this. Not all applications can be delivered by a cloud. There are design and data issues and application programming interface (API) issues. We’re not ready for database joins and two-phase commits, and needs around transactional integrity where you need to have correction of transactions, and so forth.

Maybe you can help our listeners understand, at least for the foreseeable future, what types of applications and services might be appropriate for cloud -- and which ones would not be?

McClellan: It's partly a matter of how modern the application architecture that enables the service is. So, it is a bit of a continuum. To some extent, the question isn't, "Can it be delivered in a service model?" but "Can it be delivered in a service model at the necessary scale, on a cost curve that allows the service to be delivered at an attractive price?"

So it's not a simple black-and-white question of whether a particular service is possible in the cloud. You might be able to take a legacy-architected application and deliver it in, say, a software-as-a-service (SaaS) model, assuming its basic underlying architecture is relatively modern and it can be Web-enabled, with appropriate user interfaces and so forth.

Some of the data services in truly scalable cloud-computing infrastructure are still immature. Examples like Google's BigTable or the Hadoop data-services level do provide some relational data semantics, but they are nowhere near as rich as the full database semantics provided by mature database management systems. As you mentioned, there is no way to do a join.
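To see what the missing join means in practice: over a BigTable- or key-value-style store, work that a SQL database would do in one statement has to be stitched together in application code. A toy sketch, with in-memory dictionaries standing in for two tables:

    # Client-side hash join -- the work the application inherits
    # when the data layer offers no JOIN.
    orders = [
        {"order_id": 1, "customer_id": "c1", "total": 30.0},
        {"order_id": 2, "customer_id": "c2", "total": 12.5},
    ]
    customers = {
        "c1": {"name": "Acme"},
        "c2": {"name": "Globex"},
    }

    # Equivalent of: SELECT o.order_id, c.name, o.total
    #   FROM orders o JOIN customers c ON o.customer_id = c.id
    joined = [
        {"order_id": o["order_id"],
         "name": customers[o["customer_id"]]["name"],
         "total": o["total"]}
        for o in orders if o["customer_id"] in customers
    ]
    print(joined)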

Gardner: It seems an important hurdle to overcome in taking advantage of cloud would be the proper mixing, if you will, of data. There needs to be some kind of a sharing, where not the entire database, but perhaps a level of meta data might be shared between different organizations, private and public.

Do you have any thoughts, Rebecca, on how HP views that sharing, that data issue? Again, that's something for an IT department, or maybe even a marketing department, to tackle.

Lawson: Obviously, there will be data that you just don’t want to share with anyone, but there is a good use-case out in the cloud for a provider to offer up a ton of data that might be valuable to a whole bunch of different consumers. Let’s say it’s demographic data, and they may want to make a marketer’s ability to access that data through a number of services very agile and very scalable. That would be an example of a potential place where somebody could write some cloud-based services or applications and offer them through the cloud.

Intelligence in data varies widely, so it’s hard to generalize. On the other extreme, inside the firewall, you might have some extremely rigorous requirements for what data goes into your enterprise data warehouse, who gets to access it, how the tables are set up, or what the security provisions are. That would be another extreme where you have no interest whatsoever in sharing that with anyone, and it’s considered core to the company.

So that’s a great example of where you have to really consider the value of the service and the output. What’s the business outcome and how should we think about where we let our data live, how we access our data, how we mash it up with other information sources. Again, the bad news is there is no simple answer; the good news is there are lots of opportunities to get very clear in what you want as a result of that data, and lots of places to get it.

Gardner: All right, let's give the last word to Scott. Clearly, the technologies are there for a scalable and agile infrastructure. The economics are apparently quite compelling.

This comes back down then to the organization behavioral risk management issue. My last question to you is, in a period of economic downturn where economics and cost issues are paramount, is cloud computing something that will be accelerated by the tough economic times, or will people back off from something like what cloud offers until they have a better picture in terms of growth?

McClellan: My personal prediction would be that the tougher economic conditions would heighten the acceleration of cloud computing, and not just because of the opportunity to save cost. Reinforcing what we brought up earlier, there are some clear opportunities to bring value to your business.

Examples of that are things like being able to drive user communities, users and consumers of whatever it is your business produces, using techniques of social networking, and things like that.

There is the question of how to use the advantages you get from cloud computing to drive differentiation for your business versus your competitors, because they're hesitating, or not using it, because they're being risk-averse. In addition, that complements the benefits you get from cost savings.

The other characteristic that the tough economic conditions could have on adoption of cloud computing is that it might cause customers to shy away from particularly painful places, where the risk is super-high, but it will kind of lower the barrier or the threshold that you have to clear for the opportunities that are less extremely risky, if that makes sense.

Gardner: I think you are talking about the high upfront capital outlays to start something. If you build it, you hope they will come, that kind of thing?

McClellan: That's on the service-provider side. There could be some risk aversion on service providers building out giant infrastructures, with just the hope that someone will come and consume them. I agree with your point there.

What I really meant is that, if you are an IT shop and you are trying to decide what to move to a cloud paradigm or a cloud model, you’re likely to really focus on the places where either you can get that big win -- because moving this particular service to a cloud paradigm is going to bring you some positive differentiation, some value to your company.

Or, you are going to get big cost savings. But for the places that are the most mission-critical -- where you have the least tolerance for downtime, the greatest continuity requirements, or the most stringent performance SLAs -- the thinking may be, "Well, we'll tackle that later. We're not going to take a risk on something like that right now."

In the places where the risk is not as great -- and the reward either in terms of cost or value looks good -- the current economic conditions are just going to accelerate the adoption of cloud computing in enterprises for those areas. And they definitely do exist.

Gardner: It gives companies a series of additional choices at a time when that might be exactly what they need.

McClellan: That's right. And in some cases, it's not super-expensive to move to this model, and you'll have a quick payback in terms of return on investment (ROI). If you are bringing value to your company and differentiation, this is a good time to do that. Strike while there is a sense of urgency. It creates a sense of urgency to strike. I guess I would say it that way.

Gardner: We’ve been discussing some of the advantages and potential pitfalls of cloud computing. It seems that the opportunities are there for those who examine it carefully and appropriately, and can balance the risks to get the rewards.

We’ve been chatting today with Rebecca Lawson, the Director of Service Management and Cloud Solutions at HP. Thanks, Rebecca.

Lawson: Thank you.

Gardner: Also, Scott McClellan, Vice President and Chief Technologist of Scalable Computing and Infrastructure at HP's Technology Solutions Group. Thanks so much, Scott.

McClellan: Thank you very much. I appreciate the opportunity.

Gardner: And also, Norman Lindsey, Chief Architect for Flexible Computing Services at EDS.

This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

For more information on HP Adaptive Infrastructure, go to:
www.hp.com/go/ai

Transcript of a BriefingsDirect podcast on cloud adoption best practices with HP and EDS executives. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Tuesday, September 30, 2008

Improved Insights and Analysis From Systems Logs Reduce Complexity Risks From Virtualization

Transcript of a BriefingsDirect podcast on the infrastructure management and security challenges of virtualization.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion about virtualization: how to improve management of virtualization, how to gain better security using virtualization techniques, and how to find methods for compliance and regulation -- without the pitfalls of complexity and mismanagement.

We're going to be talking about virtualization best practices with several folks who are dealing with this at several different levels. We're going to be hearing from VMware, Unisys, and LogLogic.

Let me introduce our panel today. First, we're joined by Charu Chaubal, senior architect for technical marketing at VMware. Welcome, Charu.

Charu Chaubal: Thank you.

Gardner: We're also joined by Chris Hoff, chief security architect at Unisys. Hi, Chris.

Chris Hoff: Hi, how are you?

Gardner: Great. Also, Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic. Welcome to the show.

Dr. Anton Chuvakin: Hello. Thank you.

Gardner: Virtualization has certainly taken off, and this is nothing new to VMware. Organizations like Unisys are now doing quite a bit to help enterprises utilize, expand, and enjoy the benefits of virtualization. But virtualization needs to be done the correct way, avoiding the pitfalls. If you do it too tactically, without allowing it to be part of an IT lifecycle and without management, then the fruits and benefits of virtualization can be largely lost.

Before we get into what virtualization can do, what to avoid, and how to better approach it, I'd like to just take a moment and try to determine why virtualization is really hot and taking off in the market now.

Let's start with Chris Hoff at Unisys. Some of these technologies have been around for many years. What is it about this point in time that is really making virtualization so hot?

Hoff: It's the confluence of quite a few things, and we see this sort of event happen in information technology (IT) quite often. You have the practically perfect storm of economics, technology, culture, and business coming together at one really interesting point in time.

The first thing that comes to mind is when people think about the benefits. The reasons people are virtualizing are cost, cost savings and then cost avoidance, which is usually seconded by agility and flexibility. It’s also about being able to, as an IT organization, service your constituent customers in a manner that is more in line with the way business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.

These things are contributing a lot to the uptake, not to mention the advent of a lot of new technology in both hardware and software, which is starting to enable some of this to be more realistic in a business environment.

Gardner: Now over to VMware. Charu, tell us how deep and wide virtualization has become. It seems like people are using it in more and more ways, and in more and more places.

Chaubal: That's right. When x86 virtualization first started out in a big way, maybe 10 years ago, it was largely being used in test and development types of environments. Over the last five years, it has definitely started to enter the production arena as well. We see more and more customers running even mission-critical applications on virtualization technologies.

Furthermore, we also see it across the board in terms of customer size, where everyone from the smallest customers to the very largest enterprises is expanding further and further with their virtual environments.

Gardner: Let's go to LogLogic. Tell me, Anton, what sort of security and what sort of preventative measures are you helping your customers with, in terms of gaining visibility and analytics about what's going on among these many moving parts? Many of these deployments are now in an automated mode, more so than before they were virtualized. What are some of the issues that you are helping people deal with?

Chuvakin: You were exactly right about the visibility into the environments. As people deploy different types of IT infrastructure, first physical and now virtual, there is always a challenge in figuring out what happens with those systems and at those systems -- who is trying to connect to them, or even attack them -- all of this at the same time, around the clock.

Adding virtualization to the technology that people use, in such a massive way as is occurring now, brings up the challenge of how we know what happens in those environments. Is anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways, or I would say a primary way, of gaining that visibility for most IT compliance, and virtualization is no exception.

As a result, as people deploy VMware and applications on top of virtual platforms, the challenge is knowing what actually happens on those platforms, what happens in those virtual machines (VMs), and what happens with the applications. Logging and LogLogic play a very critical role in not only collecting those bits and pieces, but also creating a big picture, or a view of that activity, across their organizations.

Virtualization definitely solves some of the problems, but at the same time, it brings new issues that people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where the server was, you knew how to monitor it, and you knew what applications ran there.

In virtual environments, that certainly is true, but at the same time it adds another layer of this server going somewhere else, and you monitor where it was moved, where it is now, and basically perform monitoring as servers come up and down, disappear, get moved, and that type of stuff.
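As a small illustration of building that big picture from bits and pieces, here is a sketch that tallies events per virtual machine from a stream of syslog-style lines. The line format is a made-up example, not LogLogic's actual parsing:

    import re
    from collections import defaultdict

    # Made-up format: "<timestamp> <vm-name> <facility>: <message>"
    LINE = re.compile(r"^\S+ (?P<vm>\S+) (?P<fac>\w+): (?P<msg>.*)$")

    def tally_events(lines):
        # Count events per VM, flagging auth failures separately so
        # abuse attempts stand out across the environment.
        totals = defaultdict(lambda: {"events": 0, "auth_failures": 0})
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue  # unparseable lines would go to a review queue
            rec = totals[m.group("vm")]
            rec["events"] += 1
            if "authentication failure" in m.group("msg"):
                rec["auth_failures"] += 1
        return totals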

Gardner: Now, Chris at Unisys, when you're dealing with customers, based on what we've heard about this expansion of virtualization, you're dealing with it on an applications level, and also on the infrastructure and server level.

What's more, some folks are now getting into virtual desktop infrastructure and delivering whole desktop interfaces out to end-user devices. This impacts not just a server. We're talking about network devices and storage devices. This is a bit more than a tactical issue. It starts getting strategic pretty quickly.

Hoff: That's absolutely correct. If you look at virtualization as an enabling technology or platform, and you look out over the next three years of large companies' strategic plans, you'll notice a large trend toward what you might call "real-time infrastructure."

The notion here is how you take this enabling technology and the benefits of virtualization and leverage them to provide automation and re-purposing. You have to deal with elements and issues that relate to charge-back for assets, as IT becomes more of a utility service.

If we look further out from there, we get to the governance issues of what it means to focus not on hardware anymore, or even applications, but on services and service levels. It gets a lot more strategic as this plays out along the continuum.

While we focus virtualization on the notion of infrastructure and technology, what's really starting to happen now -- and what's important to the customers we deal with -- is being able to unite business process and business strategy with the infrastructure and the architecture that support them.

So we're a little excited and frothed up as it relates to all the benefits of virtualization today, and the bigger picture is even more exciting and interesting. That's going to fundamentally continue to change what we do and how we do it as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring is really going to be an interesting challenge for folks to get their arms around.

Gardner: Now, Charu at VMware, you obviously are building out what you consider the premier platform and approach to virtualization, technically. You've heard about the opportunity for professional services and methodologies for approaching this, and you have third parties like LogLogic trying to provide better visibility across many different systems and devices.

How are you using this information in terms of what you bring to the management table for folks who are moving from, say, tactical to more strategic use of virtualization?

Chaubal: A lot of customers are expanding their virtualization so much now, to the point where they're hitting some interesting challenges that they maybe wouldn't have hit before. One great example is around compliance, such as Payment Card Industry Data Security Standards (PCI) compliance. There are a lot of questions right now around virtualizing those systems that process credit card holder data.

They're asking, "If I do this, am I going to be compliant with PCI? Is this something that's a realistic possibility? If it is, how do I go about demonstrating this to an auditor?"

This is where partners like LogLogic come into play, because they have the tools that can help achieve this. We believe that VMware provides a compliance-ready platform, so compliance is something you can achieve with it. But, in order to demonstrate and maintain that compliance, it's useful to have these tools from partners that can help you do that.

Gardner: Now, Anton at LogLogic, you're able to examine a number of different systems, gather information, correlate that information, do analytics, and provide a picture of what should be happening. Or, when something is not happening, you can look for the reasons why and look for aberrant or unusual behavior. So let's address security a little bit.

What are some of the challenges in terms of security when you move from a physical environment for compute power and resources to a virtualized environment? Then second, what about the mixture? It is obviously going to be both physical and virtualized instances of infrastructure and applications. Tell us about the security implications.

Chuvakin: I'll just follow the same logic I used for our recent webcast about virtualization security, where I presented a full view of things that are the same and things that are different in virtualized environments. I'll use the same structure, because some people who get too frothy, as Chris put it, about virtualization insist that "virtualization changes everything." That is sometimes used as an excuse not to do things that you should continue doing in a virtualized environment.

Let's start with what's the same. When you migrate from a physical to a virtual infrastructure, you certainly still have servers, applications running on those servers, and people managing those servers. That means you should keep using the same monitoring, auditing, and security technologies you use today. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log analysis tools, because you still have servers and applications.

They might be easier to monitor in virtual environments; it might sometimes be harder. But you shouldn't abandon things that are working for you in the physical environment just because virtualization changes a few things. The fact that you still have applications and servers serving business purposes means you shouldn't stop doing the useful things you're doing now.

Now, layered on top of what you already have are the new things that come with virtualization. The fact that a server might be there one day but gone tomorrow -- or not be there one day, then be built up, used for a while, and removed -- definitely brings new challenges to security monitoring and security auditing: figuring out who did what, and where.

The definition of "who" didn't change. It's still a user. But "what" and "where" definitely did change. If something was done on a certain server, in a virtual environment it might not be a server -- it might be a virtual image, which adds additional complexity.

There are also new things that have no equivalent in the physical environment -- for example, a rogue VM: a VM built by somebody who is not authorized to run VMs. It might be an end user who has his own little mini-infrastructure. That brings up all sorts of forensic challenges that you now have to solve. You don't just investigate a machine. You investigate a machine with a virtual platform, with another server on top, or another desktop on top.

This is my view: there are things that are the same, which you should continue doing, and things that are new, which you should start learning -- how to audit and analyze activity in virtual environments, and how to do forensics when what you have is a machine with a potential rogue VM.
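
As a concrete illustration of the rogue-VM problem Chuvakin describes, here is a hedged sketch of one simple detection approach: diff the hypervisor's running inventory against an approved list. The inventory source, the VM names, and the approved_vms.txt file are all assumptions made for the example, not any vendor's actual tooling.

```python
# Hypothetical sketch: flag "rogue" VMs by diffing the hypervisor's
# running inventory against an approved inventory file. File names
# and VM names are assumptions for illustration only.

def load_approved(path="approved_vms.txt"):
    """One approved VM name per line; returns an empty set if the file is missing."""
    try:
        with open(path) as f:
            return {ln.strip() for ln in f
                    if ln.strip() and not ln.startswith("#")}
    except FileNotFoundError:
        return set()

def find_rogue_vms(running_vms, approved):
    """Return VMs that are running but not on the approved list."""
    return sorted(set(running_vms) - approved)

if __name__ == "__main__":
    # In practice the running list would come from the hypervisor's
    # management API or its logs; hard-coded here for the example.
    running = ["payroll-db-01", "web-frontend-02", "jsmith-test-vm"]
    for vm in find_rogue_vms(running, load_approved()):
        print(f"ALERT: unapproved VM running: {vm}")
```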

Gardner: How about you, Chris at Unisys, how do you view implications for security and risk mitigation when it comes to moving increasingly into virtualized environments?

Hoff: I have to take a pretty pragmatic approach. The reality is that there are three conversations and three separate questions that need to be addressed, when you're talking about security in virtualized environments.

Unfortunately, what usually happens is that all three of them are combined into one giant question, which tends to lead to more confusion. So I like to separate the virtualization and security questions into three parts.

One of them is securing virtualization: understanding what the impacts are on your architecture, your infrastructure, and your business processes and models when you introduce this new virtualization layer. That's really about securing the underlying virtualization platforms and understanding what changes when you introduce them, and how that will ultimately flow down operationally.

The second point or question to address is one of virtualizing security, which is actually the operational element of, "What does it mean, and how do I go about taking what I might do in the physical world, and replicate that and/or even improve it in the virtual world?"

That's an interesting question, assuming that you have a good understanding of architecture and things that matter most to you, and how you might protect them, or how you might not be doing that. You may find several gaps today in your ability to actually do what you do in the physical world.

The third element is security through virtualization. That one says: okay, assuming that I have a good architectural blueprint, that I understand the impacts and the models, who and what changes operationally, and how I have to go about securing things -- what benefits do I get out of virtualization?

How do I actually improve my security posture by using these platforms and this technology? If you look at it in that way, you really are able to start dealing with the issues associated with each category. You could probably guess that if you mixed all three of them up, you could go down one path and very easily be distracted by another.

When we break out the conversations with customers like that, it always comes back to a very basic premise that we seem to have forgotten in our industry. Despite all the technology, all the tools, and all of the things that go blinky-blink at night, the reality is that this comes down to being able to appropriately manage risk. That starts with understanding the things that matter most to you, and using risk-assessment frameworks and processes.

In a gross analogy: when you go to the grocery store, you take time to pack your frozen goods in one bag, and your canned goods and your soft goods in other bags. You use this compartmentalization instinctively -- and the same thinking is needed when you weigh all of the wonderful mobility against compliance and security needs.

If you get home and you've got canned goods in with your fruit, the reality is that you've not done a good job of compartmentalizing and understanding the impact one good might have on the other.

The same thing applies in the virtual world. If you don't take the time to go back to basics and understand the impact of the infrastructure and the changes, you're going to be in a world of hurt later, even if you get the cost benefits and all the wonderful agility and mobility.

We really approach it pragmatically, in a rational manner, so that people understand both the pros and the cons of virtualization in their environments.

Gardner: We've determined that virtualization is quite hot. It's ramping up quickly. A number of studies have shown a 50-70 percent increase in the use of virtualization in the last few years. Projections continue for very fast-paced growth.

We also see a number of organizations using multiple vendors, when it comes to virtualization. We've also discussed how security and complexity apply to this, and that you need a comprehensive or contextual view of what's going on with your systems -- particularly if you have a mixture of physical and virtual.

Let's look at some examples of how this has been mitigated, how the risk has actually been decreased, and how the fruits, if you will, of virtualization are enjoyed without the pitfalls.

Let's first go to Charu at VMware. Can you offer some examples of how people have used virtualization, done it the right way, avoided some of these pitfalls, and gained the visibility and analytics that helped them mature their approach to virtualization?

Chaubal: One thing we've done at VMware over the last year and a half is try to provide as much prescriptive guidance as we can. So a lot of securing of virtualization comes down to making sure you actually deploy it [properly].

So, one thing we've done is create hardening guides that really aim to show customers how this can be done. That's proved to be very popular among our customers.

Not to get into too much detail, but one of the main issues is the fact that you have a virtualization layer that typically has a management interface in it. Then, you have the interface that goes into your virtual machines. People need to understand that this management layer needs to be completely separated from the actual production network.

That principle is manifested in different recommendations and scenarios when you plan a deployment and configure it. That's just one example where customers have been able to make use of our prescriptive guidance and architect something that is actually much more secure than what they might have built from their preconceived notions. I think that's one area where we're seeing success.
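
To illustrate the separation principle Chaubal describes, the sketch below checks that a management network shares no address space with the VM production networks. The subnets are invented for the example; a real deployment would pull them from the platform's configuration rather than hard-coding them.

```python
# Illustrative sketch: verify the management interface sits on a
# different subnet than the VM production networks. The addresses
# below are invented; real values would come from the host config.
import ipaddress

MGMT_NET = ipaddress.ip_network("10.10.0.0/24")   # management network
PRODUCTION_NETS = [
    ipaddress.ip_network("192.168.50.0/24"),      # VM traffic
    ipaddress.ip_network("192.168.60.0/24"),      # VM traffic
]

def management_isolated(mgmt, production_nets):
    """True if the management network overlaps no production network."""
    return not any(mgmt.overlaps(net) for net in production_nets)

print("management network isolated:",
      management_isolated(MGMT_NET, PRODUCTION_NETS))
```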

Gardner: Let's go to LogLogic. Anton, give us some examples -- actual companies, or at least use-case scenarios -- where LogLogic, or the methodologies it supports, has been brought to bear on virtualization to lower costs, increase performance, gain higher utilization, and so forth, but without some of these risks.

Chuvakin: I'll give the example of a retail company that was using LogLogic for compliance, as well as for operational uses such as troubleshooting their servers. This company, in a separate project, was implementing virtualization to convert some of their infrastructure to virtual machines.

At some point, those two projects collided. The company mainly had its log management tracking operations to satisfy PCI requirements, and it realized that it now had to collect logs not just from the physical infrastructure, but also from the virtual side that was being built.

What happened was that the logs from the virtual infrastructure were also streamed into LogLogic. LogLogic has the ability to collect any type of log, and in this case we used that capability to collect logs that were, at the time, not even natively supported or analyzed by LogLogic.

The customer understood that they had to collect the logs from the virtual platforms, and that LogLogic has the ability to collect any type of log. They started with a log-collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."

We also built up a package of content to analyze the logs while they were running their collection effort, so the logs would be ready for users. At LogLogic, we built and set up reports and searches to help them go through the data. So it really went in parallel: the customer ran a collection effort that included logs from the virtual platform, while we built up analytic content to make sense of the data.

In this case, it was actually a great success story, because we used the part of the LogLogic infrastructure that doesn't rely on any preconceived notions of what the logs are. Then they built on top of that to pinpoint issues with their VMs -- to see who accesses the platforms, what applications people use to manage the environment, and, basically, to track all sorts of interesting events in their virtual infrastructure.
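
In the collect-first spirit of that story, here is a toy search over raw, schema-less log lines for VM lifecycle events. The keywords, log format, and sample lines are assumptions made for illustration; LogLogic's actual reports and searches are far richer.

```python
# Toy sketch of searching raw, unparsed log lines for VM lifecycle
# events. The keywords and sample lines are invented for illustration.
import re

VM_EVENT = re.compile(r"power(ed)?\s*(on|off)|migrat|vm\s*(created|removed)",
                      re.IGNORECASE)

def vm_events(lines):
    """Yield (line_number, line) for raw lines that look like VM lifecycle events."""
    for n, line in enumerate(lines, start=1):
        if VM_EVENT.search(line):
            yield n, line.rstrip()

sample = [
    "Aug 13 10:01:02 esx-03 vmkernel: vm web-frontend-02 powered on",
    "Aug 13 10:05:40 esx-03 sshd: accepted login for admin",
    "Aug 13 11:22:19 esx-03 hostd: vm payroll-db-01 migrated to esx-04",
]
for n, line in vm_events(sample):
    print(n, line)
```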

I have to admit that it hasn't been tested with their PCI auditors yet, but I'm pretty confident the auditors will accept what they did for the virtual environment, and that they will satisfy the requirements of PCI -- which calls for logging and monitoring -- as well as the other requirements in the compliance mandate.

At the same time, while they were building it for that use, their analysts were already trying to do searches and look for certain things that might be out of order in their VM environment. An operational use case spontaneously emerged, and now they have not only their own ideas of what to look for, but also our content to do it with.

Gardner: You bring up a point we shouldn't overlook. This isn't something you just build and walk away from. It requires ongoing refinement and tuning. The dynamic nature of virtualization, while perhaps automated in terms of allocating resources, is an overall process that needs to be managed in order for these business outcomes to be enjoyed.

Let's go back to Chris at Unisys. Tell us about the ongoing nature of virtualization. How do you keep on top of it? How do you keep it performing well, and perhaps even eke out more optimized utilization benefits?

Hoff: There's not a whole lot of difference from how you might ask the same question of non-virtualized infrastructure. It's not a monolithic, one-time event. As I alluded to in a previous answer, the next extension is evolution along the continuum, and that notion of real-time infrastructure takes in a lot of tasks.

Today, we are quite operationally inefficient at doing that, both in our practices and in our infrastructure utilization. The goal is making sure that our infrastructure -- the compute, the storage, and everything that goes into it -- becomes much more efficient in power, cost, utility, and flexibility.

When you unite all of those capabilities, what it's going to mean going forward is a much richer methodology and model for taking business process and instantiating it as an expression of policy within your infrastructure. You can say the things that are most important to your business are these processes and these services.

What you need to be able to do -- and ultimately what it means for automation and the efficiency problems -- is that the infrastructure needs to self-govern, self-provision, and re-provision. You need to be able to allocate cost back to your constituents, and it gets closer and closer to becoming a loose but federated group of services that can essentially play and interact in real time to service the needs of the business.

All the benefits we get out of virtualization today are just the beginning, a springboard for what we're going to see in terms of automation, which is great. But we run into the same problem set as we pogo along this continuum: trying really hard to unite this notion of governance with the recognition that just because you can, doesn't mean you should. In certain instances the business processes and policies might prescribe that you not do some things that would otherwise be harmful.

It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way IT operates -- benefits, and then differences.

Gardner: In the way that you were describing this continuum, it almost sounds like you were alluding to cloud computing, as it's being defined more and more -- and perhaps the “private cloud,” where people would be managing their internal enterprise IT resources from a cloud perspective. Am I overstating it?

Hoff: No, I don't think you're overstating it. I think that's a reasonable assertion and assumption based on what I am saying. The difficulty in using the "cloud" word is that it means a lot of things to lots of people. I think you brought up three definitions in your one sentence.

But the notion of being able to essentially utilize our resources pretty much anywhere, regardless of who owns the infrastructure, is something that's enticing and brings up a host of wonderful issues that make security people like me itchy.

If you read Nicholas Carr's book The Big Switch, and you think about utility or grid computing or whatever you want to call it -- the notion of being able to better utilize my resources, balance that with security, and be very agile -- it's fun times ahead. You're absolutely right. I was alluding to the C-word, yes.

Gardner: Okay. Charu at VMware, organizations are at different rates of adoption around virtualization -- some are just starting to test the waters -- but the end goal for some of these adopters could be this cloud-compute value, this fabric of IT value.

How are people getting started, and how should they get started in a way that sets them up for this longer-term payoff?

Chaubal: That's a very broad question. You can certainly go in and use virtualization to consolidate physical servers onto a smaller number of physical servers, and you get savings that way. But if that's the only approach you take, you might end up at a dead-end, or you might get off on a tangent somewhere.

What we find is that there is really a maturity curve when it comes to virtualization adoption, and one of the most important axes along that curve is, in a broad sense, your operational maturity.

When you're starting out, sure, go ahead and consolidate servers. That's a good way to get some quick wins, but you're rapidly going to come to a point where you need to start imposing operational discipline, and policies and procedures that perhaps you didn't have before.

Or perhaps you had them, but they weren't rigidly adhered to or weren't really followed all the time. The most important thing is to start thinking about this operational maturity, and then move on to things like standardizing your processes and standardizing the way things are configured.

For any kind of process, make sure it goes through the right steps in terms of getting approved. There's a whole methodology around that, and it's one of the things we spend a lot of time on with our customers.

We have a graph where, if you look at how many servers are virtualized over time, we'd like to see a steady, 45-degree upward angle to that curve. If somebody virtualizes too many too soon, you'll see that curve shoot up sharply, and then you stall, because you virtualized so much so quickly that all these other issues Chris alluded to come into play, and they might bog you down.

On the other hand, you could suffer the other extreme, where you virtualize so slowly that the curve is very shallow, and you end up leaving savings and benefits on the table, because you're picking them up so slowly.

Gardner: Missed opportunities, right?

Chaubal: Right, exactly. The most important thing, when you're starting out, is to keep in mind that you're not just installing a piece of software that will optimize what you already have. It's really a fundamental transformation in how you do things.
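
One hedged way to picture that adoption curve is to check the quarter-over-quarter slope of "percent of servers virtualized" against a healthy band, as in the sketch below. The data points and thresholds are invented purely to illustrate the too-fast/too-slow extremes Chaubal describes.

```python
# Invented illustration of the adoption-curve idea: compare the slope
# of "percent of servers virtualized" per quarter against a healthy
# band. All numbers are made up for the example.

adoption = [5, 20, 22, 23, 31]   # percent virtualized, by quarter (invented)

LOW, HIGH = 4, 12                # "healthy" percentage points per quarter

for q in range(1, len(adoption)):
    slope = adoption[q] - adoption[q - 1]
    if slope > HIGH:
        note = "too fast: operational maturity may lag"
    elif slope < LOW:
        note = "too slow: savings left on the table"
    else:
        note = "steady climb"
    print(f"Q{q}: +{slope} pts -- {note}")
```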

Gardner: Okay, let's take the last question to Anton at LogLogic. How do you recommend people get started, particularly in striking this balance: not wanting to miss opportunities, wanting to ramp up quickly and enjoy the benefits that virtualization provides, but doing it in such a way that they get visibility and analytics, and can set themselves up to be risk-resistant but also strategic in their outlook?

Chuvakin: I'll use the case I just presented to illustrate the way to do it. As has happened with other technologies before virtualization, people sometimes deploy it in a manner that really makes auditing and monitoring pretty hard, and then have to go back and figure out what the technology is doing in terms of transparency and visibility.

I suggest that, as people deploy VMware and other virtualization platforms, they instantly connect those platforms to their log-management tools, so that log collection starts on day one.

Admittedly, most organizations won't know what to do with those logs at first, but having them is an important first step. Even if you don't know how to analyze the logs, what they mean, or what they're trying to tell you, you still have that repository to fall back on.

If you have to investigate an incident or an operational issue in the environment, you have the ability to go back and say, "Oh, something of that sort already happened to me once. Let's see what else occurred at the same time."

Even if you don't yet have the skills to delve into the full scope of analyzing all these signals that the virtual infrastructure is sending, I would focus first on collecting the data and having it available for analysis. When you do that, your further steps, when you make sense of the data, will be much easier, much more transparent, and much more doable overall.

You will have to learn what the signals are -- what information is being emitted by your virtual infrastructure -- and then draw conclusions from it. But to analyze the information, draw conclusions, and figure out what's going on, you have to have the original data.

It's easier to collect the data early, because it's really not a big deal. You just send those logs to LogLogic or another log-management system, which is capable of doing that right away. Admittedly, you have to pick a system, such as LogLogic, that can support your virtualization infrastructure. Then you can build up your analysis, your understanding, and your true visibility -- the next layer of intelligence -- as you go. Don't try to do all the analysis right away, but start collecting on day one.
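
A bare-bones version of that day-one advice might look like the following sketch: a UDP listener that appends every syslog datagram, raw and unparsed, to a dated file, so the data exists before anyone knows how to analyze it. The port, paths, and absence of parsing are deliberate simplifications; a production collector such as LogLogic adds reliability, indexing, and analysis on top.

```python
# Minimal sketch of "collect logs from day one, analyze later":
# listen for syslog datagrams over UDP and append each one, raw and
# unparsed, to a file named for the current date. Port and directory
# are assumptions; production collectors add rotation and reliability.
import datetime
import socket

def collect(host="0.0.0.0", port=5514, out_dir="."):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))       # 5514 avoids needing root for port 514
    while True:
        data, _addr = sock.recvfrom(65535)
        day = datetime.date.today().isoformat()
        with open(f"{out_dir}/raw-{day}.log", "ab") as f:
            f.write(data.rstrip(b"\n") + b"\n")  # keep the line intact

if __name__ == "__main__":
    collect()
```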

Gardner: Right -- visibility early and often. I appreciate your input. We've been talking about virtualization: how to do it right, how to enjoy lower risk, and how to understand the security implications, while at the same time moving as aggressively as you can, because there are significant economic benefits.

Helping us understand virtualization in this context, we have been joined by Charu Chaubal, senior architect in technical marketing at VMware. Thank you, sir.

Chaubal: Thank you.

Gardner: Also Chris Hoff, chief security analyst at Unisys. I really appreciate your input, Chris.

Hoff: Thanks, very much.

Gardner: And also, Dr. Anton Chuvakin, chief logging evangelist and also a security expert at LogLogic. Thank you, sir.

Chuvakin: Thank you so much for inviting me.

Gardner: I would like to thank our sponsor for this podcast, LogLogic. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Transcript of BriefingsDirect podcast on the management and security challenges of virtualization. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Wednesday, August 13, 2008

Borland's Own ‘Journey' to Agile Development Forms Real-World Foundation for New Software Delivery Management Solutions

Transcript of BriefingsDirect podcast on Agile Development principles and practices with Borland Software.

Listen to the podcast. Sponsor: Borland Software.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion about Agile software development.

We're going to be talking to a software executive from Borland Software about Borland's own Agile "journey." They deployed Agile practices, enjoyed benefits from doing so, and gathered many lessons learned as they built out their latest application lifecycle management (ALM) products. [See product and solution rundowns.]

We're going to talk with Pete Morowski, the senior vice president of research and development (R&D) at Borland Software. Welcome to the show, Pete.

Peter Morowski: Thank you, Dana. It's good to be here.

Gardner: Before you get into Borland Software's journey, I want to get a level-set about Agile Development practices in general. Why is Agile development a good idea now? What is it about the atmosphere in the evolution of development that makes this timely?

Morowski: From the standpoint of software development, it's a realization that development is an empirical process, a process of discovery. Look at the late delivery cycles that traditional waterfall methodologies have brought upon us -- products delivered late, or ending up on the shelf. The principles behind Agile allow teams to deliver on a much more frequent cycle, and to deliver more focused releases.

Gardner: There are also, I suppose, technical and business drivers: better quality, faster turnaround, more complexity, and, of course, distributed teams. What is it about the combination? Why is this important now in terms of some of these other technical business and even economic imperatives?

Morowski: With the advent of Web applications, businesses expect a quicker turnaround time. In addition, when you look at cost structures, the time spent on features that never get used, among other things, is a critical business inhibitor at this point.

Gardner: Let's help out some folks who might not be that familiar with Agile and its associated process called Scrum. Tell us a little bit, from an elevator-pitch perspective: What is Agile, and what is Scrum?

Morowski: Agile really is a set of principles -- things like self-directed teams, using working code as the measure of progress, and looking at software development in terms of iterations. What we mean by that is that in traditional software development, we talk about design, code, and testing as actual phases in the development lifecycle. Within Agile, these are just activities that occur in each iteration.

Now, when you talk about Scrum, that is more of a process and a methodology. This is actually taking those Agile principles and then being more prescriptive on how to apply them to a software-development cycle.

In the case of Scrum, it's based on a concept called a sprint, which is a two-to-four-week iteration that the team plans for and then executes. Whatever the team gets done in that window is considered completed during that sprint, and whatever work hasn't been completed goes into what they call the "product backlog" for prioritization into the next sprint. You chain several iterations together for a release.

The beauty of this is that you now have a way to introduce change at the borders of those iterations. One of the things that's really advantageous about Agile is its ability to adapt to changing requirements.
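
For readers new to Scrum, a tiny data model can make those mechanics concrete. The sketch below is a simplification invented for this transcript, not Borland's tooling: a prioritized backlog feeds a fixed-capacity sprint, and unfinished work returns to the backlog for reprioritization.

```python
# Simplified, invented sketch of the Scrum mechanics described above:
# a prioritized backlog feeds a fixed-capacity sprint, and whatever is
# unfinished at the end of the sprint goes back into the backlog.

class Backlog:
    def __init__(self):
        self.items = []  # (priority, description); lower number = higher priority

    def add(self, priority, description):
        self.items.append((priority, description))
        self.items.sort()

    def pull(self, capacity):
        """Take the top-priority items into the next sprint."""
        sprint, self.items = self.items[:capacity], self.items[capacity:]
        return sprint

backlog = Backlog()
backlog.add(2, "Improve report layout")
backlog.add(1, "Fix login defect")
backlog.add(3, "Prototype new dashboard")

sprint = backlog.pull(capacity=2)   # the two-to-four-week iteration
unfinished = [sprint.pop()]         # say one item wasn't completed
for priority, desc in unfinished:
    backlog.add(priority, desc)     # back to the backlog for re-ranking
```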

Gardner: When I try to explain Agile to people, some of them come away thinking it's an oxymoron, or conflicted, because they say, "Okay, your goal is to do things better and faster, but you're telling people to use fewer rules, use less structure, and have your teams be self-selecting." People see a conflict here. Why isn't that a conflict?

Morowski: I think it's a misconception that self-directed teams and that type of thing mean we can do whatever we want. What it's really about is that teams begin to take ownership of delivering the product. By allowing these teams to become self-directed, they own the schedule for delivery.

What happens is that you see the traditional roles break down. Team members look at what work needs to be finished in a sprint, versus "Well, I am a developer. I don't do testing," or "I am a doc writer, and I can't contribute on requirements," and those types of things. It really builds a team, which makes for a much more efficient use of resources and processes, and you end up with better results than you do with a traditional methodology.

Gardner: It almost sounds like we're using market forces, whereby entrepreneurs or small startups tend to be more energized and focused than teams within a larger, centralized organization. Is that a fair characterization?

Morowski: Yeah, I think it is very fair.

Gardner: And, given that we're looking for this empirical learn-as-you-go, do what's right for you, I suppose that also means that one size does not fit all. So, Agile would probably look very different from organization to organization.

Morowski: It could. One thing we chose to do, though, was to set a benchmark process. When Borland first started developing with Agile, we had multiple locations, and each site was, in essence, developing its own culture around Agile. What I found was that we were getting into discussions about whose Agile was more pure, and things like that, so I decided to develop a single Borland Agile culture. [See case study on Borland and Agile.]

We broke that up on a geographic basis. We started with one site and one "ScrumMaster," and we built what we call the reference process. As we've grown and our projects have become more complex, the fact that every site evolved from the same process and the same terminology has allowed us to adopt more complex Agile techniques, like Scrum of Scrums, and to work across organizations with a common vocabulary and a common way of working.

Gardner: It also sounds like you are taking the best of what a centralized approach offers and the best of what a decentralized approach offers, in terms of incentive; take charge, and local ownership, and then making them co-exist.

Morowski: That's correct.

Gardner: All right, let's get specifically into Borland's situation. What is it about the way Borland has been developing software -- which is of course a core competency for a large independent software vendor (ISV) like yourselves, and has been for 15-plus years … How difficult was it for you to come into this established organization and shake things up?

Morowski: Initially, it wasn't an issue, because, like most organizations, when we looked at it, there were a couple of grassroots efforts already underway. From an Agile perspective, one of the things we did was to leverage that activity, and the successes it had, as a benchmark for other teams. As we grew and moved into organizations where there were no grassroots efforts, there were some challenges.

Gardner: So it might be quite possible that a lot of organizations that do development have people who are Agile-minded, and perhaps even followers of Agile doing this already. Perhaps they should look for those people and start there.

Morowski: I would recommend that you start with your grassroots efforts, establish your benchmark process, and then begin to move out from there.

One thing we clearly did was, once we saw the benefits of doing this, to put a lot of executive sponsorship behind it. I made it one of the goals for the year to expand our use of Agile within the organization, so that teams knew it was safe to go ahead and begin to look at it. In addition, because we had a reference implementation, teams had a starting point for their experimentation. We also paid for our teams to undergo training and those types of things. We created an environment that encouraged transformation.

Gardner: Let's learn a little bit more about you, Pete. Tell us a little bit about your background and how you came into development and then into Agile?

Morowski: I've been in this business a little over 25 years now. I started in the defense and aerospace industries and then moved into commercial ISVs later in my career. I've been an executive at Novell. I've also been a CTO at IBM Tivoli, and prior to Borland, was the vice president of software at Dell.

Gardner: You've taken on this Agile project at Borland, and you've written a paper on the "Borland Agile Journey." I've had the pleasure of reading it. I think it's a really nice read, and I commend you for it.

Morowski: Oh, thank you.

Gardner: Tell us about this particular product set [Borland Software Delivery Management information] that Borland is coming out with. It's a product set about helping people develop software. Is there a commonality between some of the lessons you learned and what you actually arrived at in terms of requirements for your products? [See demo and see launch video.]

Morowski: Oh, absolutely. One of the interesting things about the products we're delivering is that one of them is a product for managing Agile development, especially across distributed teams, and for managing requirements. So we had the advantage of actually using the tools as we were developing them.

Now, we were also very cautious, because you can get myopic about that type of thing. We were also using Agile principles, and we involved our customers in the process as well. So we were getting the best of both worlds.

Gardner: What makes software development different? In reading your paper, I was thinking about how these principles of self-empowerment, working quickly, and setting boundaries -- "Okay, we're going to just work and do this for three weeks, and then we'll revisit any changes" -- might apply to almost any creative activity where a team is involved.

Is Agile something you think applies to any creative activity, a complex team-based activity, or is there something about it that really is specific and germane to software development?

Morowski: If you look at Agile principles conceptually, they do apply to a lot of things. For anything in which you're going into a period of discovery, one of the key things is knowing what your goal or mission is. In the case of software, that's the requirements, and what you want the product to be.

But in any kind of empirically based endeavor, this is something you could apply. Now, when you get down to the actual Scrum process itself, the terminology, the measures, the metrics, and all those types of things are really tailored for software development.

Gardner: When I read your paper, I also came away with some interesting observations. You say there is a difference between how development is supposed to work and how it actually works. It sounds like many companies are living in denial, or with a certain level of dysfunction that they're not necessarily facing.

Morowski: It's one of the issues with laying a manufacturing process over something that's inherently an empirical process. In the end, all software R&D organizations or IT shops responsible for applications are responsible to the business for delivering results. And, in doing so, we all try to measure those things.

What I have observed over my career was the fact that there really existed two worlds. There is what I call the "management plane," and this is a plane of milestones, of phase transitions and a very orderly process and progress through the software development lifecycle.

Underneath it, though, in reality, is a world of chaos -- a world of rework, a world of discovery, in which the engineers, testers, and frontline managers live. We traditionally use Gantt charts, a task-based measure, which requires a translation from the implementation world to the management world to show indications of progress. Any time you do a translation, there can be a loss of information, and that's why software today is such an experience-based endeavor.

Gardner: And it's often been perceived as a sort of dark art. People don't appreciate or understand how it's done, and those who do it tend to say, "Hey, leave me alone. Get away from me. I'll come back with the results in three months."

Morowski: Exactly.

Gardner: But that doesn't necessarily or hasn't historically been the best approach.

Morowski: Absolutely not.

Gardner: Also, at times, you see people downplay process and say that good hiring is probably the biggest issue here. What's the relationship between hiring -- what people, not always affectionately, refer to as human resources -- and Agile?

Morowski: Well, first of all, getting back to the hiring thing a little: hiring is important regardless of what methodology you use, and I tend to stress that. I do contend there are different kinds of personalities and skill sets you'll be looking for when you're building Agile teams, and it's important to highlight those.

It's very important that whoever comes on board an Agile team is collaborative in nature. Coming from traditional software environments, there are two roles you may struggle with, and you have to look at them closely. One is the manager. If a manager is the micromanager type, that's not going to work in an Agile environment.

The other, interestingly enough, is the chief architect role. What's interesting is that you would think it would fit into Agile very easily, but in a lot of traditional software organizations, all technical decisions on a project go through the chief architect. In an Agile world, it's much more collaborative, and everybody contributes. For some personalities, that's a difficult change.

Gardner: So there is that grassroots element, and you have to be open to it.

Morowski: Right.

Gardner: What is it about the structures here? Again, for folks who might not be that familiar with Agile, tell us a little bit about some of the hierarchy.

Morowski: There are really two key roles. There is the ScrumMaster, who runs what they call the daily stand-up. This is a meeting where everybody on the team gets together on a daily basis and answers three questions: "What did I get accomplished yesterday?" "What am I going to do today?" And "What's blocking me?"

Everybody goes around the room. It's a 15-minute meeting. You solve any particular problems you can, but you log things. The role of the ScrumMaster is to run that meeting and to remove blocks for the team, and it's a very key role.

The second major role within Scrum is the product owner. This is the individual responsible for prioritizing the requirements, or what we call the product backlog -- what is going to be done during the sprint, and which features are going to be completed. Those are the two primary roles, and from there everybody else is pretty much a team member.
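
Those three questions map naturally onto a record like the one sketched below -- an invented illustration, not a Borland artifact -- where each member's answers are captured and blockers are surfaced for the ScrumMaster to remove.

```python
# Invented illustration: a daily stand-up entry capturing the three
# questions each team member answers, with blockers collected for the
# ScrumMaster to act on.
from dataclasses import dataclass, field

@dataclass
class StandupEntry:
    member: str
    did_yesterday: str
    doing_today: str
    blockers: list = field(default_factory=list)

entries = [
    StandupEntry("dev-a", "Finished search API", "Write API tests"),
    StandupEntry("dev-b", "Debugged installer", "Pair on docs",
                 blockers=["Waiting on build-server access"]),
]

# The ScrumMaster's to-do list: every blocker raised in the meeting.
for e in entries:
    for b in e.blockers:
        print(f"BLOCKER ({e.member}): {b}")
```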

Gardner: When you decided to bring this into play at Borland, a very large, distributed organization, you didn't try to bite off too much. You didn't say, "We are going to transform the entire company and organization." You did this on more of an iterative basis. It seems that most people, when they do Agile, will probably follow a similar path. They'll start on a project basis and then say, "Now we need to expand this and make it holistic."

Many organizations, however, across all kinds of different management activities, can stumble at that transition from the project, or the tactical, into the holistic, or general, across an organization. What did you learn in making this transition from small to large scale at Borland?

Morowski: A couple of things. One is that, as we rolled it out site by site, we grew from team to team. The ScrumMasters worked very collaboratively to help each other out, because, in the end, they were responsible for delivering at the end of those sprints. That was a very positive effect.

As we moved out to distributed teams, there were a number of challenges -- things like running the daily stand-up when I have people in Singapore supporting a particular sprint, say, from the system-testing standpoint. That made things difficult. But what I found is that the teams were pretty creative in involving those individuals -- whether they recorded the stand-ups or shifted time zones -- and they did this all on their own.

That was an absolute positive, and one of the things that surprised me. It was an interesting discovery.

As we started to interact more broadly with the non-Agile parts of the organization, there was a bit more of a challenge, and I learned a couple of things. In doing any kind of outsourcing, if you try to match a traditional, contract-based -- statement-of-work (SOW)-type -- outsourcer with an Agile team, that's going to present problems. The outsourcer expects very detailed specifications as a statement of work, and that's just not produced during an Agile or sprint/Scrum type of development activity.

The other thing is internal, at both the beginning and the end of the pipe: working with marketing and our new-product-introduction processes, and with support and getting sales out. One of the things we found is that we developed the capacity to release more often, but the organization as a whole had to adjust: A) to provide market requirements to us in a different manner, and B) to change our process at the end so it could accept more rapid releases.

Gardner: So, to get the most out of Agile, it sounds like, for organizations where software development is a core competency -- important to their success as a company, a government organization, or a public not-for-profit -- the edges of Agile start to blend into other departments. The whole organization can perhaps borrow some of these principles from development and extend them across the entire lifecycle.

Morowski: Yes, we no longer look at it as strictly an R&D thing, just because of that. And it's interesting: you know you're making progress, from a development-team perspective, when you start to output more than the organization can accept.

Gardner: Interesting. So, adjustments along the way, and that's again a principle of the approach.

All right. In this age of Agile, and your Agile journey, you came away with three basic observations about the benefits: one around self-directed teams; a second around being able to manage change well; and a third about the relationship with the customer -- in this case, the folks who are interested in getting the software. Tell us about these three benefits and what you've learned.

Morowski: Well, we touched on the self-directed teams. The key to that -- and one of the most important things as an executive -- is that you really have to take the lead in letting your teams go and develop; let them truly own their projects. There will be mistakes along the way, but once they do own them, it's an extremely powerful concept.

One of the great things about Agile is that it's a very open and very visible methodology. I can attend any daily stand-up and sit there and listen to what's going on. I can't contribute in those meetings, because they're run by the ScrumMaster. But one time when I was attending a daily stand-up, I realized the team had progressed a great deal.

They were looking at the remaining work backlog for that particular sprint, and there were a couple of tests that needed to be run that nobody was assigned to. One of the developers had time, looked at them, and picked them up.

Now, normally, that would never happen, because we behave in a siloed fashion. "I am an engineer." "I am a tester." It's an "I am a …" type of thing. But when you really have a self-directed team, the team owns the schedule and is very interested in making sure it meets its commitments.

Gardner: I suppose that also fosters a willingness in people to move in and out of roles, without just saying, "Well, that's not my job …" -- taking more responsibility as a group, and even as individuals.

Morowski: Absolutely correct, and that to me has been one of the more powerful things that I have personally observed.

Gardner: Change management has often been something that drives developers crazy. They hate when people come in and start changing requirements when they are in the middle of doing code or test. On the other hand, things don't stay the same, and change is part of everything in life and business, perhaps more so today than ever. How do you reconcile those two?

Morowski: Well, the reality is that there is going to be change during these development cycles, so the question is: what's the best way to handle it? In a traditional waterfall methodology, you march along phase transitions. Even if you have iteration in place, if you discover a design or coding defect late in the game, you have to go backward to a different phase and start redoing the design or fixing the code. Then you repeat the process and continue to move along your phase-transition line.

The interesting thing is that with Agile you have an orderly way of injecting change. In other words, as a sprint completes and you've demonstrated the code -- and you demonstrate it after that three-week iteration -- if something has changed and you need to change the prioritization, you have a way to inject that change along that boundary and then let the team go forward. That's why I always like to say, "We're always going forward in Agile."

Gardner: And how do the teams adjust to that?

Morowski: It's part of the process. The changes go into the backlog. The product owner looks at them and prioritizes them based on the complexity of the work, the timing, and just how important each change is. If a change is important enough, it goes into the next iteration. The teams are used to that, because you are not, in essence, disrupting them at a random point. They have already finished the work they were working on, and now there is a cleaner opportunity to inject that change.
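
One hedged way to picture that boundary-time reprioritization is a scoring pass over the backlog, as sketched below. The value-versus-complexity formula is invented for illustration; a real product owner weighs far more than two numbers.

```python
# Invented sketch of boundary-time reprioritization: the product owner
# re-ranks backlog items by business value against complexity before
# the next sprint starts. The scoring formula is illustrative only.

def rank_backlog(items):
    """items: dicts with 'name', 'value' (1-10), and 'complexity' (1-10)."""
    return sorted(items, key=lambda i: i["value"] / i["complexity"],
                  reverse=True)

backlog = [
    {"name": "New change request", "value": 9, "complexity": 3},
    {"name": "Refactor module",    "value": 4, "complexity": 6},
    {"name": "Minor UI polish",    "value": 3, "complexity": 1},
]

for item in rank_backlog(backlog):
    print(item["name"])
```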

Gardner: So the boundaries allow those who want change to get it done without waiting a particularly long time, or until the project is done. And for those working on the project, they have these sections where things won't become chaotic, and they won't lose track of their overall process because of the injection of change.

Morowski: No, as a matter of fact, the process encourages it.

Gardner: How about this thing you call customer relationships? It sounds to me as though it's just about being transparent.

Morowski: It is. It's a different approach, in the sense that you're actually bringing the customer in as what I would call a partner in the development. They participate in sprint reviews, held at the end of each sprint, where you show the working code -- what you've completed. Those happen every three weeks, and we involve our customers.

They also take early drops of the code and provide input into the product backlog on requests they want, and things like that. It's proven very beneficial for us. The one thing is that, when you choose customers to participate, it's important for them to be Agile as well, and to understand that they need to approach this as a partnership, not just an opportunity to get their particular features or requirements in.

Gardner: And, that must also help keep expectations in line, right?

Morowski: Absolutely. What I've found is that the customers we've involved get used to our cycles and our delivery rhythm. They're less adamant about getting every feature on their list into a particular release, because they know it's a relatively short time before the next one comes around.

Gardner: When we describe these customers, would that, in many organizations, include bringing in the marketing people and the salespeople? Can they get involved so that this becomes something that enters the market as an Agile activity, rather than having Agile happen on the development side and then falling back into a waterfall mentality when it comes to the go-to-market activities?

Morowski: Yes, we do, and the transparency that's there actually helps build confidence in the rest of the organization on what we are delivering, because they see it as we progress along. It's not something that mysteriously shows up on their doorstep.

Gardner: It certainly sounds great in theory, and it sounds like you've been able to accomplish quite a bit in practice, but what about metrics of success? How have you been able to say, "It works"? Has Borland cut its costs or its time to delivery? Does it have better products? All of the above? How do we know we're succeeding?

Morowski: I'd say it's a combination of all of the above. The first thing is that these teams are much smaller than in traditional organizations. If you look at it, my teams are almost 30 percent smaller on the Agile side than on the traditional side.

Gardner: And what's accounting for that change?

Morowski: I think, one, it's the ownership the teams take, and, two, the breakdown of very specific roles.

Gardner: Would I be going out on a limb in saying you have eliminated the middle management factor?

Morowski: There is absolutely that as well. The other thing is the fact that we're delivering working code and involving customers, so we're developing fewer superfluous features. When a product goes out the door, it generally has the most important features intended for that release. It really helps the prioritization process.

Gardner: Not too many cooks in the kitchen?

Morowski: Exactly.

Gardner: Cool! Tell us a little bit about what surprised you the most about this Agile journey of Borland.

Morowski: I think the power of the daily stand-up. Yes, we got a lot of benefits; yes, we had a number of successes; we were able to transition code between locations, and things like that. But I owe a lot of that to the daily stand-up. The thing that surprised me is how powerful it is when everybody gets around the table each morning and actually goes through what they've done, basically saying, "Have I lived up to my commitments? What am I committing to the team today? And is there anything blocking me?"

Generally speaking, a lot of developers tend to be quiet and not the most social. What the stand-up did was make sure it wasn't just the few social people having input on what was going on. It had everybody on the team contributing, and it really changed the relationships on the team. It was just a very, very powerful thing.

Gardner: It sounds like balance among personality types, but balance directed toward the real activity, which is developing code.

Morowski: Absolutely.

Gardner: Interesting! Well, congratulations. I enjoyed reading your paper, and this certainly sounds like the future of development -- I know that's what many people in the business think. We've been talking about Agile development practices and principles, and how Borland Software has undertaken an Agile journey of its own while developing its development-process tools and application lifecycle management products.

Back to those products. Is there anything about the synergy between doing it this way and then presenting products into the field that you think will help other people engage with Agile benefits?

Morowski: Are you talking about the products themselves?

Gardner: Yes.

Morowski: The products themselves, absolutely. We have a product coming out called Team Analytics. The key to it is that, while we've talked about self-directed teams, we still have a responsibility to report to the business on how we're progressing.

Team Analytics gives us a view into the process -- the ability to look at how the team is progressing, what features have been included or dropped, and those types of things, without having to go to the team and request that information. That's a very powerful thing.

Gardner: Right. So, it's one thing to agree that visibility and transparency are good, but it's another to actually accomplish it in terms of complexity in large teams and hierarchy.

Morowski: Absolutely. This allows us to move from what I call a "reported" to a "monitored" methodology of metrics. What I mean is that, typically, at the senior vice president or vice president level, you really only get to look at the state of your products once a month, in the sense that you have operations reviews, or some kind of review cycle, where all your teams come in and report on what's going on.

With Team Analytics, you can actually look at that on a daily basis and see if anything has changed over time. That way, you know where you need to spend your time, and that's why we call it monitored.
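
That monitored view boils down to snapshotting progress daily instead of monthly. The sketch below computes a trivial daily change from remaining-work counts; the numbers are invented, and Team Analytics itself is of course far more capable.

```python
# Invented sketch of a "monitored" metric: a daily burndown computed
# from remaining-work snapshots, so an executive can spot a stalled
# sprint without waiting for a monthly operations review.

remaining_by_day = [40, 36, 33, 33, 33, 28, 22]  # invented task counts

def daily_change(snapshots):
    """Yield (day_index, change_since_yesterday) for each day."""
    for day in range(1, len(snapshots)):
        yield day, snapshots[day] - snapshots[day - 1]

for day, delta in daily_change(remaining_by_day):
    flag = "  <-- no progress" if delta == 0 else ""
    print(f"day {day}: {delta:+d} tasks{flag}")
```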

Gardner: Super! Well, thank you for sharing your insights. I think there is a lot to be taken away here and learned.

We have been talking with Pete Morowski, the senior vice president of research and development for Borland Software. We were looking at Agile principles in the context of Borland's Agile journey.

Thanks, Pete.

Morowski: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions, and you’ve been listening to a sponsored BriefingsDirect podcast.

Thanks for joining us and come back next time.

Listen to the podcast. Sponsor: Borland Software.

Transcript of BriefingsDirect podcast on Agile development principles with Borland Software. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.