Monday, August 31, 2009

Cloud Adoption: Security is Key as Enterprises Contemplate Moves to Cloud Computing Models

Transcript of a sponsored BriefingsDirect podcast on the state of security in cloud computing and what companies need to do to overcome fear, reduce risk and still enjoy new-found productivity.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on caution, overcoming fear, and the need for risk reduction on the road to successful cloud computing.

In order to ramp up cloud-computing use and practices, a number of potential security pitfalls need to be identified and mastered. Security, in general, takes on a different emphasis, as services are mixed and matched and come from a variety of internal and external sources.

So, will applying conventional security approaches and best practices be enough for low risk, high-reward cloud computing adoption? Is there such a significant cost and productivity benefit to cloud computing that being late or being unable to manage the risk means being overtaken by competitors that can do cloud successfully? More importantly, how do companies know whether they are prepared to begin adopting cloud practices without undue risks?

To help us better understand the perils and promises of adopting cloud approaches securely, we're joined by three security experts from Hewlett-Packard (HP). Please join me in welcoming Archie Reed, HP Distinguished Technologist and Chief Technologist for Cloud Security. Welcome, Archie.

Archie Reed: Hello, Dana. Thanks.

Gardner: We're also joined by Tim Van Ash, director of software-as-a-service (SaaS) products at HP Software and Solutions. Welcome, Tim.

Tim Van Ash: Good morning, Dana.

Gardner: Also, David Spinks, security support expert at HP IT Outsourcing. Welcome, David.

David Spinks: Good morning.

Gardner: Of course, any discussion nowadays that involves cloud computing really deserves a definition. It's a very amorphous subject these days. We're talking about cloud computing in terms of security and HP. How do you put a box around this? What are the boundaries?

Van Ash: It's a great question, Dana, because anything associated with the Internet today tends to be described as cloud in an interchangeable way. There's huge confusion in the marketplace, in general, as to what cloud computing is, what benefits it represents, and how to unlock those benefits.

Over the last two years, we've really seen three key categories of services emerge that we would define as cloud services. The first one is infrastructure as a service (IaaS). Amazon's EC2 or S3 services are probably some of the best known. They're there to provide an infrastructure utility that you can access across the Internet, and run your applications or store your data in the cloud, and do it on a utility-based model. So, it's a pay-per-use type model.

If we look at platform as a service (PaaS), this is an area that is still emerging. It's all about building applications in the cloud and providing those application-development platforms in the cloud that are multi-tenant and designed to support multiple customers on the same platform, delivering cost efficiencies around development, but also reducing the amount of development required. Many of the traditional tiers from data persistency and other things are already taken care of by the platform.

The last area, which is actually the most mature area, which started to emerge about 10 years ago, is SaaS. Great examples of this are Salesforce.com, HP's partner NetSuite, and, obviously, HP's own Software-as-a-Service Group, which delivers IT management as a service.

Gardner: When we're talking about applying security to these definitions, are we talking about something very specific in terms of crossing the wire? Are we talking about best practices? Are we talking about taking a different approach in terms of a holistic and methodological understanding of security vis-à-vis a variety of different sources? Help us better understand what we mean when we apply security to cloud.

Different characteristics

Van Ash: Once again, it's a great question, because you see very different characteristics, depending on the category of the service. If it's IaaS, where it's really a compute fabric being provided to you, you're responsible for the security from the operating system, all the way out.

You're responsible for your network security, the basic operating system security, application security, and the data security. All of those aspects are within your domain and your control, and there really is a large difference between the responsibility of the consumer and the responsibility of the provider. The provider is really committing to providing a compute fabric, but they're not committing, for the most part, to provide security, although there are IaaS offerings emerging today that do wrap aspects of security in there.

For PaaS, the data persistency and all those elements, for the most part, are black box. You don't see that, but you're still responsible for the application-level security, and ensuring that you're not building vulnerabilities in your code that would allow things like SQL injection attacks to actually mine the data from the back-end. You see more responsibility put on the provider in that environment, but all the classic application security vulnerabilities, very much lie in the hands of the consumer or the customer who is building applications on the cloud platform.
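Van Ash's point about SQL injection on a PaaS platform comes down to how queries are built in the application code the customer writes. As a minimal illustration only (using Python's standard sqlite3 module and an invented "accounts" table, not any particular cloud platform's API), a string-built query lets attacker-supplied input rewrite the query, while a parameterized query keeps data and code separate:

```python
import sqlite3

# Throwaway in-memory database standing in for a cloud back-end data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 250.0)")

def lookup_unsafe(owner):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text.
    query = "SELECT balance FROM accounts WHERE owner = '%s'" % owner
    return conn.execute(query).fetchall()

def lookup_safe(owner):
    # Parameterized: the driver treats 'owner' strictly as data, never as SQL.
    return conn.execute(
        "SELECT balance FROM accounts WHERE owner = ?", (owner,)
    ).fetchall()

# A classic injection payload turns the WHERE clause into a tautology,
# mining every row from the back-end through the unsafe version...
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # every account's balance leaks
# ...while the parameterized version simply matches no rows.
print(lookup_safe(payload))    # []
```

This is the class of application-level vulnerability that stays in the customer's hands even when the platform itself is secure.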

With SaaS, more of the responsibility lies with the provider, because SaaS is really delivering capabilities or business processes from the cloud. But, there are a number of areas that you're still responsible for, such as user management: ensuring that proper security models are in place, and that you're managing the entry and exit of users, as they may enter a business or leave a business.

You're responsible for all the integration points that could introduce security vulnerabilities, and you're also responsible for the actual testing of those business processes to ensure that the configurations that you're using don't introduce potential vulnerabilities as well.

Gardner: Archie Reed, it sounds as if there is a bigger task here. We have to evaluate whether the provider has instituted sufficient security on their end. We have to be concerned about what we do internally. It sounds like there is a larger security wall to deal with here. Is that the case when we look at cloud?

Reed: Absolutely. One of the key things here is, if you take the traditional IT department perspective of whether it's appropriate and valuable to use the cloud, and then you take the cloud security's perspective -- which is, "Are we trusting our provider as much as we need to? Are they able to provide within the scope of whatever service they're providing enough security?" -- then we start to see the comparisons between what a traditional IT department puts in play and what the provider offers.

For a small company, you generally find that the service providers who offer cloud services can generally offer -- not always, but generally -- a much more secure platform for small companies, because they staff up on IT security and they staff up on being able to respond to the customer requirements. They also stay ahead, because they see the trends on a much broader scale than a single company. So there are huge benefits for a small company.

But, if you're a large company, where you've got a very large IT department and a very large security practice inside, then you start to think about whether you can enforce firewalls and get down into very specific security implementations that perhaps the provider, the cloud provider, isn't able to do or won't be able to do, because of the model that they've chosen.

That's part of the decision process as to whether it's appropriate to put things into the cloud. Can the provider meet the level of security that you're expecting from them?

Suitable for cloud?

The flip side of that is from the business side. Are you able to define whether the service value that's being provided is appropriate, and is the data going into the cloud suitable for that cloud service?

By that, I mean, have we classified our data that is going to be used in this cloud service regardless of whether it's sitting in a PaaS or SaaS? Is it adequately protected when it goes into the cloud, such that we can meet our compliance objectives, our governance, and the risk objectives? That ultimately is the crux of the decision about whether the cloud is secure enough.

Gardner: Let's go to David Spinks. It sounds as if we almost fundamentally need to rethink security, because we have these different abstractions now of sourcing. We have to look at access and management control, what should be permeable and perhaps governed at a policy level across the boundaries.

I suppose there are also going to be issues around dynamic shifting, when processes and suppliers change or you want to move from a certain cloud provider to another over time. Do you think it's fair that we have to take on something as dramatic as rethinking security?

Spinks: That's absolutely right. We've just been reviewing a large energy client's policies and procedures. While those policies, procedures, and controls that they apply on their own systems are relevant to their own systems, as you move out into an outsourcing model, where we're managing their technology for them, there are some changes required in the policies and procedures. When you get to a cloud services model, some of those policies, procedures, and controls need to change quite radically.

Areas such as audit compliance, security assurance, forensic investigations, the whole concept of service-level agreements (SLAs) in terms of specifying how long things take have to change. Companies have to understand that they're buying a very standard service with standard terms and conditions.

Before they were saying, "Our systems have to comply with this policy, and you have to roll out patches." In a cloud services environment, those requirements no longer apply. They have very standard terms and conditions imposed on them by the cloud providers.

Gardner: So, while we need to think out how we approach cloud, particularly when we want a high level of security and a low level of risk, the rewards for doing this correctly can be rather substantial.

Tim Van Ash, what are the balances here? Who is in the role of doing the cost-benefit analysis that can justify moving to the cloud, and therefore recognize the proper degree of security required?

Pressure to adopt

Van Ash: It's a very interesting question, because it talks to where the pressures to the adoption of cloud are really coming from. Obviously, the current economic environment is putting a lot of pressure on budgets, and people are looking at ways in which they can continue to move their projects forward on investments that are substantially reduced from what they were previously doing.

But, the other reason that people are looking at it is just agility, and both these aspects -- cost and agility -- are being driven by the business. Going back to the earlier point, these two factors coming from the business are forcing IT to rethink how they look at security and how they approach security when it comes to cloud, because you're now in a position where much of your intellectual property, and your physical data and information assets, are no longer within your direct control.

So, what are the capabilities that you need to mature in terms of the governance, visibility, and audit controls that we were talking about? How do you ramp those up? How do you assess partners in those situations to be able to sit down and say that you can actually put trust into the cloud, so that you've got confidence that the assets you're putting in the cloud are safeguarded, and that you're not potentially threatening the overall organization to achieve quick wins?

The challenge is that the quick wins that the business is driving for could put the business at much longer-term risk, until we work out how to evolve our security practices across the board.

Gardner: We've been dealing with security issues for many years. Most people have been doing wide area networking and using the Internet for decades. Archie Reed, are the current technologies sufficient? Is the conventional approach to security all right? Or, do we need to recognize that we, one, either need new types of technologies, or, two, primarily need to look at this from a process, people, and methodology perspective?

Reed: That's a long question. Tying into that question, and what Tim was just alluding to, most customers identify cost and speed to market as being the primary drivers for going or looking at cloud solutions.

Just to clarify one other point, in this discussion so far, we've been primarily talking about cloud providers as being external to the company. We haven't specifically looked at whether IT inside a large organization may be a cloud provider themselves to the organization and partners.

So, sticking with that model, alongside the cost and speed to market, when customers are asked what their biggest concerns are, security is far and away the number one concern when they think about cloud services.

The challenge is that security, as a term, is arguably a very broad, all-encompassing thing that we need to consider. When we start to look at what the cloud providers offer in terms of security, and whether our traditional security approaches are going to meet the need, we find a lot of flaws.

What we need to do is take some of that traditional security-analysis approach, which ultimately we describe as just a basic risk analysis. We need to identify the value of this data -- what are the implications if it gets out and what's the value of the service -- and come back with a very simple risk equation that says, "Okay, this makes sense to go outside."

If it goes outside, are the processes in place to say who can have access to this system, who can perform actions on the service that's providing access to that data, and so on.
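Reed's "very simple risk equation" -- the value of the data, the implications if it gets out, and the value of the service -- can be made concrete with a toy scoring model. The weightings, scales, and threshold below are invented purely for illustration; they are not an HP methodology:

```python
from dataclasses import dataclass

@dataclass
class CloudCandidate:
    name: str
    data_value: int     # 1 (public) .. 5 (trade secrets), per your classification
    breach_impact: int  # 1 (nuisance) .. 5 (regulatory/financial disaster)
    service_value: int  # 1 (marginal) .. 5 (major cost or agility win)

def cloud_suitable(c: CloudCandidate, risk_budget: int = 10) -> bool:
    """Toy risk equation: exposure (data value x breach impact) must stay
    within a budget that grows with the value the cloud service delivers."""
    exposure = c.data_value * c.breach_impact
    return exposure <= risk_budget + 2 * c.service_value

# A public image archive: low-value data, big delivery-cost savings.
images = CloudCandidate("property images", data_value=1,
                        breach_impact=1, service_value=5)
# Pre-release financials: high value, catastrophic if leaked.
financials = CloudCandidate("annual accounts", data_value=5,
                            breach_impact=5, service_value=2)

print(cloud_suitable(images))      # True: makes sense to go outside
print(cloud_suitable(financials))  # False: keep it in-house
```

The point is not the arithmetic but the discipline: every candidate workload gets scored on the same three axes before the "go outside" decision is made.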

Traditional approaches

Our traditional approaches lead us to the point where we can then decide what the appropriate actions are that we need to put in play: training for people, which is very important and often forgotten when you're using cloud services; then the right processes that need to be used, whether implemented by people or automated in any way; and then, ultimately, the actual infrastructure that needs to be updated, modified, or added, in order to get to the level of security that we're looking for. Does that make sense?

Gardner: Yes. It sounds as if it's not so much a technological issue, as something for the architects and the operational management folks to consider, a fairly higher-level perspective is needed.

Reed: Arguably, yes. Again, it depends what you're putting into the cloud. There are certain things where you may say, "This data, in and of itself, is not important, should a breach occur. Therefore, I'm quite happy for it to go out into the cloud."

An example may be if you have a huge image database, for example, a real estate company. The images of the properties, in and of themselves, hold little value, but the amount of storage and bandwidth that you as a company have got to put into play to deliver that to your customers is actually quite costly and may not be something that your IT department has expertise in.

A cloud provider may be able to not only host those images and deliver those images on a worldwide basis, but also provide extra image editing tools, and so on, such that you can incorporate that into an application that you actually house internally, and you end up with this hybrid model. In that way, you get the best of both worlds.

Generally, when we talk to people, we come back to the risk equation, which includes, how much is that data worth, what are the implications of a breach, and what is the value of the services being provided. That helps you understand what the security risk will be.

Gardner: So, if you start to "componentize" your workloads and understand more about what can be put on a scale of risk, you can probably reduce your costs dramatically, if you do it thoughtfully, and therefore gain quite a competitive advantage.

Reed: Absolutely. We have a vision at HP. It's generally recognized out there as "Everything-as-a-Service." An IT department can look at that and take things down to those componentized levels, be it based on a bit of data that needs to be accessed, or we need to provide this very broad service. In that way, they can also help define what is appropriate to go into the cloud and what security mechanisms are necessary around that. Does the provider offer those security mechanisms?

Gardner: Is it important to get started now, even for companies that may not be using cloud approaches very much, to fully engage on this? Is it important and beneficial for them to start thinking about the processes, the security, and the risk issues? Let me pass that to David Spinks.

Next big areas

Spinks: The big areas that I believe will be developed over the next few years, in terms of ensuring we take advantage of these cloud services, are twofold. First, more sophisticated means of data classification. That's not just the conventional, restricted, confidential-type markings, but really understanding, as Archie said, the value of assets.

But, we need to be more dynamic about that, because, if we take a simple piece of data associated with the company's annual accounts and annual performance, prior to release of those figures, that data is some of the most sensitive data in an organization. However, once that report is published, that data is moved into the public domain and then should be unclassified.

What we're finding is that many organizations, once they classify a piece of data as confidential or secret, it stays at that marking, and therefore is prohibited from moving into a more open environment.

We need not just management processes and data-classification processes, but these need to be much more responsive and proactive, rather than simply reacting to the latest security breach. As we move this forward, there will be increased attention to more sophisticated risk-management tools, methodologies, and processes, in order to make sure that we take maximum advantage of cloud services.
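Spinks' example -- annual accounts are among the most sensitive data before publication, and public afterward -- suggests a classification that carries a release event rather than a permanent label. A minimal sketch of that idea, with labels and the cloud-eligibility policy invented for illustration:

```python
from datetime import date

class ClassifiedAsset:
    """Toy data-classification record whose effective label drops
    automatically once a declassification date passes, instead of
    staying at its original marking forever."""
    def __init__(self, name, label, declassify_on=None):
        self.name = name
        self.label = label              # e.g. "confidential", "public"
        self.declassify_on = declassify_on

    def effective_label(self, today):
        if self.declassify_on and today >= self.declassify_on:
            return "public"
        return self.label

    def cloud_eligible(self, today):
        # Invented policy: only "public" data may move to an external cloud.
        return self.effective_label(today) == "public"

accounts = ClassifiedAsset("annual accounts", "confidential",
                           declassify_on=date(2009, 9, 15))

print(accounts.cloud_eligible(date(2009, 9, 1)))   # False: still embargoed
print(accounts.cloud_eligible(date(2009, 10, 1)))  # True: report published
```

A real scheme would need audited release events rather than a hard-coded date, but the shape is the same: classification becomes a function of time and business events, not a static stamp.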

Gardner: Tim Van Ash, as companies start to think about this and want that holistic perspective, does adopting SaaS and consuming those applications as services provide a stepping-stone? Is this a good validation point?

Van Ash: Going back to the point that David was just making, it comes down to which processes you're putting into the cloud and the value tied to those processes.

For example, Salesforce.com has been very successful in the SaaS market. Clearly, they're the leader in customer relationship management (CRM) in the cloud today. The interesting thing about that is, the information they store on behalf of customers is customer data and prospect data, things that organizations guard very carefully, because they represent revenue and bookings to the organization.

If you look at how the adoption has occurred, it started out with small to medium companies for whom speed was often more important than the financial security, but it has now very much moved into the enterprise. The level of data being held within an organization like Salesforce is extremely sensitive. Salesforce has had to invest tremendous amounts of time and energy in protecting their systems over the years.

Likewise, if we look at our own SaaS business within HP, not only do we go through external audit on a regular basis, but we're applying a level of security discipline. It could be SAS 70 Type II around the data centers and practices, or being certified to an ISO standard, whether it be 27001 or one of the earlier variations of that. Cloud providers are now having to adhere to a very rigorous set of guidelines that, arguably, customers don't apply to the same level around their information internally.

The big reason for that is that when you run an element as a service, you have to build supporting elements around that service. It's not a generic capability that exists across the entire business. So, there's a lot more focus placed on security from the SaaS model than maybe would have been applied to some of those elements within smaller to medium organizations, and, certainly, in some of the non-core functions in the enterprise.

Gardner: I assume that the ways in which an organization starts to consume SaaS and the experiences they have there does set them up to become a bit more confident in how to move forward toward the larger type of cloud activity.

Fear, uncertainty, doubt

Van Ash: That's a great point, Dana. Typically, what we see is that organizations often have concerns. They go through the fear, uncertainty, and doubt. They'll often put data out there in the cloud in a small department or team. The comfort level grows, and they start to put more information out there.

At the same time, going back to the point that both Dave and Archie were making, you need to evolve your processes, and those processes need to include the evaluation of the risk and the value of the information and the intellectual property that you're placing out there.

Spinks: One of the observations I've had from talking with a lot of customers so far, some big and some small, is that they're experiencing a situation where the business units are pushing internally to use some cloud service that they've seen out there. A lot of companies are finding that their IT organizations are not responding fast enough, such that business units are just going directly to a cloud services provider.

They're in a situation where the advice is either ride the wave or get dumped, if you want an analogy. The business wants to utilize these environments, the fast development, testing, and launch of new services and new software-related solutions, whatever they may be, and cloud offers them an opportunity to do that quickly, at low cost, unlike the traditional IT processes.

But, all of these security concerns often get lost, because these things that they want to work on are arguably entrepreneurial in nature and move very quickly to try to capture business opportunities. They also may require partners to engage quickly and easily, and getting holes through firewalls and getting approvals can take months, if not quarters, in the traditional model. So, there is a gap in the existing IT architectural processes to implement and support these solutions.

That's what IT has got to deal with, if we focus on their needs for a minute. If they don't have a policy, if they don't have a process and advertise that within an organization, they will find that the business units will get up on that wave and just ride away without them.

Van Ash: We do see enterprises are being somewhat cautious, when they're applying it. As Archie was saying right upfront, you see a different level of adoption, a different level of concern, depending on the nature of the business and the size of the business. Many enterprises today looking for quick wins are leveraging elements like IaaS to reduce their costs around testing and development. These are areas that allow them to get benefit, but doing it in a way that is managing their risk.

Gardner: It sounds as if we need to get this just right. If we drag our feet as an organization, some of the business units and developers will perhaps take this upon themselves and open up the larger organization to some risk. On the other hand, if we don't adopt at a significant pace, we risk a competitive downfall or downside. If we adopt too quickly and we don't put in the holistic processes and think it through, then we're faced with an unnecessary risk.

I wonder, is there a third-party, some sort of a neutral certification, someone or some place an organization can go to in order to try to get this just right and understand from lessons that have been learned elsewhere?

Efforts underway

Reed: We would hope so. There are efforts underway. There are things, such as the Jericho Forum, which is now part of The Open Group. A group of CIOs and the like got together and said, "We need to deal with this and we need to have a way of understanding, communicating, and describing this to our constituents."

They created their definition of what cloud is and what some of the best practices are, but they didn't provide full guidelines on how, why, and when to use the cloud, nothing that I would really call a standard.

There are other efforts being worked on today by the National Institute of Standards and Technology (NIST). These are primarily focused on the U.S. public sector, but are generally available once published. Again, that's something that's in progress.

The closest thing we've got, if we want to think about the security aspects of the cloud, are coming from the Cloud Security Alliance, a group that was formed by interested parties. HP supported founding this, and actually contributed to their initial guidelines.

Essentially, it lays out 15 focus areas that need to be concentrated on in terms of ensuring a level of security, when you start to look at cloud solutions. They include things like information lifecycle management, governance, enterprise risk management, and so on. But, the guidelines today, knowing of course that these will evolve, primarily focus on, "Here is the best practice, but make sure you look at it under your own lens."

If we're looking for standards, they're still in the early days, they're still being worked on, and there are no, what I would call, formal standards that specifically address the cloud. So, my suggestion for companies is to take a look at the things that are under way and start to draw out what works for them, but also get involved in these sorts of things.

Gardner: I just want to make sure I understood the name. Was it Jericho, the project that's being done by The Open Group?

Reed: Jericho Forum was the group of CIOs who essentially put together their thoughts, and then they've moved it under The Open Group auspices.

The Jericho Forum and the Cloud Security Alliance, earlier this year, signed an agreement to work together. While the Jericho Forum focused more on the business and the policy side of things, the Cloud Security Alliance focused on the security aspects thereof.

Gardner: What is HP specifically doing to advance the safe and practical use of cloud services, working I would imagine in concert with some of these standards, but also looking to provide good commercial services?

HP's efforts

Reed: There are many things going on to try and help with this. As I said, we were involved in the formation of the CSA, and we were involved, and are still involved, in helping write its guidance for critical areas of focus in cloud computing, and the next generation of that guidance. We are, through our EDS folks, directly involved with the Jericho Forum, and bringing those together.

We also have a number of tools and processes based on standards initiatives, such as Information Security Service Management (ISSM) modeling tools, which incorporate inputs from standards such as the ISO 27001 and SAS 70 audit requirements -- things like the payment card industry (PCI), Sarbanes-Oxley (SOX), European Data Privacy, or any national or international data privacy requirements.

We put that into a model, which also takes inputs from the infrastructure that's being used, as well as input based on interviews with stakeholders to produce a current state and a desired or required state model. That will help our customers decide, from a security perspective at least, what do I need to move in what order, or what do I need to have in place?

That is all based on models, standards, and things that are out there, regardless of the fact that cloud security itself and the standards around it are still evolving as we speak.

Gardner: Tim Van Ash, did you have anything further to offer in terms of where HP fits into this at this early stage in the secure cloud approach?

Van Ash: Yeah. In addition to the standards and participation that Archie has talked about, we do provide a comprehensive set of consulting services to help organizations assess and model where they are, and build out roadmaps and plans to get them to where they want to be.

One of the offerings that we've launched recently is Cloud Assure. Cloud Assure is really designed to deal with the top three concerns the enterprise has in moving into the cloud.

Security, obviously, is the number one concern, but the number two and three concerns are performance and availability of the services that you're either consuming or putting into the cloud.

Cloud Assure is designed and delivered through the HP Software-as-a-Service Group, so that it's a way organizations can assess potential cloud services that they want to consume for those security issues, and know about them before they go in. This can help them to choose the right provider for them. Then, it's designed to provide ongoing assessment of the provider over the life of the contract, to ensure that they continue to be as secure as required for the type of information and the risk level associated with it.

The reason we do it through SaaS is to enable that agility and flexibility of those organizations, because speed is critical here. Often, the organizations aren't in a position to put up those sorts of capabilities in the timeframe the business is looking to adopt them. So, we're leveraging cloud to enable businesses to leverage cloud.

Gardner: David Spinks, are there areas where success is being meaningfully engaged now? Are there early adopters? Where are they? And, are they really getting quite a bit of productivity from moving certain aspects or maybe entire sets of IT functions or business functions to the cloud?

Moving toward cloud

Spinks: We're seeing some of the largest companies in the world move towards cloud services. You've got the likes of Glaxo and Coca-Cola, who are already adopting cloud services and, in effect, learning by actual practical experience. I think we'll see other large corporations in the world move towards the adoption of cloud, because obviously they spend the most on IT and, therefore, have got the most to gain from incremental savings.

The other key technology that we'll see emerge, out of one of the issues in cloud computing in the whole area of personal authentication, authorization, and federated access, is the concept called Role-Based Access Control (RBAC).

There are a number of clients who are talking to us about how we might use our experiences with some of the largest corporations and government agencies in the world in terms of putting more robust authentication processes in place, allowing our largest clients to collaborate with their customers and their partners.

One of the key technologies there, and obviously one of the key technologies that the Jericho Forum has been pushing for years, is much more robust identity management and authentication, including technologies such as two-factor authentication and managed public key infrastructure (PKI). I would prophesy that we're going to see an explosion in the use of those technologies, as we move further and further into the cloud.
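The role-based model David describes can be sketched in a few lines. This is an illustrative toy, not any HP or Jericho Forum product: permissions attach to roles, and users gain access only through the roles they hold, so access decisions stay manageable as user populations federate across organizations. All names here are invented for the example.

```python
# Minimal illustration of Role-Based Access Control (RBAC):
# permissions are granted to roles, and users acquire permissions
# only through the roles they hold.

ROLE_PERMISSIONS = {
    "auditor":  {"read_logs"},
    "operator": {"read_logs", "restart_service"},
    "admin":    {"read_logs", "restart_service", "manage_users"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"auditor"},
}

def is_authorized(user, permission):
    """A user is authorized if any of their roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("alice", "manage_users"))   # True
print(is_authorized("bob", "restart_service"))  # False
```

The design point is that revoking or granting a role changes a user's entire permission set in one step, which is what makes the model attractive for federated access at enterprise scale.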

Gardner: Well, very good. I'm afraid we're about out of time. We've been having a discussion about overcoming fear -- caution and the need for risk reduction on the road to successful cloud computing. Our panelists have been Archie Reed, HP Distinguished Technologist and Chief Technologist for Cloud Security. I certainly appreciate your input, Archie.

Reed: Thank you very much, Dana.

Gardner: Tim Van Ash, director of SaaS products at HP Software and Solutions. Thank you, Tim.

Van Ash: Thanks very much, Dana.

Gardner: And David Spinks, security support expert at HP IT Outsourcing. Thank you, David.

Spinks: You're very welcome.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Transcript of a sponsored BriefingsDirect podcast on the state of security in cloud computing and what companies need to do to overcome fear, reduce risk and still enjoy new-found productivity. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Harnessing 'Virtualization Sprawl' Requires Managing Your Ecosystem of Technologies

Transcript of a sponsored BriefingsDirect Podcast on how companies need to deal with the complexity that comes from the increasing use of virtualization.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on better managing server virtualization expansion across enterprises. We’ll look at ways that IT organizations can adopt virtualization at deeper levels, or across more systems, data and applications, at lower risk.

As more enterprises use virtualization for more workloads to engender productivity from higher server utilization, we often see what can be called virtualization sprawl, spreading a mixture of hypervisors, which leads to complexity and management concerns.

In order to ramp up to more, but advantageous, use of virtualization, pitfalls from heterogeneity need to be managed. Yet, none of the hypervisor suppliers is likely to deeply support any of the others. So, how do companies gain a top-down perspective on virtualization to encompass and manage the entire ecosystem, rather than just corralling the individual technologies?

Here to help us understand the risks of hypervisor sprawl and how to mitigate the pitfalls to preserve the economic benefits of virtualization is Doug Strain, manager of Partner Virtualization Marketing at HP.

Doug Strain: Thanks, Dana.

Gardner: Help us out. What is the current state of virtualization adoption? Are we seeing significant pickup as a result of the economy? What’s driving all the interest in this?

Strain: Virtualization has been growing very steeply in the last few years anyway, but with the economy, the economic reasons for it are really changing. Initially, companies were using it to do consolidation. They continue to do that, but now the big deal with the economy is consolidation to lower cost -- not only capital cost, but also operating expenses.

Gardner: I imagine the underutilization of servers is like a many-headed dragon. You’ve got footprint, skills, and labor being used up. You’ve got energy consumption. You’ve got the applications and data that might be sitting there that have no real purpose anymore, or all of the above. Is this a big issue?

Underutilized capacity

Strain: It definitely is. There’s a lot of underutilized capacity out there, and, particularly as companies are having more difficulty getting funding for more capital expenses, they’ve got to figure out how to maximize the utilization of what they’ve already bought.

Gardner: And, of course the market around virtualization has been long in building, but we’ve had a number of players, and some dominant players. Do you see any trends about adoption in terms of the hypervisor providers?

Strain: Probably, we’re seeing a little bit of a consolidation in the market, as we get to a handful of large players. Certainly, VMware has been early on in the market, has continued to grow, and has continued to add new capabilities. It's really the vendor to beat.

Of course, Microsoft is investing very heavily in this, and we’ve seen with Hyper-V, fairly good demand from the customers on that. And, with some of the things that Microsoft has already announced in their R2 version, they’re going to continue to catch up.

We’ve also got some players like Citrix, who really leverage their dominance in the Presentation Server, now XenApp, market and use that as a great foot in the door for virtualization.

Gardner: That’s a good point. Now, we introduced this as a server virtualization discussion, but virtualization is creeping into a variety of different aspects of IT. We’ve got desktop virtualization now, and what not. Tell us how this is percolating up and out from its core around just servers.

Strain: Desktop virtualization has been growing, and we expect it to grow further. Part of it is just a comfort within IT organizations that they do know how to virtualize. They feel comfortable with the technology, and now, putting a desktop workload instead of a server workload is sort of a natural way to extend that and to use resources more wisely.

Probably the biggest difference in the drivers for desktop virtualization is the need to meet compliance regulations, particularly in financial, healthcare, and a lot of other industries, where customer or employee privacy is very important. It makes sure that the data no longer sits on someone’s desk. It stays solely within the data center.

Gardner: So there are a lot of good reasons for virtualizing, and, as you point out, the economy is accelerating that from a pure dollars-and-cents perspective. But this is not just cut and dried. In some respects, you can find yourself getting in too deep and have difficulty navigating what you’ve fallen into.

Easy to virtualize

Strain: That’s definitely true. Because all the major vendors now have free hypervisor capabilities, it becomes so easy to virtualize, number one, and so easy to add additional virtual machines, that it can be difficult to manage if IT organizations don’t do that in a planned way.

Gardner: As I pointed out, it’s difficult to go back to just one of the hypervisor vendors and get that full panoply of services across what you’ve got in place at your particular enterprise, which of course might be different from any other enterprise. What’s the approach now to dealing with this issue about not having a single throat to choke?

Strain: There are a couple of dimensions to that. As you said, most of the virtualization vendors do have management tools, but those tools are really optimized for their particular virtualization ecosystem. In some cases, there is some ability to reach out to heterogeneous virtualization, but it’s clear that that’s not a focus for most of the virtualization players. They want to really focus on their environment.

The other piece is that the hardware management is critical here. An example would be, if you’ve got a server that is having a problem, that could very well introduce downtime. You've got to have a way of migrating the virtual machines, so that they are moved off of that server.

That’s really an area where HP has really tried to invest in trying to pull all that together, being able to do the physical management with our Insight Control tools, and then tying that into the virtualization management with multiple vendors, using Insight Dynamics – VSE.

Gardner: We’ve discussed heterogeneity when it comes to multiple hypervisors, but we’re also managing heterogeneity, when it comes to mixtures of physical and virtual environments. The hypervisor provider necessarily isn’t going to be interested in the physical side.

Strain: That’s exactly right. And, if they were interested, they don’t necessarily have the in-depth hardware knowledge that we can provide from a server-vendor standpoint. So yeah, clearly there are a few organizations that are 100 percent virtualized, but that’s still a very small minority. So, we think that having tools that work consistently both in physical and in virtual environments, and allow you to easily transition between them is really important to customers.

Gardner: All right. How do we approach this? Is this like other areas of IT we’ve seen, where you start at a tactical level and then, over time, as it gets too complex and unwieldy, you take a more strategic overview and come up with methodologies to set some standards? Is this business as usual in terms of a maturation process?

Strain: I think that’s what we’ve seen in the past. I certainly wouldn't recommend that to somebody today that’s trying to get into virtualization. There are a lot of ways that you can plan ahead on this, and be able to do this in a way that you don't have to pay a penalty later on.

Capacity assessment

It could be something as simple as doing a capacity assessment, a set of services that goes in and looks at what you’ve got today, how you can best use those resources, and how those can be transitioned. In most cases you’re going to want to have a set of tools like some of the ones I’ve talked about with Insight Control and Insight Dynamics VSE, so that you do have more control of the sprawl and, as you add new virtual machines, you do that in a more intelligent way.

Gardner: Tell us a little bit about how that works? I've heard you guys refer to this as "integrated by design." What does that mean?

Strain: We’ve really tried to take all the pieces and make sure that those work together out of the box. One of the things we’ve done recently to up the ante on that is this thing called BladeSystem Matrix. This is really a converged infrastructure that allows customers to purchase a blade infrastructure complete with management tools, with the services, and a choice of virtualization platforms. They all come together, all work together, are all tested together, and really make that integration seamless.

Gardner: And HP is pretty much neutral on hypervisors. You give the consumer, the customer, the enterprise the choice on their preferred vendor.

Strain: We do. We give them a choice of vendors. The other thing we try to do is give them a choice of platforms. We invest very heavily in certifying across those vendors, across the broadest range of server and storage platforms. What we’re finding is that we can’t say that one particular server or one particular storage is right for everybody. We’ve got to meet the broadest needs for the customers.

Gardner: Let's take a look at how this works in practice. Do you have any examples, customers that have moved in this direction at a significant level already, and perhaps had some story about what this has done for them?

Strain: I’ve just pulled up a recent case study that we did on a transportation company called TTX Company. I thought this was a good example, because they’d really tried a couple of different paths. They’d originally done mainframes, and realized that the economics of going to x86 servers made a lot more sense.

But, what they found was they had so many servers, they weren’t getting good utilization, and they were seeing the expenses go up, and, at the same time, seeing that they were starting to run out of space in their data center. So, from a pure economic standpoint, they looked at this and said, “Look, we can lower our hardware cost.”

TCO 50 percent lower

In fact, they saw a 10 percent reduction in hardware cost, plus they’re seeing substantial operating expense reductions: 44 percent lower power costs and a 69 percent reduction in their rack footprint. So, they can now say they are removing equipment from the datacenter and, compared to their mainframes, they think they have about a 50 percent lower total cost of ownership (TCO).

Gardner: So, if you do this right, they're not just rounding-error improvements. These are pretty substantial.

Strain: These are substantial, and, particularly today, that’s a great way to justify virtualization. What they also found was that, from an IT standpoint, they were much more effective. They project that they can recover much more quickly -- in fact, a 96 percent reduction in recovery time. That's going from 24 hours down to 1 hour of recovery.

Likewise, they could deploy new servers much more quickly -- 20 minutes versus 4 hours is what they estimate. They’ve reduced the times they have to actually touch a server by a factor of five.
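The percentages quoted above follow directly from the before-and-after figures in the case study; as a quick sanity check (a sketch using only the numbers Strain cites):

```python
# Sanity-check the TTX case-study figures quoted above.

def pct_reduction(before, after):
    """Percent reduction from 'before' to 'after', rounded to a whole number."""
    return round(100 * (before - after) / before)

# Recovery time: 24 hours down to 1 hour.
print(pct_reduction(24, 1))      # 96 (percent), matching the quoted figure

# Server deployment: 4 hours (240 minutes) down to 20 minutes.
print(pct_reduction(240, 20))    # 92 (percent)
```

The 24-to-1-hour figure rounds to the 96 percent reduction cited; the 4-hours-to-20-minutes deployment change works out to roughly a 92 percent reduction, or the factor-of-twelve speedup Strain describes.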

Gardner: So, we’ve seen quite a few new studies that have come out, and virtualization remains in the top tier of concerns and initiatives for enterprises, based on the market research. We’re also seeing interesting things like managing the information explosion and reducing redundancy in terms of storing data. These all come together at a fairly transformative level.

How big a part in what we might consider IT transformation does virtualization play?

Strain: It plays a very substantial role. It’s certainly not the only answer or the only component of data center transformation, but it is a substantial one. And, it's one that companies of almost any size can take advantage of, particularly now, when some of the requirements for extensive shared storage have decreased. It's really something that almost anybody who's got even one or two servers can take advantage of, all the way up to the largest enterprises.

Gardner: So, at a time when the incentives, the paybacks from virtualization activities are growing, we’re seeing sprawl and we’re seeing complexity. This needs to be balanced out. What do you think is the road map? If we had a crystal ball, from your perspective in knowledge of the market, how do we get both? How do we get the benefits without the pain?

Strain: Clearly, this is an area where the entire industry is investing heavily in not just the enabling of the virtualization. That’s been done. There’s still some evolution there, but the steps are getting increasingly smaller. The investment in the industry is around management, making it simpler to deploy, to move, to allow redundancy, all those kinds of things, as well as automation.

There are a lot of tasks to automate, particularly when you consider that a virtual machine can be run on a range of different hardware, even in different datacenters. The ability to automate based on a set of corporate rules really can make IT much more effective.

Gardner: Great. We’ve been talking about better managing server virtualization expansion across enterprises, and we’ve been joined in our discussion by Doug Strain. He is the manager of Partner Virtualization Marketing at HP. We appreciate it, Doug.

Strain: My pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Transcript of a sponsored BriefingsDirect Podcast on how companies need to deal with the complexity that comes from the increasing use of virtualization. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, August 28, 2009

Nimble Business Process Management Helps Enterprises Gain Rapid Productivity Returns

Transcript of a sponsored BriefingsDirect podcast on how Business Process Management can help enterprises solve productivity problems and rapidly adapt to changing economic conditions.

Listen to the podcast. Download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: BP Logix.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the importance of business process management (BPM), especially for use across a variety of existing systems, in complex IT landscapes, and for building flexible business processes in dynamic environments.

The current economic climate has certainly highlighted how drastically businesses need to quickly adapt. Many organizations have had to adjust internally to new requirements and new budgets. They have also watched as their markets and supplier networks have shifted and become harder to predict.

To better understand how business processes can be developed and managed nimbly to help deal with such change, we're joined by a panel of users, BPM providers, and analysts. Please join me in welcoming David A. Kelly, senior analyst at Upside Research. Welcome to the show, Dave.

David A. Kelly: Thanks Dana, glad to be here.

Gardner: We're also joined by Joby O'Brien, vice president of development at BP Logix. Hi, Joby.

Joby O'Brien: Hi, Dana, how are you doing?

Gardner: Good. We are also joined by Jason Woodruff, project manager at TLT-Babcock. Welcome Jason.

Jason Woodruff: Thank you, Dana.

Gardner: Let's start off with you, Dave. Tell us a little bit about how the business climate that we are in that has made agility and the ability to swiftly adapt not just a nice-to-have, but a must-have.

Kelly: You hit it on the head in the intro there, when you talked about dynamic business environments. That's what people are facing these days. In many cases, they have the same business processes that they have always had, but the landscape has shifted. So, some things may become more important and other things are less important.

What's important in any case is to be able to drive efficiency throughout an organization and across all these business processes. With the economic challenges that organizations have faced, they've had to juggle suppliers, products, customers, ways to market, and ways to sell.

As they're doing that, they're looking at their existing business processes, they are trying to increase efficiencies, and they are trying to really make things more streamlined. That's one of the challenges that organizations have had in terms of streamlining what's going on within their organization.

Gardner: Dave, just as a sort of level-check on IT, as more organizations have elevated data and applications and infrastructure even into services, IT has become a bit more nimble, but what we are really focusing on are the processes. How we can utilize these services, create workflows, apply logic and checks and balances across how things are conducted, and pull together people and process as much as it affects what IT does at its core?

Two levels

Kelly: You've got two levels. As you said, there are core IT operations and applications that are out there, but the real business value that's happening today is being able to tie those things together to be able to add on and address the business needs and the business processes.

In many cases, these processes cross applications and services. As you said, some organizations are even getting into cloud solutions and outside services that they need to integrate into their business processes. We've seen a real change in terms of how organizations are looking to manage these types of processes across applications, across data sources, across user populations.

That's where some of the real pressure has come from the changes in the economy in terms of being able to address those process needs more quickly and in a much more flexible and nimble approach than we have seen previously.

This is probably a good point to talk about the fact that BPM solutions have been around for quite some time now, and a lot of organizations have really put them to good use. But, over the past three or four years, we've seen a progression in which organizations have migrated BPM from a task-oriented solution into an infrastructure solution.

This is great, if you can support that, but now, with the changes and pressures that organizations are facing in the economy and their business cycles, we see organizations looking for much more direct, shorter-term payback and ways to optimize business processes.

Gardner: Let's go to Joby O'Brien at BP Logix. Joby, Dave just mentioned the fact that we have sort of an infrastructure approach to BPM, but where the rubber hits the road is how business processes get adapted, changed, implemented. This also cuts between where the business side sees value and where the IT side can provide value.

Perhaps you could tell us a little bit about where you see the BPM market now, and how things are a little different than they were a few years ago?

O'Brien: Actually, the points that Dave made were great, and I agree completely. We're seeing that it's difficult sometimes for an organization, especially right now, to look at something on a one-, two-, or three-year plan. A lot of the infrastructure products and a lot of the larger, more traditional ways that BPM vendors approach this reflect that type of plan. What we're seeing is that companies are looking for a quicker way to see a return on their investment. What that means really is getting an implementation done and into production faster.

One of the things we are also seeing is that part of that thrust is being driven heavily by the business users. Instead of being a more traditional IT-oriented approach, where it's again a longer-term implementation, this new approach is being driven by business needs.

When there are particular business needs that are critical to an organization or business, those are the ones they tend to address first. They are looking for ways to provide a solution that can be deployed rapidly.

Same level of customization


One interesting thing is that they are also still looking for the same level of customization, the same level of flexibility, that they would have in a much larger or infrastructure-type approach, but they still want that rapid deployment of those applications or those implementations.

We're also seeing that what they are doing in a lot of cases is breaking them apart into different pieces based on priority. They take the processes that are most critical, and that are being driven by the business users and their needs, and address those with a one-at-a-time approach as they go through the organization.

It's very different than a more traditional approach, where you put all of the different requirements out there and spend six months going through discovery, design, and the different approaches. So, it's very different, but provides a rapid deployment of highly customized implementations.

Kelly: It's almost a bottom-up approach to BPM, instead of taking the top-down, large-scale infrastructure approach. Those definitely have their place and can be really powerful, but, at the same time, you can also take this bottom-up approach, where you are really focused on, as Joby said, individual processes that can be aggregated into larger business processes.

Gardner: Let's go to Jason Woodruff at TLT-Babcock. First, Jason, tell us a little bit about your company. You are in the industrial space. Then, as an IT project manager, tell us a little bit about what your business side has been looking for.

Woodruff: Sure, Dana. First of all, just to give a background of what TLT-Babcock does, we are a supplier of air handling and material handling equipment, primarily in the utility and industrial markets. Our spectrum of products ranges from new product to after-market, which would include spare parts and rebuilds. We rebuild our own equipment, customer equipment, and competitor equipment as well. So, we have our hands in a lot of markets and a lot of places.

As a project manager, my job, before I got involved in our BPM solutions, was simply to manage those new product projects. Serving in that capacity, I realized a need for streamlining our process. Right now, we don't want to ride the wave, but we want to drive the wave. We want to be proactive and we want to be the best out there. In order to do that, we need to improve our processes and continuously monitor and change them as needed.

So, the direction was given, "Let's do this. How are we going to do it? What do we need to do? What is it going to take? Let's get moving." After quite a bit of investigation and looking at different products, we developed and used a matrix that, first and foremost, looked at functionality. We need to do what we need to do. That requires flexibility and ultimately usability, not only from the implementation stage, but the end user stage, and to do so in the most cost-effective manner. That's where we are today.

Gardner: Okay. Jason, you didn't just write down one day on a blackboard or a white board, "We need Nimble BPM." You probably started with whatever the requirements that your business side gave to you. What allowed you to get from a long-term perspective on BPM to being more proactive and agile?

Needed a change

Woodruff: As I said, the drive was that we needed to make a change. We knew we needed to make a change. TLT-Babcock wants to be the best. We looked within and said, "What can we change to achieve that? What are our weaknesses? Where can we improve?" We made a list of things, and one of the big ones that jumped out was document control.

So, we looked at that. We looked at why document control was an issue and what we could do to improve it. Then, we started looking at our processes and internal functions and realized that we needed to do more than just streamline them. One, we needed a way to define them better. Two, we needed to make sure that they are consistent and repeatable, which is basically automation.

The research drove our direction. We evaluated some of the products and ultimately selected BP Logix Workflow Director. The research really led us down that path.

Gardner: Let's go back to Dave Kelly. Dave, for this sort of requirement of faster, better, and cheaper, what is the requirement set from your perspective in the market for Nimble BPM?

Kelly: An important thing for Nimble BPM is to be able to embrace the business user. Jason just referenced being able to bring the end users into the process in a cost-effective manner and allow them to drive the business processes, because they are ultimately the beneficiaries and the people who are designing the system.

Another aspect is that you have to be able to get started relatively quickly. Jason mentioned the need for that in terms of how they identified this business need to be competitive and to improve the processes. You don't want to spend six months learning about a tool set and investing in it, if you can actually get functionality out of the box and get moving very quickly.

Another thing that's important is to be able to handle ongoing changes and to define potential solutions relatively quickly. Those are some of the key drivers.

O'Brien: There's one thing that Jason said that we think is particularly important. He used one phrase that's key to Nimble BPM. He used the term "monitor and change," and that is really critical. That means that I have deployed and am moving forward, but have the ability, with Workflow Director, to monitor how things are going -- and then the ability to make changes based on the business requirements. This is really key to a Nimble BPM approach.

The approach of trying to get everybody to have a consensus, a six-month discovery, to go through all the different modeling, to put it down in stone, and then implement it works well in a lot of cases. Organizations that are trying to adapt very quickly and move into a more automated phase for the business processes need the ability to start quickly.

Monitoring results

They need the ability to monitor results, see what's going on, and make those changes without having to go through some type of infrastructural change or development process, or rebuild or retool the application. They need to be able to provide that type of real-time monitoring and the resulting changes as part of the application. So, that phrase is so important -- the concept of monitor and change.
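The monitor-and-change idea can be sketched in a few lines: hold the process definition as plain data, so business users can adjust it at runtime while running instances report where they stand. This is NOT the Workflow Director API; every class, step, and field name here is invented purely for illustration.

```python
# Toy sketch of "monitor and change": the process definition is data,
# editable while instances run, and monitoring reports each instance's
# current step. All names are hypothetical.

class Process:
    def __init__(self, name, steps):
        self.name = name
        self.steps = list(steps)      # definition is plain data, editable live
        self.instances = []

    def start(self, doc_id):
        # Each instance snapshots the definition current at start time.
        inst = {"doc": doc_id, "done": [], "pending": list(self.steps)}
        self.instances.append(inst)
        return inst

    def advance(self, inst):
        inst["done"].append(inst["pending"].pop(0))

    def monitor(self):
        # Where is every running instance right now?
        return {i["doc"]: (i["pending"][0] if i["pending"] else "complete")
                for i in self.instances}

po = Process("purchase-order", ["submit", "manager-approval", "archive"])
inst = po.start("PO-1001")
po.advance(inst)                      # "submit" is done

# Business users spot a gap and change the definition mid-flight:
# instances started from now on also require a finance review.
po.steps.insert(2, "finance-review")

print(po.monitor())   # {'PO-1001': 'manager-approval'}
```

Because the definition lives as data rather than compiled code, changing it requires no development cycle, which is the contrast O'Brien draws with the six-month, set-in-stone approach.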

Gardner: Joby, to Dave's point about getting the tool into a position where the end user, the business driver, and the analyst can use it, are we talking about compressing the handoff and the translation between the business-side requirements and necessities, especially in a dynamic environment, and then implementing and referring back? How do we compress this back-and-forth, so that it becomes a bit more automated, perhaps Web-based, and streamlined?

O'Brien: That's a really good question. One of the things we see is that, especially for somebody who's just moving a manual process or a paper-oriented process to an electronic process or an automated one -- people who haven't actually done that yet and this is new to them -- it's difficult sometimes for them to be able to lay out all of the different requirements and have them be exact.

Once they actually see something running, see it as Web-based, see their paper-based forms turn into electronic forms, see their printed documents stored electronically, and have different ways of getting reports and searching data, inevitably there are changes.

The idea or the approach with the Nimble BPM is to allow folks like Jason -- and those within IT -- to be able to start quickly. They can put one together based on what the business users are indicating they need. They can then give them the tools and the ability to monitor things and make those changes, as they learn more.

In that approach, you can significantly compress that initial discovery phase. In a lot of the cases, you can actually turn that discovery phase into an automation phase, where, as part of that, you're going through the monitoring and the change, but you have already started at that point.

Kelly: Dana, I'd just add that what you are saying here is what you've seen in the development of agile development methodologies over the past 10 years in the software arena, where organizations are really trying to develop applications more quickly and then iterate them.

I think that's what Joby's talking about here in terms of the Nimble BPM is being able to get out of the starting block much more quickly. The thing can then be adjusted dynamically over time, as the business really discovers more about that process.

User expectation


O'Brien: I completely agree. The other is the expectation of the users, even if it is nimble, for something faster. Just getting out of the block quicker is not sufficient. There is usually still an expectation of a relatively high degree of sophistication, even when doing something quickly. In most of these cases, we will still hear that the customer wants integration, for example, into their back-end systems.

They've got applications. They've got data that's stored in a lot of different systems. In a lot of cases, even when they're trying to go do something very quickly, what they are doing is still looking to have some type of an integration into existing systems, so that the BPM product now becomes that coordinator or a way of consolidating a lot of that information for the business users.

Gardner: I'd like to drill down a little bit on how this affects process. Jason, at your organization, when you started using BPM, did you notice that there was a shift in people and their process? That is to say, was there actual compression between what the business side needed and what the IT side could provide?

Woodruff: Yeah, that comes with the territory. We saw this as an opportunity not just to implement a new product like Workflow Director, but to really reevaluate our processes and, in many cases, redefine them -- sometimes gradually, other times quite drastically.

Our project cycle, from when we get an order to when our equipment is up and operating, can be two, three, sometimes four years. During that time there are many different processes from many different departments happening in parallel, and serially as well. You name it -- it's all over the place. So, we started with that six-month discovery process, where we were trying to really get our hands around what we do, why we do it that way, and what we should be doing.

As a result, we've defined some pretty complex business models and have begun developing. It’s been interesting that during that development of these longer-term, far-reaching implementations, the sort of spur-of-the-moment things have come up, been addressed, and been released, almost without realizing it.

A user will come and say they have a problem with this particular process. We can help. We'll sit down, find out what they need, create a form, model the workflow, and, within a couple of days, they're off and running. The feedback has been overwhelmingly positive.

Gardner: It strikes me that when you demonstrate that you can do that, you open up this whole new opportunity for people to think about making iterative and constant improvement to their jobs. Before, they may not have even tried, because they figured IT would never be able to take it and run with it.

A lot more work for IT

Woodruff: It's interesting that you say that, because that's exactly what's happened. It's created a lot more work for us. One of the things we just implemented -- and this was one of those couple-of-day things -- involved a lot of issues where there was some employee frustration. Things weren't getting done as quickly as we thought they could be. People were carrying some ideas internally that they hadn't shared or shared through the existing channels, and results weren't being presented.

Sort of at the spur of the moment, we said, "We can address this. We can create an online suggestion box, where people can submit their problems and submit their ideas, and we can act on it." We got that turned around in a week, and it’s been a hit. Within the first couple of days, there were well over a dozen suggestions. They're being addressed. They're going to be resolved, and people will see the results. It just sort of builds on itself.

Gardner: Now, in some circles they call that Web 2.0, social networking, or Wikis. Collaboration, I suppose, is the age-old term. I want to go back to Joby at BP Logix. Do you see that the Nimble BPM approach, and this invigorated collaboration, is what gets us to that level of productivity that, as Jason pointed out, lets them push the wave rather than have to ride on someone else's?

O'Brien: Actually, we do. It's funny that Jason had mentioned that particular process. We see that also with many of the other customers we are working with. They are focused on the initial project or the business area that they are trying to address. They will take care of that, but then, as people see the different types of things that can be done, these small offshoots will occur.

A lot of these are very simple processes, but they still require some type of structure. In some cases, some degree of compliance is also associated with them, and they need the ability to be able to put those together very quickly. Some are simple things -- one-off types of workflows or processes that have originated within an organization. It just happens to be the way they do business.

It's not something traditional, like an IT provisioning or some type of sales-order processing. There are those one-off and unique ways that they do business, which now can provide a degree of collaboration.

Gardner: So, we need to marry the best of ad hoc innovation, but keep it within the confines of "managed," or it could spin out of control and become a detriment.

O'Brien: That's probably one of the key pieces to almost all of these. With everything somebody is doing, having some degree of management, some degree of control, visibility, auditing, tracking, is important. Inside an organization, there can be hundreds of different processes, little ad-hoc processes that people have created over the years on how they do business.

Some of those are going to stay that way, but with others there needs to be more of a management, automation, auditing, or tracking type of approach. Those are the types of processes, where people don't initially look at them and say, "These are the types of things that I want to automate, so let me bring a BPM tool in."

Getting control

They walk into that area because they realize that a Nimble BPM tool can address those very quickly. Then they start getting some degree of control almost instantaneously, and eventually work their way into full compliance within their industry -- tracking, auditing, automation, and all of the goodness associated with the traditional BPM tool.

Gardner: Jason, this all sounds great in theory, but when you put it into practice, are these small improvements, or what are the metrics? What is the payback? How can you rationalize and justify doing this in terms of a steadfast, predictable, even measurable business result?

Woodruff: I don't know if anybody can really answer that question in black and white, but there are several paybacks. We haven't spent a lot of time doing a calculation of our return on investment (ROI) financially. It's so obvious that the number doesn't really matter as far as we are concerned at this point.

We save a lot of time. To put a figure on it is tough to do, but we save a considerable amount of time. More importantly, it allows us to reduce errors and reduce duplication of work, which improves our lead-time and competitiveness. It's just a win-win. So, it doesn't really matter what the number is.

Gardner: Well, how about your relationships with the rest of the organization? When the folks at TLT-Babcock think of IT, do they perhaps perceive you a little differently than they may have in the past?

Woodruff: While I do have a background in IT, that wasn't my role at TLT-Babcock, and still isn't. As a project manager working on customer-driven projects, I am the end user. This current situation came about when I voiced not just my own but several other people's comments that we could improve here.

Because I had that background from a previous life, so to speak, I became the natural choice to head this charge. Now, I don't spend as much time in project management. I spend very little time doing that and focus, primarily, on troubleshooting and improving processes.

I've got this role that Joby talked about -- management of these ad hoc things. Bring me your ideas and bring me your problems and we will be the umbrella over all of this and coordinate these efforts, so that we're implementing solutions that make sense for everybody, not just on a narrow focus.

Gardner: Perhaps, I oversimplified in referring to this as business versus IT, but a better way to phrase the question might be how has this changed your culture at your organization from where you sit?

In the early stages


Woodruff: It's interesting, because we're in the early stages here of implementation. We have a couple of processes out and a couple in testing. In the last couple of weeks, just for the first time, we gave a company-wide demonstration of Workflow Director, what it does, how we're going to use it, and, looking down the road, how the processes we have known and grown to love, so to speak, will be changing using this new tool.

That really was a spark that gave each of the users a new look at this and an idea of how this tool is going to affect the tasks that they do each day, their own processes. That's when these ideas started flowing in, "Can you use it to do this? Can you use it to do that?" When they see that, they say, "Oh, that's cool. That's slick. That's so easy." So, we're right at that turning point.

Gardner: Well, we'll have to come back in a while and see how that cultural shift has panned out. Meanwhile, let's go to Joby. For those organizations like Jason's that want to take a Nimble BPM tool and make themselves nimble as a result, how do they get started? Where do you begin to look to implement this sort of a benefit?

O'Brien: Let me make sure I understand the question. How do they typically get started or what organization brings us in?

Gardner: How do you get started in saying, "We like the idea of Nimble BPM acting as a catalyst for nimble business processes. Where do we begin? How do we get started?"

O'Brien: Almost always, that request will be initiated or driven from some business need -- a lot of times from a business unit, and occasionally from IT. So, it's going to be driven from a lot of different places, but it's almost always going to be geared around the ability to respond quickly with some type of automation and control around a particular process.

In most cases, at least in our experience, there is usually a primary factor that causes the organization to bring in the product and start the implementation, and that's what they are focused on addressing. From there it grows into other areas, very much like Jason just described. When people start gaining visibility into the types of things that can be done and what that actually means, we generally see the tool growing into other areas.

Gardner: Now, David Kelly, that gets back to your earlier statements. If you're going to start from a tactical pain point and then realize benefits that can be presented more horizontally and strategically across the organization, you can't do that sort of crawl-walk-run approach if it requires a two-year, multi-million-dollar infrastructure effort. Better to have something you can do at that more iterative level.

Kelly: Exactly. I think Jason highlighted that in what he just said -- getting these workflows and processes out there, showing them to the rest of the company, and then watching as, all of a sudden, the ideas started exploding in terms of how those could be applied. It's the same kind of thing.

From what I have seen, a lot of organizations -- Joby has mentioned this -- start with any process in the organization that needs automation. There are probably multiple processes that need automation, monitoring, or some kind of control.

Just look around

You don't have to think big-picture BPM solution. Just look around. It could be request management. It could be tracking something. It could be sharing documents or controlling access to documents. It could be something that adds onto an enterprise resource planning (ERP) system that you need to have additional control over.

There are multiple processes, even in highly automated organizations, that still need automation. You can start in an area like that with a task and with a specific kind of scenario, automate that, use a Nimble BPM product tool like this, start down that road, and then expand beyond there. Jason provides a really good example of that.

Woodruff: If I can jump in again here to expand on that point, something comes to mind. The question was asked: how does this process start, how do you get started down this path? For the two years prior to even looking at BP Logix, we had brought in two, maybe three, different subject-matter experts to develop our current in-house system. This was to do just what you said, David -- a little something here, a little something there, not necessarily a global approach to streamlining everything, not workflow software, but just something to get results.

Well, we weren't getting anything done. We would get one little thing that wasn't very useful to somebody and something else that wasn't useful to somebody else, and we were just sort of spinning our wheels. Within a few months of getting BP Logix products in our hand, we are off and running. It’s pulling us through in some ways.

So it was just the lack of results that said, "We've got to find something better." So we went out and did that research I talked about earlier, and here we are a few months down the road, and I can say that we are now driving that wave.

Gardner: Okay. Well, I'm afraid we are about out of time, but we have been discussing how in dynamic business environments a nimble approach to BPM can start at the tactical level and even lead to cultural change and swift paybacks. Helping us understand the ability to draw down processes into something that can be measured and used in a managed environment, we have been joined by Joby O'Brien, development manager at BP Logix. Thanks Joby.

O'Brien: Thank you.

Gardner: David A. Kelly, senior analyst at Upside Research. Thanks again, Dave.

Kelly: You're welcome, Dana. Great to be here.

Gardner: We also appreciate Jason Woodruff joining us. He is the project manager at TLT-Babcock. Thanks for your insights and sharing, Jason.

Woodruff: Thank you. It's my pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Download the transcript. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: BP Logix.

Transcript of a sponsored BriefingsDirect podcast on how Business Process Management can help enterprises solve productivity problems and rapidly adapt to changing economic conditions. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.