Wednesday, March 11, 2009

BriefingsDirect Analysts Discuss Solutions for Bringing Human Interactions into Business Process Workflows

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management.

Listen to the podcast. Download the podcast. Find it on iTunes and learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 37.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as with the support of TIBCO Software.

I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of Feb. 9, 2009, returns to the essential topic of bringing human activity into alignment with IT-supported business processes.

The need to automate and extend complex processes is obvious. What's less obvious, however, is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM). This will become all the more important, as cloud-based services become more common.

We're going to revisit the topic of BPEL4People, an OASIS specification that we discussed when it first arrived, probably a year-and-a-half ago. We'll also see how it's progressing with someone who has been working with the specification at OASIS since its beginning.

I'd like to welcome our guest this week, Michael Rowley, director of technology and strategy at Active Endpoints. Welcome, Mike.

Michael Rowley: Thank you.

Gardner: I'd also like to introduce our IT analyst guests this week. Our panel consists of regular Jim Kobielus, senior analyst at Forrester Research. Welcome back, Jim.

Jim Kobielus: Thanks, Dana. Hi, everybody.

Gardner: And someone who is beginning to become a regular, JP Morgenthal, independent analyst and IT consultant. Welcome back, JP.

JP Morgenthal: Thanks, Dana. Hi, everyone.

Gardner: Let's go to you first, Mike, as our guest. I've pointed out that Active Endpoints is the sponsor of the show, so I guess we will try to be nice to you, but I can't guarantee it. Tell us a little bit about your background. You were at BEA for some time. You've been involved with Service Component Architecture (SCA) and a few other open standards around OASIS. Give us the bio.

Rowley: I was at BEA for five years. I was involved in a couple of their BPM-related efforts. I led up the BPELJ spec effort there as part of the WebLogic integration team. I was working in the office of the CTO for a while and working on BPEL-related efforts. I also worked on the business process modeling notation (BPMN) 2.0 efforts while I was there.

I worked a little bit with the ALBPM team as well, and on a variety of BPM-related work. Then, I've been at Active Endpoints for a little over half a year now. While here, I've been working on the BPEL4People standards, as well as on the product itself, and on some BPMN-related work as well.

Gardner: Let's just jump into BPEL4People. Where do we stand, and is it getting traction with people? Not to be a punster, but do people grok BPEL and BPEL4People?

Good feedback

Rowley: We've had some very good feedback from our users on BPEL4People. People really like the idea of a standard in this area, and in particular, the big insight behind BPEL4People, which is that there's a separate standard, WS-Human Task. It basically keeps track of the worklist aspect of a business process, as opposed to the control flow that you get on the BPEL4People side. So, there's BPEL4People as one standard and WS-Human Task as another, closely related standard.

By having this dichotomy, you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that decided to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards-compliant.
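The split Rowley describes can be sketched roughly in code. This is not the actual WS-Human Task API, just a minimal illustration of the idea that the task lifecycle (claim, complete) lives in the worklist system while the process engine only creates tasks and consumes their outcomes; all class, method, and task names here are invented:

```python
class Task:
    """A human task with a minimal WS-Human Task-style lifecycle."""
    def __init__(self, name, potential_owners):
        self.name = name
        self.potential_owners = potential_owners
        self.state = "READY"
        self.actual_owner = None
        self.outcome = None

    def claim(self, user):
        # Only a potential owner may claim a READY task.
        if self.state != "READY" or user not in self.potential_owners:
            raise ValueError("cannot claim")
        self.state, self.actual_owner = "RESERVED", user

    def complete(self, outcome):
        # The process engine only ever sees the outcome,
        # not the worklist mechanics.
        if self.state != "RESERVED":
            raise ValueError("must be claimed first")
        self.state, self.outcome = "COMPLETED", outcome
        return self.outcome


class ProcessEngine:
    """Control flow only: creates tasks and reacts to their outcomes."""
    def __init__(self, worklist):
        self.worklist = worklist  # any standards-compliant task service

    def request_approval(self, owners):
        task = Task("approve-order", owners)
        self.worklist.append(task)
        return task


worklist = []
engine = ProcessEngine(worklist)
task = engine.request_approval(["alice", "bob"])
task.claim("alice")
print(task.complete("approved"))  # prints: approved
```

Because the engine touches the task only through `request_approval` and the returned outcome, the worklist implementation behind it could be swapped for any other standards-compliant one, which is the customer scenario Rowley mentions.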

This is something that we're seeing more and more. Our users like it, and as far as the industry as a whole, the big vendors all seem to be very interested in this. We just recently had a face-to-face and we continue to get really good turnout, not just at these meetings, but there's also substantial effort between meetings. All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.

Gardner: I want to ask you a question, but at two levels. What is the problem that we're trying to solve here? Let's ask that first at the business level and then at the technical level.

Rowley: At the business level, it's pretty straightforward. It's essentially the promise of workflow systems, in which you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.

It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.

Gardner: Before you go to the technical issues, one of the things that's really interesting to me on this is that I understand the one-way street of needing to take processes, making that understood, and then finding out who the people are who can implement it. But, is this a two-way street?

Is it possible for the people who are involved with processes in the line of business, in the field, to then say, "Listen, this doesn't quite work. Sometimes you can't plan things in advance. We have some insight as to what we think the process should be, how to improve it, and how can we then relate that back into what the SOA architecture is delivering?" Are we on a two-way street on this?

Rowley: Absolutely. One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."

That's something that at least can be described by a layperson, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens. I'm leery about the end user, the nontechnical person, going in and mucking with fundamental control flow, without at least collaborating with somebody who can think about it from more of an IT angle.
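Mechanically, the "$500 shortcut" Rowley describes is just a data-driven branch in the process definition, which is why a layperson can describe it and a technical person can apply it. A hypothetical sketch, with the threshold and step names invented for illustration:

```python
# Requests under the threshold skip the manual approval step entirely.
APPROVAL_THRESHOLD = 500

def route(amount):
    """Return the ordered steps a request of this amount flows through."""
    steps = ["submit"]
    if amount >= APPROVAL_THRESHOLD:
        steps.append("manager-approval")  # the human task in the process
    steps.append("fulfill")
    return steps

print(route(120))   # ['submit', 'fulfill']
print(route(2500))  # ['submit', 'manager-approval', 'fulfill']
```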

Gardner: No. Clearly, we want to have a lifecycle between design, requirements and refinements, but not just throw the keys to the locker room out of the window. What is it technically that we need to overcome in order to solve those problems?

Need for standards

Rowley: I'm going to take this from a standards aspect, because one of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards.

Let's say a business school wants to describe how to do management and how to run your organization. Right now, I don't believe any of them have, as part of the coursework for getting an MBA, something that says, "Here's how you deal with the BPM engine to design and control your organizations."

The reason it isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.

Gardner: We're at that chicken and egg stage, aren't we, before we can get this really deeply adopted?

Rowley: Yes. I think we're spinning up. We're starting to get the kind of momentum that's necessary, with all the vendors getting on board. Oftentimes, with things like this, if the vendors can all get on the same bandwagon at the same time, the users get it. They see that, "Okay, now this is real. This is not just a standard that is a de jure standard, but it's actually a de facto standard as well."

Gardner: Let's go to Jim Kobielus. Jim, how important is this, and how might this chicken-and-egg conundrum get jump-started?

Kobielus: It's extremely important. One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.

It's a real drag on productivity, when you've got tasks coming from all angles at you and you're floundering, trying to find a way to manage them in a systematic way, to roll them up into a single worklist.

BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.
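The single consolidated worklist Kobielus envisions only works if every back-end engine exposes its tasks in a common shape, which is precisely what WS-Human Task aims to standardize. A toy illustration of the roll-up, with engine, task, and field names invented:

```python
def unified_worklist(engines):
    """Merge per-engine task lists into one view, sorted by due date."""
    merged = [task for engine in engines for task in engine["tasks"]]
    return sorted(merged, key=lambda t: t["due"])

# Two hypothetical engines, each exposing tasks in the same shape.
hr = {"name": "hr-bpm", "tasks": [{"name": "approve-leave", "due": 2}]}
fin = {"name": "finance-bpm", "tasks": [{"name": "sign-invoice", "due": 1}]}

print([t["name"] for t in unified_worklist([hr, fin])])
# ['sign-invoice', 'approve-leave']
```

The merge itself is trivial; the hard part, and the point of the standard, is getting every engine to agree on that common task shape in the first place.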

That's why I agree with Mike that it's critically important that the leading BPM and workflow vendors get on board with this standard. In many ways, I see BPEL4People as having a similar aim to business intelligence in general. Where business intelligence environments are geared toward providing a single view of all business metrics, BPEL4People is trying to provide a single view of all the business processes that you either participate in or that you might manage.

Process steward

A term that I have batted around -- I don't think it's really gained any currency -- is the notion of a process steward, somebody whose job it is to define, monitor, track, and optimize business processes to achieve greater productivity and agility for the business.

What Mike was getting at, which was really interesting, is the fact that you want a human workflow environment that not only wraps up all of your tasks in a single worklist, regardless of the back-end execution engine. You also want the ability, not only for the end user but especially for the process steward, to begin to do what-if analysis in terms of re-engineering. They may have jurisdiction over several processes and have a single dashboard, as it were, looking at the current state and the dependencies of the various workflows they are responsible for.

This is critically important for SOA, where SOA applications for human workflows are at the very core of the application.

Gardner: JP, do you agree with me on this two-way street, where the users, the people who are actually doing the work, feel like they are empowered at some level to contribute back into refinement? It seems to me that otherwise workers tend to say, "Okay, I can't have any say in this process. I don't agree with it. Basically, I do an end run around it. I'm going to find ways to do my work that suits me and my productivity." Then, that value and intelligence is lost and doesn't ever make it back into the automated workflow. How important from your perspective is this two-way street capability?

Morgenthal: I'm going to answer that, but I'd like to take a step back, if I could, to answer the business problem. Interestingly enough, I've been working on and researching this particular problem for the past few months. One interesting aspect from the business side is that this has been looked at for quite a while by the business, but hasn't fully been identified and ferreted out as a niche.

One key term that applies here I have found only in government. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.

I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.

That's what you're talking about -- controlling what's happening in that black hole. It ties into the fact that humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.

Take the example that was mentioned earlier, where you have this thing that happens, somebody does something, and then something else. The next step is going to analyze what that step does. The chances are that's related to some sort of content, probably semi-structured or maybe even unstructured content, something like a negotiation over what date something will occur. It's often human based, but when that date locks, something else will trigger, maybe the release of a document, or an invoice, or something out of an automated system.

So, you have these ongoing ad hoc processes that occur in business every day and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back, and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint after a collaborative activity has taken place, is possible today, but it is very difficult. I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools.

Solid foundation

One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.

They'll eventually get there. What you see today from these BPM and workflow management tools is really around enterprise content management. "Jim approved this, so now Sally can go buy her ticket." Well, whoopie do. I could have done that with Ruby code in about ten minutes.

Gardner: It tends to follow a document trail rather than a process trail, right?

Morgenthal: Exactly. So, is BPEL4People, from a standards perspective, a standardized route for suspense tracking? All I'm controlling is going into the black hole and coming out of the black hole. Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.

Rowley: Actually, it does. WS-Human Task does talk about how you control what's in the black hole -- what happens to a task and what kinds of things can happen to a task while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint, and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.

The Microsoft people are giving us demonstrations of SharePoint, and we can envision, as an industry, as a bunch of vendors, the possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.

Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.

Gardner: Mike, doesn't governance come into play in this as well? If we want to reach that proper balance between allowing the ad hoc and the worker-level inputs into the system, and controlling risk, security, compliance, and runaway complexity, aren't policies and governance engines designed to try to produce that balance and maintain it?

Morgenthal: Before he answers, Dana, I have one clarification on your question. "Ad hoc" is going to occur, whether you allow it to occur or not. You've got the right question: How can the business attain that governance?

Gardner: Okay.

Rowley: There is governance over a number of things. There's governance that's essentially authorization for individual operations or tasks: who can change what documents once they've been signed? Who can sign? Who can modify what? That's at the level of an individual task.

Then there's also who can make a formal change to the process, as opposed to ad-hoc changes, where people go in and collaborate out of band, whether you tell them they can or not. But, in the formal process, who is allowed to do that? One nice thing about a BPM engine is that you have the ability to make authorization decisions over these various aspects of the business process.
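The two layers of governance Rowley distinguishes, authorization over individual task actions versus authorization over changes to the process itself, can be sketched as a simple permission table. The roles and actions here are invented for illustration, not drawn from either specification:

```python
# Who may do what: task-level actions and the process-level change action.
PERMISSIONS = {
    "sign-document": {"manager", "director"},
    "modify-document": {"analyst", "manager"},
    "change-process": {"process-steward"},  # formal process changes only
}

def authorized(role, action):
    """True if the given role is allowed to perform the action."""
    return role in PERMISSIONS.get(action, set())

print(authorized("manager", "sign-document"))   # True
print(authorized("analyst", "change-process"))  # False
```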

Gardner: This strikes me as hugely important, particularly now in our economy. This is really the nub up against which productivity ends up getting hamstrung or caught up. If we're looking to do transformation level-benefits and bring business requirements and outcomes into alignment with IT, this is the real issue and it happens at so many different levels.

I can even see this progressing now towards complex event processing (CEP), where we want to start doing that level of high-scale and high-volume complex events across domains and organizational boundaries. But, again, we're going to bring people into that as well and reflect it both ways. Jim Kobielus, do you agree that this is hugely important and yet probably doesn't get a lot of attention?

Kobielus: The CEP angle?

Need for interactivity

Gardner: No, the overall issue: if we can get transformational change and productivity gains that help make the business and financial case for investing in things like SOA and CEP, then the interactivity between the tactile, human side and the automated, systems side needs to develop further.

Kobielus: That's a big question. Let me just break it down to its components. First, with CEP we're talking about real time. In many ways, it's often regarded as a subset of real-time business intelligence, where you have the consolidation, filtering, and aggregation of events from various sources being fed into a dashboard or to applications in which rules are triggered in real time and stuff happens.

In a broader sense, if you look at what's going on in a workflow environment, it's simply a collection of events, both those events that involve human decision makers and those events that involve automated decision agents and what not.

Looking at the fact that BPEL and BPEL4People are now two OASIS standards that have roughly equal standing is important. It reflects the fact that in an SOA, underlying all the interactions, all the different integration approaches, you have this big bus of events that are happening and firing all over the board. It's important to have a common orchestration and workflow framework within which both the actions of human beings and the actions of other decision agents can be coordinated and tracked in some unified way.

In terms of driving home the SOA value proposition, I'm not so sure that the event-driven architecture is so essential to most SOA projects, Dana, and so it's not clear to me that there is really a strong CEP component here. Fundamentally, when we're talking about workflows, we're talking about more time lags and asynchronous interactions. So, the events angle on it is sort of secondary.

Gardner: Let me take that back to Mike Rowley. I'm looking for a unified theory here that ties together some of what we have been talking about at the people process level with some of this other, larger event bus as Jim described at that more automated level. Are they related, or are they too abstract from one another?

Rowley: No, they're related. It's funny. I bought into everything that Jim was just saying, except for the very end, where he said that it's not really relevant. A workflow system or a business process is essentially an event-based system. CEP is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.

You need to be able to look across a wide variety of business processes, a wide variety of documents, or a wide variety of sources, and look for averages, aggregations, sums, and joins over these various things, to discover a situation where you need to automatically kick off new work. New work is a task or a business process.

What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.

Eventing infrastructure

Kobielus: Exactly. The alerts and notifications are inherent in pretty much any workflow environment. You're quite right. That's an eventing infrastructure, and that's an essential component. I agree with you. I think the worklist can be conceptualized as an event dashboard with events relevant to one decision agent.

Rowley: It's more than just alerts and notifications. Any BPM can look for some threshold and give somebody a notice if some threshold has been exceeded. This is about doing things like joining over event streams or aggregating over event streams, the sorts of things that the general-purpose CEP capabilities are important for.
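The distinction Rowley draws between a simple threshold alert and genuine CEP can be illustrated with a sliding-window aggregation: no single event needs to cross the limit for the aggregate over the stream to do so. A minimal sketch, with the window size, limit, and event values all invented:

```python
from collections import deque

class WindowAverage:
    """Aggregate a stream: fire when the window average exceeds a limit."""
    def __init__(self, size, limit):
        self.window = deque(maxlen=size)  # sliding window of recent events
        self.limit = limit

    def on_event(self, value):
        self.window.append(value)
        # A decision over the aggregated stream, not over the single event.
        return sum(self.window) / len(self.window) > self.limit

cep = WindowAverage(size=3, limit=100)
results = [cep.on_event(v) for v in [50, 80, 90, 200, 300]]
print(results)  # [False, False, False, True, True]
```

A True result is the point where, in Rowley's terms, the engine would automatically kick off new work, a task or a business process, rather than waiting for somebody to notice the condition by hand.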

Gardner: JP, do you agree that we have some commonality here between CEP and its goals and value, and what we are talking about more at the human tactile workflow level?

Morgenthal: From my experience, what I've been looking at with regard to this is what I'm calling "business activity coordination." I think there is important data to be teased out after the fact about how certain processes are running in organizations. When companies talk about waste and reengineering processes, a lot of what they don't understand about processes, the reasons why they never end up changing, is because these ad-hoc areas are not well understood.

Some aspects of CEP could be helpful, if you could tag this stuff going on in that black hole in such a way that you could peer into the black hole. The issue with not being able to see in the black hole is not technical, though. It's human.

Most often, these things are distributed tasks. It's not like a process that's happening inside of accounting, where Sally walks over to Joe and hands him a particular invoice, and says, "Oh look, we could have just made that electronic." It's something leaving this division and going into that division, or it's going from this department to that department to that department. There is no stakeholder to own that process across all those departments, and data gets lost.

You're not going to find that with CEP, because there are no automation tags at each one of those milestones. It could be useful for a postmortem and to reengineer after the fact, but somebody has got to get a handle on the fact that there is stuff happening in the black hole, and automating in the black hole has to get started.

Kobielus: I've got a slightly better and terser answer than the one I gave a moment ago. A concept that's in BPM is business activity monitoring (BAM), essentially a dashboard of process metrics, generally presented to a manager or a steward. In human workflow, what is the equivalent of BAM -- being able to view in real time the running status of a given activity or process?

Gardner: There are also incentives, how you compensate people, reward them, and steer them to behaviors, right?

Morgenthal: On the dashboard, it's like a remedy when you have operations and trouble tickets: how quickly are those trouble tickets being responded to? It doesn't work. I'll tell you a funny example, which everyone out there is going to get a kick out of. At Sears, when you pick up stuff after buying something big in the store, they have this monitor, a big flat screen, with a list of where you are in the process after you scan your receipt. It shows you how long you're waiting.

What happens is the guy has learned how to overrun the system. He comes out, collects your ticket, and you are still sitting there for 30 minutes, but the clock has stopped on the screen. All of a sudden, behind you, is the thing that says, "We have 99.9 percent response rate. You never wait more than two minutes." Of course not. That guy took my ticket at 1 minute and 53 seconds and let me sit there for 30 minutes until my product came out.

Gardner: I think we're looking for the best of both worlds. We want the best of what systems automation, documentation, and repeatable processes can do, but we also need the exception management that only a person can do, and we have all experienced how this can work or not work, particularly in a help desk situation.

Maybe you've had the experience where you call up a help desk and the person says, "Well, I'd like to help you with that, but my process doesn't allow for it," or "We have no response for that particular situation, so I will have to go back to my supervisor," versus someone who says, "I've got a good process, but I can also work within that process at an exceptional level," and then perhaps bake that back into the process. Back to Mike Rowley.

CEP is core

Kobielus: Actually, Dana, I haven't finished my response. I just want to tie it to CEP. CEP, event processing, is quite often a core component of BAM. BAM is basically the dashboard to aggregate events relevant to a given business process. In a human workflow, what is the equivalent of CEP and BAM? To some degree, it's social networks like Facebook, LinkedIn, or whatever, in the sense that I participate as a human being in a process that involves other human beings, who form a community -- my work group or just the workflow in which I'm involved.

How do I get a quick roll-up of the status of this process or project in which I am just one participant? Well, the whole notion of a social network is that I can go there right away and determine what everybody is doing or where everybody else is in this overall process. Shouldn't that social network be fed by real-time events, so I can know up to the second what Jean is doing, what Joe is doing, what Bob is doing, within the context of this overall workflow in which I am also involved?

So, CEP and BAM relate to social networks, and that's the way that human beings can orient themselves inside these workflows, coordinate, and enable that lateral, side-to-side, real-time connection among human beings that's absolutely essential to getting stuff done in the real world. Then, you don't have to rely simply on the clunky, asynchronous, back-and-forth message passing that we typically associate with workflows.

Gardner: Mike Rowley, we have a new variable in this, which is the social networking and the ability for people to come up with efficient means for finding a consensus or determining a need or want that hadn't been easily understood before. Is there a way of leveraging what we do within these social networks in a business process environment?

Rowley: Yes. Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility.

I have a slight quibble in that I would say that some of CEP is really oriented around automatic reaction to some sort of an event condition, rather than a human reaction. If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just a terminology. Either way, events and visibility are really critical.

Gardner: We can certainly go into the whole kumbaya aspect of how this could all be wonderful and help solve the world's ills, but there is the interoperability issue that we need to come back to. As you were mentioning, there are a lot of vendors involved. There is a tendency for businesses to try to take as much of a role as they can with their platforms and tools. But, in order for the larger values that we are discussing to take place, we need to have the higher level of interoperability.

Realistically, Mike, from your perspective in working through OASIS, how well do the vendors recognize the need to give a little ground in order to get a higher value and economic and productivity payback?

Rowley: There seems to be a real priority given to getting this thing done and getting it to be effective. The technologists involved in this effort understand that if we do this well, everybody will benefit. The whole market will grow tremendously, because people will see that this is an industry-wide technology, not a proprietary one.

Active Endpoints is really at the forefront of having an implementation of BPEL4People in the user's hands, and so we're able to come to the table with very specific feedback on the specs, saying, "We need to make these changes to the coordination protocols," or "We may need to make these changes to the API," because it doesn't work for this, that, or the other reason. What we haven't seen is people pushing back in ways that would imply they just want to do things their own way.

Gardner: With all due respect, I know Active Endpoints is aggressive in this, but a company of your size isn't too likely to sway an entire industry quite yet. What about partnerships? People aren't pushing back, but how many people are putting wind in your sails as well?

Wholehearted adoption

Rowley: That's exactly what they're doing. They're basically adopting it wholeheartedly. We have had, I would say, a disproportionate impact on these specs, primarily because the people involved in them see the technical arguments as being valid. Technical arguments that come from experience tend to be the best ones, and people jump on.

Gardner: How about the professional services firms, systems integrators, and people like the McKinseys, who are organizational-management focused? Wouldn't this make a great deal of sense for them? If you have a good strategic view as a vendor, you say, "Yes, we'll grow the pie. We'll all benefit." But there is another whole class of consultants, professional services firms, and integrators that must clearly see the benefit of this without any need to maintain a position on a product or technology set.

Rowley: Through the standards effort, we haven't seen very much involvement by systems integrators. We have seen integrators that really appreciate the value of having a standard, knowing that if they invest in learning the technology and developing a framework, they're not stuck.

Integrators often have their own framework that they take from one engagement to the next. If they build it on top of BPEL4People and WS-HumanTask, they get substantial investment protection, so that they aren't stuck, no matter which vendor they pick. Right now, in our case, they pick Active Endpoints, because we have the earliest version.

Gardner: The question, JP, is that we've been hearing how the role of systems integrators and consultants is important in evangelizing and implementing these processes, and in helping with interoperability across the business, the human, and the systems dimensions. Do you see yourself as an evangelist, and why wouldn't other consultants also jump on the bandwagon?

Morgenthal: Well, I do take on that role of getting out there to help advance the industry. A lot of systems integrators, though, are stuck dealing with day-to-day issues for clients. Their role is not to drive new things as much as it is to respond to client needs within the prevailing model.

Gardner: You've hit on something. Whose role is it? As Jim was saying, BAM makes sense at some level, but whose role is it to come in and orchestrate and manage efficiency and processes across these boundaries?

Morgenthal: Within the organization?

Gardner: Yes.

Morgenthal: It's the management, the internal management. It's their job to own these processes.

Gardner: So it's the operating officer?

Morgenthal: The COO should drive this stuff. I haven't yet seen a COO who takes these things by the hand and actually drives them through.

Gardner: Mike Rowley, who do you sell your Active Endpoints orchestration tools to?

Rowley: Primarily to end users, to enterprises, but we also sell to system integrators sometimes.

Gardner: But who inside of those organizations tends to be the inception point?

Rowley: Department level people who want to get work done. They want to develop an app or series of apps that help their users be productive.

Kobielus: It hasn't changed. I've written two books on workflow over the past 12 years, and workflow solutions are always deployed for tactical needs. The notion that companies are itching to establish a general-purpose workflow orchestration infrastructure at the core of their SOA, which they can then leverage and extend for each new application that comes along, isn't how it works in the real world. I think Mike has laid it out there.

As far as the notion that companies are looking to federate their existing investments -- whether Oracle, IBM, SAP, or other workflow environments -- by wrapping them all in a common SOA standards framework and making them interoperable, I don't see any real push in the corporate world to do that.

Morgenthal: One thing I really like about SOA is that if you have an overarching SOA mandate in the enterprise, it should enable lower-level, department-level freedom, as long as you fit in by providing and consuming services.

BPM doesn't have to be an enterprise-wide decision, because that just gets clogged; too many decision-makers have to sign off. If you get something like BPEL4People, it's oriented not just around workflow in the sense of the older workflow systems, but workflow in a way that fits into a SOA, so that you can fit into that larger initiative without having to get overall approval.

Gardner: We're going to have to leave it there. We are about out of time. We've been discussing the issue of BPEL4People and better workflow productivity, trying to join systems and advances in automation with what works in the field, and somehow coordinating the two on a lifecycle adoption pattern. I'd like to thank our guests. We've been discussing this with Mike Rowley, director of technology and strategy at Active Endpoints. I appreciate your input, Mike.

Rowley: Thank you.

Gardner: We have also been joined by Jim Kobielus, senior analyst at Forrester Research; thank you Jim.

Kobielus: Yeah, thanks Dana, always a pleasure.

Gardner: Lastly, JP Morgenthal, independent analyst and IT consultant. You can be reached at Is that the right address, JP?

Morgenthal: That's the right address, thank you, Dana.

Gardner: I'm Dana Gardner, principal analyst at Interarbor Solutions. I would like to thank our sponsors for today's podcast, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as TIBCO Software for its support.

Listen to the podcast. Download the podcast. Find it on iTunes and Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, February 17, 2009

Cloud Computing, Enterprise Architecture Align to Make Each More Useful to Other, Say Experts

Transcript of a podcast with industry practitioners and thought leaders at The Open Group's Enterprise Cloud Computing Conference in San Diego.

Listen to the podcast. Download the podcast. Find it on iTunes and Learn more. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you're listening to BriefingsDirect. Today, we welcome our listeners to a sponsored podcast discussion coming to you from The Open Group's Enterprise Cloud Computing Conference in San Diego, February, 2009.

Our topic for this podcast, part of a series on events and major topics at the conference, centers on cloud computing and its intersection with enterprise architecture. You might consider this a discussion about real-world cloud computing, because this subject has been often discussed across a wide variety of topics, with many different claims, and perhaps a large degree of hype.

We're going to be talking with a few folks who will bring cloud and its potential into alignment with what real enterprises do and will be expecting to do, in terms of savings and productivity in the coming years.

Here to help us sort through cloud computing and enterprise architecture are Lauren States, vice president in IBM's Software Group; Russ Daniels, vice president and CTO of Cloud Services Strategy at Hewlett-Packard (HP); and David Linthicum, founder of Blue Mountain Labs. Welcome to you all.

There's an early-adopter benefit in some technologies, and I expect that might be the case with cloud computing as well. But, in order for us to assess where cloud computing makes the most sense, I think it's important to establish what problem we're trying to solve.

Why don't we start with you, Dave? What are the IT problems that cloud computing is designed for, or is being hyped to solve?

Dave Linthicum: Thank you very much, Dana. Cloud computing is really about sharing resources. If you get down to the essence of the value of cloud computing, it's about the ability to leverage resources much more effectively than we did in the past. So, number one, it's really designed to simplify the architectures that we are dealing with.

Most enterprises out there have very complex, convoluted, and inefficient architectures. Cloud provides us with the ability to change those architectures as business needs change, and to expand and contract them as the business requires.

Gardner: What problem are we solving from your perspective, Russ?

Russ Daniels: Hi, Dana. For most enterprises today, most of what they are really interested in is exactly what was described. It's a question of, "How can I source infrastructure in a way that's more flexible, that allows me to move more quickly, and that allows me potentially to have more variable costs?"

Maybe I can provision internally for my average demand rather than peak demand, and then be able to take advantage of external capacity to handle those peaks more cost effectively.
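Russ's provision-for-average, burst-for-peak idea can be sketched in a few lines of Python. The demand figures and the simple averaging rule below are hypothetical, purely for illustration:

```python
# Provision internal capacity for average demand; overflow to external
# (cloud) capacity only in the periods where demand exceeds it.
demand = [40, 55, 48, 120, 60, 45, 200, 50]  # hypothetical load per period

internal = sum(demand) / len(demand)            # provision for the average
burst = [max(0, d - internal) for d in demand]  # overflow sent to the cloud

peak = max(demand)
print(f"internal capacity {internal:.0f} vs peak provisioning {peak}")
print(f"external burst needed in {sum(1 for b in burst if b > 0)}"
      f" of {len(demand)} periods")
```

The point of the sketch: sizing for the average (here, about 77 units instead of the 200-unit peak) leaves only a couple of periods that need externally sourced capacity.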

We think all that's really quite important. There's something else that's going on that people tend to talk about as cloud, which has different implications and takes advantage of some of that same flexible infrastructure, but allows us to go after different problems.

Most enterprises today are trying to figure out, "How can I improve my efficiency? Rather than having capacity dedicated to each of the application workloads that I need to deliver to the business, can I flexibly bind the pools of resources, whether they are in my data center or in somebody else's?"

Gardner: Okay. Let's rephrase the question slightly for you, Lauren. What business problems are we solving with cloud computing?

Agile response

Lauren States: Thank you very much, Dana. I agree with what both Dave and Russ said. The business problem that we're trying to solve is how to make IT respond to the business in a more agile way. The opportunity we have here is to think about how to industrialize IT and create an IT services supply chain.

The combination of the technologies available today, the approaches that we're using in the underlying architecture, plus our collective experience gives us a chance to use cloud computing to realize the value of IT to an organization. We can stop having these conversations about what additional cost you're bringing in, why IT is separate, and why it's such a burden, and really integrate IT with the business.

Gardner: As I mentioned, we also want to put this in the context of enterprise architecture. For organizations that see some potential in the emerging cloud models, and that are looking at new ways to develop software, source their services, and decide where those services are located and deployed, there probably also needs to be some preparation. Jumping in too soon might have a downside as well. And, given that we're in a tough economy, economics is very much top of mind.

When it comes to enterprise architecture, what do you need to do or have in place, in order to put yourself at an advantage or be in a good position to take advantage of cloud? Let's start with you, Dave.

Linthicum: Number one, you need to assess your existing architecture. Cloud computing is not going to be a mechanism to fix architecture; it's a solution pattern within an architecture. So, you need to do a self-assessment of what's working and what's not working within your own enterprise, before you start tossing things outside the firewall onto a platform in the cloud.

Number two, once you do that, you need to have a good data-level, process-level, and service-level understanding of the domain. Then, try to figure out exactly which processes, services, and information are good candidates for cloud computing.

One of the things I've found in implementing this with my clients is that not everything is applicable to cloud computing. In fact, 50 percent of the applications I look at are not good candidates for cloud. You need to consider that in the context of the hype.

Gardner: Lauren, from your perspective, what organizational management, technical underpinnings, and foundations might put you on a better position to leverage cloud?

States: Just building on what Dave said, I have a couple of thoughts. First, I completely agree that you have to have an aspirational view of where you're trying to go. And you have to have a good understanding of your current environment, including simple things like knowing everything that's in it and how those things relate to each other. Lay out the architecture, and develop the roadmap and the steps you need to take to achieve cloud computing.

The other aspect that's really important is the organizational governance and culture part of it, which is true for anything. It's particularly true for us in IT, because sometimes we see the promise of the technology, but we forget about people.

In clients I've been working with, there have been discussions around, "How does this affect operations? Can we change processes? What about the workflows? Will people accept the changes in their jobs? Will the organization be able to absorb the technology?"

Enterprise architecture is robust enough to combine not only the technology but also the business processes, best practices, and methodologies required to take this journey and take advantage of what technology has to offer.

The right environment

Gardner: Let's flip the question over a little bit at this point and look at what would not be a good environment in which to start embarking on cloud. Is there something you should not do, or something that, if you're lacking it, should make you leery of using clouds? Russ?

Daniels: It's very easy to start with technology and then try to view the technology itself as a solution. It's probably not the best place to start. It's a whole lot more useful if you start with the business concern. What are you trying to accomplish for the business? Then, select from the various models the best way to meet those kinds of needs.

When you think about the concept of, "I want to be able to get the economies of the cloud -- there is this new model that allows me to deliver compute capacity at much lower cost," we think that it's important to understand where those economics really come from and what underlies them. It's not simply that you can pay for infrastructure on demand, but it has a lot to do with the way the software workload itself is designed.

There's a huge economic value you can get, if the software can take advantage of horizontal scaling -- if you can add compute capacity easily in a commodity environment to be able to meet demand, and then remove the capacity and use it for another purpose when the demand subsides.

This is a really important problem. We know how to do that well for certain workloads. Search is a great example; it scales horizontally very effectively. The reason is that search is pretty tolerant of stale data. If some of the information on some of the nodes is slightly out of date, it doesn't really matter. You'll still get the right answer.

If you look at other types of workloads, where high degrees of transactionality are critical, it's different. When you take an item out of inventory, you really only get to do that once. When you try to scale those things horizontally, you have real issues: the possibility of a node failure, or of causing a lock not to be released, creates some nasty backout operational process that has to be implemented correctly by your IT organization for everything to work.
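Russ's inventory example is the classic lost-update problem. The Python sketch below is a hypothetical illustration, not code from any product discussed here; it shows why an unguarded read-modify-write can sell the last item twice, and why serializing the operation, the very thing that limits horizontal scaling, prevents it:

```python
import threading

class NaiveInventory:
    """Each node does an unguarded read-modify-write on shared stock."""
    def __init__(self, stock):
        self.stock = stock

class LockedInventory:
    """A lock serializes the decrement, at the cost of horizontal scaling."""
    def __init__(self, stock):
        self.stock = stock
        self._lock = threading.Lock()
    def take(self):
        with self._lock:
            if self.stock > 0:
                self.stock -= 1
                return True
            return False

# Interleaving two "nodes" by hand exposes the lost update: both read
# stock == 1 before either write lands, so the last item sells twice.
inv = NaiveInventory(1)
seen_a = inv.stock        # node A reads 1
seen_b = inv.stock        # node B reads 1 (stale once A writes)
inv.stock = seen_a - 1    # node A records the sale
inv.stock = seen_b - 1    # node B records the same sale
oversold = (seen_a > 0) and (seen_b > 0)  # two customers "bought" one item

safe = LockedInventory(1)
first, second = safe.take(), safe.take()  # only one take can succeed
print(oversold, first, second)
```

In a real distributed system the lock becomes a coordination protocol or a transactional store, which is exactly the coupling that makes these workloads harder to scale out than search.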

It's a balance between the problems we're trying to solve and how well these particular architectural patterns match up to them. Every IT organization has to keep that in mind.

Gardner: While there's been quite a bit of hype around cloud, there is also a fair amount of naysaying about it here at the TOGAF 9 launch and the practitioners conference for The Open Group.

I've spoken to several people who really don't have a lot of favorable impressions of cloud. They seem to think that this is a way of dodging the IT department and perhaps bringing more complication and a lack of governance, which could then spin out of control and make things even worse.

So, what are some best practices that we could establish at this early juncture of how to approach cloud and bring it into some alignment not only with the business, but with the existing IT services and infrastructure? I guess this is our architecture question. Dave?

Set the policy

Linthicum: The first thing you need to do is create, publish, and widely distribute a policy on cloud computing. Someone needs to figure out what it is, the value it's going to have for the particular enterprise, and the vision, strategy, or approach they need to leverage to get there.

The next thing you do is publish policies around cloud computing. Lots of my clients are building what I call rogue clouds. In other words, without any kind of sponsorship from the IT department, they're going out there to Google App Engine. They're building these huge Python applications and deploying them as a mechanism to solve some kind of a tactical business need that they have.

Well, they didn't factor in maintenance, and right now, they're going back to the IT group asking for forgiveness and trying to incorporate that application into the infrastructure. Of course, they don't do Python in IT. They have security issues around all kinds of things, and the application ends up going away. All that effort was for naught.

You need to work with your corporate infrastructure and you need to work under the domain of corporate governance. You need to understand the common policy and the common strategy that the corporation has and adhere to it. That's how you move to cloud computing.

Gardner: How do we know if companies are doing this right? Are there yet any established milestones or best practices? Clearly, we've seen that with other technology adoptions, we have certain signals that say, "Aha, we're doing something wrong. We need to reevaluate."

Any ideas, Lauren, about how companies would know whether they are doing cloud properly? What should they be getting in return?

States: That's a great question, Dana. Let me just take it from a couple perspectives.

First, we've looked at our own IT transformation within IBM to try to discover what were the activities we did to make sure that we could take out cost and reduce complexity. We feel that looking at the financial aspects helps drive an organization to a common goal.

In our company, we took $4 billion out of our IT infrastructure over the past five years, and that's part of our strategy for our common centralized functions. There's nothing like achieving a specific target to make an organization focus.

Our initial feeling is that you really have to get your arms around virtualization, so you can take out the capital expense and then have the real hard discussions around standardization.

You can reduce the complexity of the application portfolios, reduce the administration and support costs, and take a very serious look at your service management capability, so that you can get at the operations and implement the policies that you described, Dave, and continue to make progress.

I don't think that there's any completely done use-case out there that we can all look to and say, "Oh, that's what it looks like." It's starting to get clearer as we get more experienced. But, as I said, you need a specific target.

Our target was cost. Other organizations have other targets, like shared services or creation of new business models. You can get the whole organization clear and managed to that, and, as in our case, have some of these items be part of the executive compensation structure. Then, you have a better chance of achieving what the business is looking to do.

Gardner: I'm going to take the same question to you, Russ. What should companies be looking for if they do cloud properly? What are the returns?

Key questions

Daniels: This really starts with a couple of key questions. First, why do you have an IT function in your enterprise? Our answer is that you need someone responsible for sourcing and delivering services in a form that's consistent with the business's needs.

The cloud just represents one more sourcing opportunity. It's one more way to get services, and you have to think of it in the context of the requirements the business has for those services. What value do they represent? Where is the cloud an appropriate way to realize those benefits? Where is it the best answer?

It starts with that. To be able to answer that question is a significant issue for enterprise architecture. It means you have to have a pretty good model of what the enterprise is about -- how does it work, what are the key processes, what are the key concerns? That picture, that design of the enterprise itself, helps you make better choices about the appropriate way to source and deliver services.

There's a particular class of services, needs for the business, that when you try to address them in the traditional application-centric models, many of those projects are too expensive to start or they tend to be so complex that they fail. Those are the ones where it's particularly worthwhile to consider, "Could I do these more effectively, with a higher value to the business and with better results, if I were to shift to a cloud-based approach, rather than a traditional IT delivery model?"

It's really a question of whether there are things the business needs that, every time you try to do them in the traditional way, fail, underdeliver, are too slow, or don't satisfy the real business need. Those are the ones where it's worthwhile taking a look and saying, "What if we were to use cloud to do them?"

Gardner: Back to you, Dave. We've heard quite a bit about private clouds, or on-premises clouds. On one hand, what's interesting about clouds is that you have a one-size-fits-all model -- a common set of services or a common infrastructure -- but a lot of companies are interested in customization and differentiation, and they also need to integrate with what's already running under the hood inside their organizations.

Tell us how, in your practice, you see the role of the private cloud emerging, and particularly how that offsets this notion that it's all just one big common-denominator cloud.

Linthicum: The value of private clouds is that you can take what's best about cloud computing and implement it behind your firewall. You get around the control and security issues people deal with -- and also the not-invented-here attitude out there.

The difficulty that people are running into right now is trying to figure out how to leverage cloud-computing environments when their existing architectures are so tightly coupled. They're coming to the conclusion that it's very difficult to do that. They can't use Amazon, Google, or other cloud-based services, because the information is so bound to the behaviors inside those systems and the systems are so tightly coupled. It's very difficult to decouple pieces of them and put them in the cloud. So, private clouds are an option for that.

You provide that on infrastructure that's shareable. You can expand it as you need it and, as Russ mentioned earlier, give cycles to the applications that need them and take cycles away from the applications that don't. You end up with an architecture that's much more effective and efficient.

It also syncs up very well with the notion of service-oriented architecture (SOA) and is additive to an enterprise architecture and not necessarily negatively disruptive.

Gardner: Do you use your traditional enterprise architecture principles and skills when you construct your cloud on-premises or does it require something different?

It's enterprise architecture

Linthicum: You do. At the end of the day, it's enterprise architecture. You're doing enterprise architecture, and you're doing the sub-pattern of SOA. You're using cloud computing, specifically private clouds, as an end-state solution. So, it's nothing more than an instance of a solution.

That doesn't diminish its value, but you get there through requirements, planning, and governance -- all the things that surround enterprise architecture -- and you arrive at the end state. Cloud computing is one of the technologies in your arsenal for solving the problem, and that's how you leverage it.

Gardner: Lauren, in your presentation earlier today, you described some economic benefits that IBM is enjoying, or beginning to enjoy, as a result of some cloud activities. Tell us about the return-on-investment (ROI) equation. How substantial is it, and is it so enticing, particularly in today's tough economy where every dollar counts, that companies should be moving toward this cloud model quickly?

States: One of our internal clouds -- our technology adoption program, which provides compute resources and services to our technical community so that they can innovate -- has had an unbelievable ROI: an 83 percent reduction in cost and a payback period of less than 90 days.

We're now calibrating this with other clients who are typically starting with their application test and development workloads, which are good environments because there is a lot of efficiency to be had there. They can experiment with elasticity of capacity, and it's not production, so it doesn't carry the same risk.

Gardner: Let's just unpack those numbers a little bit. Are you talking about an on-premises cloud or grid that IBM has put together? Or is this leveraging outside third parties -- a hybrid? With what were you able to achieve those very impressive feats?

States: This is an on-premises cloud. It’s at our data center in Southbury, Conn. There are three major levers for cost. First was virtualization. They virtualized the infrastructure. So, they cut down their hardware, software, and facilities cost.

They were able to put in significant automation, particularly around self-service requests for resources. We took out quite a bit of labor through automation, and that's what gave the substantial savings -- particularly the labor cost, from roughly 14 or 15 administrators down to two or three. That's where we saved the cost.
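Lauren's numbers imply straightforward payback arithmetic. In the sketch below, the dollar figures are invented for illustration; only the 83 percent reduction and the roughly 90-day payback come from the discussion:

```python
def payback_days(one_time_cost, monthly_savings):
    """Days until cumulative savings cover the up-front investment."""
    return one_time_cost / (monthly_savings / 30.0)

old_run_rate = 100_000                    # hypothetical monthly cost before
new_run_rate = old_run_rate * (1 - 0.83)  # after an 83 percent reduction
savings = old_run_rate - new_run_rate     # about 83,000 per month

# A hypothetical up-front virtualization/automation investment that pays
# back inside Lauren's 90-day window at this savings rate:
print(round(payback_days(240_000, savings)))
```

At an 83 percent reduction, any up-front investment smaller than roughly three months of the old run rate pays back within the quarter, which is why numbers like these get the C-suite's attention.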

Gardner: Russ Daniels, we have heard quite a bit from HP about transformation in IT, modernization, and consolidation. Do you see cloud as yet another facet of the larger topic, which is really IT transformation, and how big a piece of IT transformation will cloud end up being?

Daniels: It's very easy to get so excited about technologies that you forget about the fundamental challenge that every business faces around change management -- this concept of transformation.

Change management

Ultimately, if you want an organization to do something different than what it does, you have to take on the real work involved in that change management, getting people comfortable with doing things differently, moving out of their current comfort zones or current competencies, and learning new skills and new ways to do things. So, yeah, we think that that's a major component.

When we think about taking advantage of what is sometimes called a private cloud -- what we tend to think of as an internal infrastructure utility -- what we've discovered is that change-management concerns have been a real challenge: getting people comfortable that their workloads will be adequately secure and that their needs will be met when delivery is shared.

A lot of times the adoption of these technologies is slowed by the business' concern that they are going to end up at the end of the queue, rather than getting their fair share.

As you think about all of these opportunities, you have to source and deliver these services. It's critical that you build the right economic models and understand the trade-offs effectively.

If you have an internal shared capacity, you still, as a business, are taking on all of the fixed costs associated with the operation. It's different than if some third party is handling those fixed costs and you're only paying variable costs.

It's also true, though, that many times the least expensive way is to do it yourself, internally. In the same way, if you use a car 20 days a year, renting can be a real cost saver. If you use a car every day, it's typically better to just buy the car and take on the maintenance responsibilities, the insurance costs, and so on yourself, because if you're using the car a lot over the course of the year, the costs amortize much more effectively.
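Russ's car-rental analogy is a break-even calculation between a fixed cost of ownership and a variable cost per day of use. All the numbers below are hypothetical:

```python
def breakeven_days(fixed_annual_cost, rental_cost_per_day):
    """Days of use per year at which owning and renting cost the same."""
    return fixed_annual_cost / rental_cost_per_day

owning = 6_000   # amortized purchase + insurance + maintenance per year
renting = 60     # cost per rental day

days = breakeven_days(owning, renting)
print(f"renting wins below {days:.0f} days of use per year")

# The same arithmetic applies to capacity: pay-per-use cloud wins for
# occasional or spiky workloads; steady heavy use favors owned capacity.
```

The crossover point is just fixed cost divided by per-use cost, which is why the decision hinges on utilization rather than on the technology itself.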

Gardner: How does a cloud approach help organizations change more rapidly? There's some concern there that going to a cloud model, in this case a third-party cloud, might end up being another form of lock-in, and that you might lose agility. Public or private, what is it about a cloud model that makes your company more agile?

Daniels: Our view concerns where the real benefits -- the real significant cost savings -- can be gained. If you simply apply virtualization and automation technologies, you can get a significant reduction in cost, and self-service delivery can have a huge internal impact. But a much larger savings is possible if you can restructure the software itself so that it can be delivered and amortized across a much larger user base.

There is a class of workloads where you can see orders-of-magnitude decreases in cost, but it requires competencies, and it first requires ownership of the intellectual property. If you depend on some third party for the capability, you can't get those benefits until that third party does the work to realize them for you.

Very simply, the cloud represents new design opportunities, and the reason that enterprise architecture is so fundamental to the success of enterprises is the role that design plays in the success of the enterprise.

The cloud adds a new expressiveness, but imagining that the technology just makes everything better is silly. You really have to think about the problems you're trying to solve and where a design approach that exploits the cloud generates real benefit.

Gardner: The same question to you, Dave Linthicum. Public-private-hybrid: What is it about a cloud model that makes a company more responsive from a business outcomes perspective?

Key to agility

Linthicum: I don't think a cloud model inherently makes them more responsive. It's the fact that they're leveraging all kinds of technology, inclusive of cloud computing, as a mechanism to provide more agility to the enterprise.

In other words, if they're able to use external clouds to get applications up and running very quickly, with the security and performance requirements still in line, design that into their enterprise architecture, and leverage private clouds for virtualization and for sharing resources among the various entities within the organization, then they're able to share the cost. Then, they're going to be able to do IT better. That's what it's all about.

What we're looking to do is not necessarily reinvent or displace IT, or throw out the old legacy stuff and put in this new cloud stuff. We're looking to provide a layer of good architecture and good technology on top of the existing systems, as well as get back into the architecture, fix the things that need to be fixed, and provide good IT to address the business.

Gardner: There's an interesting confluence now with the harsh economic environment. We're looking at this cloud phenomenon largely as a cost benefit: yes, do IT better, but with significant cost savings, better utilization, perhaps flexibility in services, and even in how an IT organization runs itself.

Coming down to the end now. Do you agree, Lauren, that what's going to drive cloud into organizations and its use through a variety of models over the next couple of years is largely a function of cost?

States: Yes, cost will be a huge driver in this. Cost is a conversation that's very active in the C-suite. The conversations about cloud have re-established some of the conversations with lines of business, because they're curious about how they can take out cost and achieve the agility they're looking for.

But I'd also be mindful that there is an opportunity for us to drive innovation and economic growth with new business models, new businesses, new service deliveries, and new workloads. This will be something that large organizations look for, but it will unlock IT for many smaller organizations that don't have the resources within their organizations to provide these services to their constituents.

Gardner: Okay. Russ Daniels, same question. In an economic maelstrom, what are the economic drivers for cloud, and is that going to be the primary driver?

Daniels: I've not seen any time in the industry where the conversation between business and IT didn't have a significant cost component. Certainly, when the times become more difficult, that intensifies, but there's never a point at which that isn't an interesting question.

A few years ago, when Mark Hurd came in as our CEO, HP started to go through a very significant reduction in the cost of IT. Economic times were fine, but that was a very important focus.

A great opportunity

Cloud is relevant to that, but, as Lauren was saying, there is a great business opportunity as well. Every IT organization that's having those cost conversations would love to be able to have a value conversation -- would love to be able to talk about how technology can not just help control cost but can generate new business opportunities, open new markets, help the business gain share, improve the reach and relationship that it has with its customers, and differentiate it from its competitors.

We think that cloud is really very suitable for many of those kinds of concerns. The ability to understand better what your customers care about and to tailor your offerings to those is something the cloud is particularly well suited to do, and allows the business to have a different conversation with IT, and one that the IT organization yearns for.

Gardner: This is probably a question that would be good for an entire hour-long additional podcast, but Dave Linthicum, on this notion of additional business and revenue -- innovative processes that can drive new wealth creation -- what do you see as the top opportunities for using cloud in that regard, in creating new business?

Linthicum: Consulting companies are benefiting from it right now. They're getting a wealth of new business based on a new paradigm coming in. Lots of people are confused about how the paradigm should be used and they are building methodologies and those sorts of things.

The primary benefit that businesses will get from cloud will be the ability to leverage the network effect of the cloud-computing environment. In other words, they'll benefit if they're willing to engage infrastructure that's outside their firewall, that they don't control or host, and use it as a service -- in essence, rent it -- and then they're able to see some additional value that the Internet can bring, such as social networking and the ability to get analytical services.

I thought you put it well, saying that ultimately people are going to realize huge cost savings based on the ability to leverage what they have in a much more cost-effective way. That's really where things are going right now.

So, I think the consultants are going to make the additional money and I think the hardware and software vendors are going to make some money, even though cloud computing will displace some hardware and software.

People are retooling right now and actually buying stuff, especially cloud providers that are building infrastructure. Then, it will come down to the core benefits that are being built around the private clouds and the public clouds that are being leveraged out there.

Gardner: So, it's perhaps a win-win-win just at the time in the economy when we need that. We'll have a win perhaps in being able to further leverage existing resources and assets and architectural methods and processes, further reduce the overall operating costs as a result of cloud, and at the same time, conjure up new business opportunities and models and ways of driving income across ecologies of players in ways we hadn't before.

That's a fairly auspicious position for cloud computing, and that's perhaps why we are hearing so much about it nowadays.

I want to thank our panelists. We have been joined by Lauren States, vice president in the IBM Software Group; Russ Daniels, vice president and CTO cloud services strategy for Hewlett-Packard; and Dave Linthicum, founder of Blue Mountain Labs.

Our conversation comes to you today through the support of The Open Group from the 21st Enterprise Architecture Practitioners Conference and Enterprise Cloud Computing Conference in San Diego in February, 2009.

I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes and Learn more. Sponsor: The Open Group.

Transcript of a podcast with industry practitioners and thought leaders at The Open Group's Enterprise Cloud Computing Conference in San Diego, February, 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information

TOGAF 9 Commercial Licensing program information

Saturday, February 14, 2009

Effective Enterprise Security Begins and Ends With Architectural Best Practices Approach

Transcript of a podcast on security as architectural best practices, recorded at the first Security Practitioners Conference at The Open Group's 21st Enterprise Architecture Conference in San Diego, February 2009.

Listen to the podcast. Download the podcast. Find it on iTunes and Learn more. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we welcome our listeners to a sponsored podcast discussion coming to you from The Open Group's first Security Practitioners Conference in San Diego, the week of Feb. 2, 2009.

Our topic for this podcast, part of a series of events and coverage at this conference, centers on enterprise security and the intersection with enterprise architecture (EA). The goal is to bring a security understanding across more planning- and architectural-level activities, to make security pervasive -- and certainly not an afterthought.

The issue of security has become more important over time. As enterprises engage in more complex activities, particularly with a boundaryless environment -- which The Open Group upholds and tries to support in terms of management and planning -- security again becomes a paramount issue.

To help us understand more about security in the context of enterprise architecture, we're joined by Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management of Qualys, and Jim Hietala, vice president of security for The Open Group.

Let's start with you, Jim. Security now intersects with more elements of what information technology (IT) does, and there are more people responsible for it. From the perspective of The Open Group, why has bringing security into architecture been such a gradual transition or progression? Why wasn't it always part of architecture?

Jim Hietala: That's a good question, but probably predates my involvement with The Open Group. In TOGAF 9, the latest iteration of TOGAF that we announced this week, there is a whole chapter devoted to security, trying to get to the idea of building it in upfront, as opposed to tacking it on after the fact.

You've seen movement, certainly within The Open Group, in terms of TOGAF, and our enterprise architecture groups try to make that happen. It's a constant struggle that we've had in security -- the idea that functionality precedes security, and security has to be tacked on after the fact. We end up where we are today with the kind of security threats and environment that we have.

Gardner: Chenxi, we've seen the security officer emerge as a role in the past several years. Shouldn't everyone have, in a sense, the role of security officer as part of their job description?

Chenxi Wang: Everyone in the organization or every organization? My view is slightly different. I think that in the architecture group there should be somebody who is versed in security, and the security side of the house should have an active involvement in architecture design, which is what we are seeing as an emerging trend in a lot of organizations today.

Gardner: We're also facing a substantial economic downturn globally. Often, this accelerates issues around risk, change management, large numbers of people entering and leaving organizations, mergers and acquisitions, and provisioning of people off of applications and systems.

Kristin, perhaps you can give us a sense of why security might be more important in a downturn than when we were in a boom cycle?

New technologies

Kristin Lovejoy: There are a couple of things to think about. First of all, in a down economy, like we have today, a lot of organizations are adopting new technologies, such as Web 2.0, service-oriented architecture (SOA) style applications, and virtualization.

Why are they doing it? They are doing it because of the economy of scale that you can get from those technologies. The problem is that these new technologies don't necessarily have the same security constructs built in.

Take Web 2.0 and SOA-style composite applications, for example. The problem with composite applications is that, as we build them, we don't know the source of each widget. We don't know whether these applications have been built with good, secure design. In the long term, that becomes problematic for the organizations that use them.

It's the same with virtualization. There hasn't been a lot of thought put to what it means to secure a virtual system. There are not a lot of best practices out there. There are not a lot of industry standards we can adhere to. The IT general control frameworks don't even point to what you need to do from a virtualization perspective.

In a down economy, it's not simply the fact that we have to worry about privileged users and our employees, blah, blah, blah. We also have to worry about these new technologies that we're adopting to become more agile as a business.

Gardner: Nils, how do you view the intersection of what an enterprise architect needs to consider as they are planning and thinking about a more organized approach to IT and bringing security into that process?

Nils Puhlmann: Enterprise architecture is the cornerstone of making security simpler and therefore more effective. The more you can plan, simplify structures, and build in security from the get-go, the more bang you get for the buck.

It's just like building a house. If you don't think about security, you have to add it later, and that will be very expensive. If it's part of the original design, then the things you need to do to secure it at the end will be very minimal. Plus, any changes down the road will also be easier from a security point of view, because you built for it, designed for it, and most important, you're aware of what you have.

Most large enterprises today struggle even to know what architecture they have. In many cases, they don't even know what they have. The trend we see here with architecture and security moving closer together is a trend we have seen in software development as well. It was always an afterthought, and eventually somebody made a calculation and said, "This is really expensive, and we need to build it in."

Things like security and the software development lifecycle came up, and we are doing this now for architecture. Hopefully, we'll eventually do this for complex systems. Kristin mentioned Web 2.0. It's the same thing there. We have wonderful applications, and companies are moving to Facebook en masse, but it's a small company. The question is, was security built in, has anyone vetted that, or are we just repeating the same mistake we made so many times before?

A matter of process

Gardner: We see with security that it's not so much an issue of technology but really about process, follow through, policy determination and enforcement, and the means to do that.

Chenxi, when it comes to bringing security into a regulated provision, policy-driven process, it starts to sound like SOA. You'd have a repository, you'd have governance, and the ways in which services would be used or managed and policies applied to them. Is there actually an intersection between some of the concepts of architecture, SOA, and this larger strategic approach to security?

Wang: There is definitely some intersection. If you look at classic SOA, there is a certain interface, and you can specify what the API is like. If you think about a structured approach to security, it's also a set of policies you need to specify upfront, hopefully, and then a set of procedures by which you adhere to those policies.

It's very much like understanding the API and the parameters that go into using these APIs. I hadn't actually thought about this really nicely laid out analogy, Dana, but I think that's a quite good one.
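One way to picture the analogy Wang draws is a security policy declared upfront, the way an API contract is, with each call checked against that declaration at invocation time. This sketch is purely illustrative and not from the discussion; the operation names, roles, and `invoke` helper are all hypothetical:

```python
# Illustrative sketch of a security policy declared upfront, like an API
# contract. All names here (operations, roles, invoke) are hypothetical.

ACCESS_POLICY = {
    # operation -> roles allowed to invoke it, specified before deployment
    "read_report": {"analyst", "auditor"},
    "update_report": {"analyst"},
}

def invoke(operation: str, role: str) -> str:
    """Admit the call only if the declared policy permits this role."""
    allowed = ACCESS_POLICY.get(operation, set())
    if role not in allowed:
        raise PermissionError(f"{role} may not call {operation}")
    return f"{operation} executed for {role}"

print(invoke("read_report", "auditor"))
# invoke("update_report", "auditor") would raise PermissionError,
# because the upfront policy never granted auditors write access.
```

The point of the analogy is that, as with an API, consumers can reason about what is permitted without inspecting the implementation behind the interface.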

Gardner: I think we're talking about lifecycles and managing lifecycles and services. I keep seeing more solutions, shared services, and then actual business and IT services, all being managed in a similar way nowadays with repository and architecture.

Jim, this is your first security conference at The Open Group. It's also coinciding with a cloud computing conference. Is there an element now, with the "boundarylessness" of organizations and what your architectures have tried to provide in terms of managing those permeable boundaries and this added layer, or a model for the cloud? More succinctly, how do the cloud and security come together?

Hietala: That's one of the things we hope to figure out this week. There's a whole set of security issues related to cloud computing -- things like compliance regulation, for example. If you're an organization that is subject to something like the Payment Card Industry Data Security Standard (PCI DSS) or some of the banking regulations in the United States, are there certain applications and certain kinds of data that you will be able to put in a cloud? Maybe. Are there ones that you probably can't put in the cloud today, because you can't get visibility into the control environment that the cloud service provider has? Probably.

There's a whole set of issues related to security compliance and risk management that have to do with cloud services. The session this week with a number of cloud service providers, we think, will bring a lot of those questions to the surface.

Gardner: Clearly, those on the naysaying side of the cloud argument often have a problem with the data leaving their premises. Yet, as we've heard from other speakers at the conference, having data or transactions that sit apart from your organization or happen at someone else's data center is actually quite common; accepting that is more of a cultural shift in thinking.

Nils, what do you think needs to happen from this cultural perspective in order for people to feel secure about using cloud models?

A shift in thinking

Puhlmann: We need to shift the way we think about cloud computing. There is a lot of fear out there. It reminds me of 10 years back, when we talked about remote access into companies, VPN, and things like that. People were very fearful and said, "No way. We won't allow this." Now is the time for us to think about cloud computing. If it's done right and by a provider doing all the right things around security, would it be better or worse than it is today?

I'd argue it would be better, because you deal with somebody whose business relies on doing the right thing, versus an in-house operation with a lot of process and system issues. A lot of corporations today are understaffed, or there is a lot of transition and a lot of change there. Simply put, things are not in order, or not the way they should or could be.

Then, we have the data issue. Let's face it, we already outsource so much work to other places. Whether my data sits with a provider I have audited and vetted, or a DBA in a remote country is accessing my data in-house, is there really a difference when it comes to risk? In my mind, not really, because if you do both well, then it's a good thing.

There's too much fear going into this, and hopefully the security community will have learned from the past and will do a good job in addressing what we don't have today, like best practices, and how vendors and customers strive for that.

Gardner: Kristin, I read a quote recently where someone said that the person or persons who manage the firewall are the most important people in the IT organization. Given what we are dealing with in terms of security, and also trying to avail ourselves of some of these hybrid models, do you agree with that, and if so, why?

Lovejoy: That's a leading question. Is the firewall administrator important? Obviously, yes -- more important than ever. But in a world with no boundaries, it becomes very hard to suggest that that is accurate.

What we're seeing from a macro perspective is that the IT function within large enterprises is changing. It's undergoing this radical transformation, where the CSO/CISO is becoming a consultant to the business. The CSO/CISO is recognizing, from an operational risk perspective, what could potentially happen to the business, then designing the policies, the processes, and the architectural principles that need to be baked in, pushing them into the operational organization.

From an IT perspective, it's the individuals who are managing the software development release process, the people who are managing the change and configuration management process. Those are the people who really hold the keys to the kingdom now, so to speak.

Particularly when you are talking about enterprise cloud, they become even more important, because you have to recognize -- and Nils mentioned or implied this -- that cloud provides a vision of simplicity. If you think about cloud and the way it's architected, a cloud could be much simpler than the traditional enterprise. If you think about who's managing that change and managing those systems, those are the folks who are key.

Gardner: Why is the cloud simpler? Is it because you're dealing now at a services and API level and you're not concerned necessarily with the rest of the equation?

Lovejoy: That's correct.

Gardner: Is that good for security or bad?

Aligning security and operations

Lovejoy: We've been dancing around the subject, but my hope is that security and operations become much more aligned. It's hard to distinguish today between operations and security. So many of the functions overlap. I'll ask you again: change and configuration management, software development and release -- why is that not security? From my perspective, I'd like to see those two functions melding.

Gardner: So, security concerns and approaches and best practices really need to be pervasive throughout IT?

Lovejoy: Exactly. They need to come from the top, they need to move to the bottom, and they need to be risk based.

Gardner: Now, when it comes to the economics behind making security more pervasive, the return on investment (ROI) for security is one of the easier stories. Not being secure is very expensive. Being publicly not secure is even more expensive. Let's go back to Chenxi. On the economics of security, isn't this something for which people should get easy funding in an IT organization?

Wang: The economics of security. This issue has been in research for a long time. Ross Anderson, a professor at the University of Cambridge, has been running an economics-of-security workshop since 1996, or something like that. There is some very interesting research coming out of that workshop, and people have done case studies. But I'm not sure how much of that has been adopted in practice.

I've yet to find an organization that takes a very extensive economics-based approach to security, but what Kristin said earlier and what you just said is happening. We're seeing the IT security team in many organizations now have a somewhat diminished role, in the sense that some of the traditional security tasks are now moving into IT operations or moving into risk and compliance.

We're even seeing that security teams sometimes have dotted reporting responsibility to the legal team. Some of the functions are moving out of the security team, but at the same time, IT security now has an expanded impact on the entire organization, which is the positive direction.

Gardner: If there is a relationship between doing your architecture well, making systemic security, thought, vision, and implementation part and parcel with how you do IT, then it seems to me that the ROI for security becomes a very strong rationale for good architecture. Would you agree with that, Jim?

Hietala: I would. Organizations want, at all costs, to avoid plowing ahead with architectures, not considering security upfront, and dealing with the consequence of that. You could probably point to some of the recent breaches and draw the conclusion that maybe that's what happened. So, I would agree with that statement.

Gardner: We did have quite a few high profile breaches, and of course, we're seeing a lot more activity in the financial sector. Actually, we could fairly call it a restructuring of the entire financial sector. Do you expect to see more of these high-profile breaches and issues in 2009?

Same song - second verse

Hietala: I'll be interested to hear everyone else's opinion on this as well, but my perspective would be yes. It's been interesting to me that 2009 has started out with what I would call "same song, second verse." We've had a massive worm that propagated through a number of means, one of which is removable storage media. That takes me back to 1986 or 1988, when viruses propagated through floppy disks.

We've had the Heartland breach, which may have exposed as many as 100 million credit cards. Those kinds of things, unfortunately, are going to be with us for some time.

Gardner: Let's get the perspective of others. Kristin, is this going to be a very bad year for security?

Lovejoy: The more states that pass privacy disclosure requirements that mandate that you actually disclose a breach, the more we're going to hear. Does this mean that there haven't always been breaches? There have always been breaches, but we just haven't been talking about them. They're becoming much more public today.

Do I see a trend where terminated or worried employees are perpetrating harm on the business? The answer is yes. That is becoming much more of an issue.

The second issue that we're seeing -- and this is one of those quasi-security, quasi-operational issues -- is that, because of the resource restrictions within organizations today, people are starved for resources, particularly around the change and configuration management process.

We're beginning to see critical outages, particularly in infrastructure systems like those associated with nuclear power and heavy industry, where folks are making changes outside the change process simply because they are so overloaded. They're not necessarily following policy. They're not necessarily following process.

So, we are seeing outages associated with individuals who are simply doing a job that they are ill-equipped to do, or who are overwhelmed and not able to do it effectively.

Gardner: Or perhaps cutting corners as a result of a number of other diminished resources.

Lovejoy: That's exactly right.

Gardner: Nils, do you have any recommendations for how to come into 2009 and not fall into some of these pitfalls, if you are an enterprise and you are looking at your security risk portfolio?

Security part of quality

Puhlmann: Security to me is always a part of quality. When the quality falls down in IT operations, you normally see security issues popping up. We have to realize that the malicious potential and the effort put in by some of the groups behind these recent breaches are going up. It has to do with resources becoming cheaper, with the knowledge being freely available in the market. This is now on a large scale.

In order to keep up with this, we need at least minimum best practices. Somebody mentioned the worm outbreak earlier, which was enabled by a vulnerability that was quite old. That just points out that a lot of companies are not doing what they could easily do.

I'm not talking about the tip of the iceberg. I'm talking about the middle. As Kristin said, we've got to pay attention to these things and we need to make sure that people are trained and the resources are there at least to keep the minimum security within the company.

Gardner: As we pointed out a little earlier, security isn't necessarily an upfront capital cost. You don't download and install security. It's process-, organization-, and management-centric. It sounds like you simply need a level of discipline, which isn't necessarily expensive, but requires intent.

Puhlmann: Yes, and that is actually similar to architecture. Architecture is also a discipline. You need to sit down early and plan, and it's the same for security. A lot of the low-hanging fruit you can pick without expensive technology. It's policies, process, just assigning responsibility, and also changing security so that it's a service to the business.

The business has no interest in a breach, or in anything else that would negatively affect business outcomes -- business continuity, for example.

We talked earlier about how IT security might change. My feeling is that security will more and more become a partner of the business and help the business achieve its goals. At some point, nobody will talk about ROI anymore, because it's just something that will be planned in.

Gardner: Jim, what about this issue of intent? Is this something that we can bring into the architectural framework, elevate the need, and focus on intent for security?

Hietala: I believe so. Most system architects are going to be looking at trying to do the right things with respect to security and to ensure that it's thought about upfront, not later on in the cycle.

Gardner: Chenxi, in the market among suppliers that are focused on security, how are they adapting to 2009, which many of us expect to be a difficult year? We mentioned that it's about intent, but there are also products and technologies. Is there any top-of-mind importance from your perspective?

Slight increase in spending

Wang: We haven't seen a severe cut of IT security budget yet from organizations we surveyed, perhaps because some of those budgets were set before the economic downturn happened.

For some of them, we actually saw a slight increase, because just as Lehman Brothers is now Barclays, you have to merge the two IT systems. Now, you have to spend money on merging the two systems, as well as on security. So, there is actually some increase in budget due to the economic situation.

A lot of vendors are taking advantage of that, and we are seeing an increased marketing effort around helping to meet security regulations and compliance. Most of us anticipate an increase of regulatory pressure coming down the pipeline, maybe in 2009, maybe in 2010. My belief is that we'll see a little bit more security spending there, because of the increased regulatory pressure.

Gardner: Kristin, we've discussed process and architecture, but are there any particular technologies that you think will be prominent in the coming year or two?

Lovejoy: Interestingly enough, identity and access management (IAM) is likely to be one of the more significant acquisitions that most businesses make.

This goes back to the business-value point about security that we have been making. Think about what's happening in the world, with all of these folks wanting to access the network via smart devices. How are they going to do that? They are going to do it using some sort of authentication mechanism that allows them to securely connect back.

Most organizations want to be able to access the new customer, the new consumer, via smart devices. They want to be able to allow their employees access to the network via smart devices or via any kind of other mobile device, which allows them to do things like telecommute.

IAM, as an example, is a technology that enables the business to offer a service to the employee or to that new consumer. What we're seeing is that organizations are purchasing IAM, not necessarily for security, but for the delivery of a secure service. That's one area where we are seeing uplift.

Gardner: Let's just unpack that a little bit. How is this different from directory provisioning or some of the traditional approaches? These folks wouldn't be in the directories at that point?

Identity management

Lovejoy: What we're seeing is much more of a focus on federated identity management and single sign-on. In fact, we're beginning to see this trend in our customer base, and a lot of organizations have been talking about this issue of mobile endpoint management. It's very hard in the new world to secure these mobile devices. What organizations are saying to us is, "Why can't we just use single sign-on and federated identity management?"

Single sign-on, in particular, has the capacity, if you think about it in the right way, to uncouple the device from the individual who is using it: define the policy, apply the policy to the role, and then, based on the role, secure or isolate the endpoint. It's a very interesting way in which organizations are beginning to think about using this technology as an alternative to traditional secure mobile endpoint management.
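The sequence Lovejoy describes -- authenticate the person, look up the role, derive the endpoint controls from the role rather than the device -- can be sketched very roughly. This is a minimal illustration, not a real IAM product; the role names, policy fields, and `policy_for` helper are all hypothetical:

```python
# Minimal sketch (all names hypothetical) of role-based endpoint policy:
# the decision hangs off the authenticated identity's role, never the device.

ROLE_POLICIES = {
    # role -> endpoint controls applied after sign-on
    "employee":   {"vpn": True,  "local_storage": False, "quarantine": False},
    "contractor": {"vpn": True,  "local_storage": False, "quarantine": True},
    "guest":      {"vpn": False, "local_storage": False, "quarantine": True},
}

def policy_for(identity: str, directory: dict) -> dict:
    """Look up the user's role and return the endpoint policy for that role.

    Unknown identities fall through to the most restrictive policy,
    regardless of which device they signed on from.
    """
    role = directory.get(identity, "guest")
    return ROLE_POLICIES[role]

directory = {"alice@example.com": "employee", "bob@example.com": "contractor"}

# The same person gets the same policy from a laptop or a smartphone;
# the device itself never enters the decision.
print(policy_for("alice@example.com", directory))
print(policy_for("mallory@example.com", directory))  # unknown -> guest policy
```

Because the device drops out of the decision, a new class of endpoint (a smartphone, say) needs no new per-device rules, only the existing role mapping.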

Gardner: It also sounds, while pertinent to mobile, that they would have a role in cloud or hybrid boundaryless types of activities.

Lovejoy: That's absolutely correct.

Gardner: Does anyone have anything to offer on IAM in the cloud?

Puhlmann: Kristin is right. We've tried IAM for many years, and there have been many expensive failed projects in large corporations. Perhaps we need the cloud to give us that little push to really solve it once and for all in a fully federated model. I'd very much like to see that. Based on past experience, though, I'm a little cautious about how quickly it will happen.

I think what we will see is a simplification of security, because it has gotten to a point where it's just too complex to handle with too many moving parts, and that makes it hard to work with and also expensive.

Also, we'll see a more realistic approach to security. What really matters? Do we really need to secure everything, or do we need to focus on certain types of data, and where that data really is? Do we have to close off every little door, or can we leave some doors open and move closer to where our assets are? How much do they really mean to us?

Gardner: Great. We've been discussing security and some of the pressures of the modern age, this particular economic downturn period, but also in the context of process and architecture.

I want to thank our panelists. We were joined by Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management of Qualys, and Jim Hietala, vice president of security for The Open Group.

Thanks to you all. Our conversation comes to you through the support of The Open Group, from the first Security Practitioners Conference here in San Diego in February, 2009.

Listen to the podcast. Download the podcast. Find it on iTunes and Learn more. Sponsor: The Open Group.

Transcript of a podcast on security as architectural best practices, recorded at the first Security Practitioners Conference at The Open Group's 21st Enterprise Architecture Conference in San Diego, February 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
