
Wednesday, March 11, 2009

BriefingsDirect Analysts Discuss Solutions for Bringing Human Interactions into Business Process Workflows

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 37.

This periodic discussion and dissection of IT infrastructure-related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as with the support of TIBCO Software.

I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of Feb. 9, 2009, returns to the essential topic of bringing human activity into alignment with IT supported business processes.

The need to automate and extend complex processes is obvious. What's less obvious, however, is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM). This will become all the more important, as cloud-based services become more common.

We're going to revisit the topic of BPEL4People, an OASIS specification that we discussed when it first arrived, probably a year-and-a-half ago. We'll also see how it's progressing with someone who has been working with the specification at OASIS since its beginning.

I'd like to welcome our guest this week, Michael Rowley, director of technology and strategy at Active Endpoints. Welcome, Mike.

Michael Rowley: Thank you.

Gardner: I'd also like to introduce our IT analyst guests this week. Our panel consists of regular panelist Jim Kobielus, senior analyst at Forrester Research. Welcome back, Jim.

Jim Kobielus: Thanks, Dana. Hi, everybody.

Gardner: And someone who is beginning to become a regular, JP Morgenthal, independent analyst and IT consultant. Welcome back, JP.

JP Morgenthal: Thanks, Dana. Hi, everyone.

Gardner: Let's go to you first, Mike, as our guest. I've pointed out that Active Endpoints is the sponsor of the show, so I guess we will try to be nice to you, but I can't guarantee it. Tell us a little bit about your background. You were at BEA for some time. You've been involved with Service Component Architecture (SCA) and a few other open standards around OASIS. Give us the bio.

Rowley: I was at BEA for five years. I was involved in a couple of their BPM-related efforts. I led up the BPELJ spec effort there as part of the WebLogic integration team. I was working in the office of the CTO for a while and working on BPEL-related efforts. I also worked on the business process modeling notation (BPMN) 2.0 efforts while I was there.

I worked a little bit with the ALBPM team as well, and a variety of BPM-related work. Then, I've been at Active Endpoints for a little over half a year now. While here, I am working on BPEL4People standards, as well as on the product itself, and on some BPMN related stuff as well.

Gardner: Let's just jump into BPEL4People. Where do we stand, and is this getting traction with people? Not to be a punster, but do people grok BPEL and BPEL4People?

Good feedback

Rowley: We've had some very good feedback from our users on BPEL4People. People really like the idea of a standard in this area, and in particular the big insight behind BPEL4People, which is that there's a separate standard, WS-Human Task. It basically covers the worklist aspect of a business process, versus the control flow that you get on the BPEL4People side. So, there's BPEL4People as one standard and WS-Human Task as another, closely related standard.

By having this dichotomy, you can have your worklist system completely standards-based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer decide to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards-compliant.

This is something that we're seeing more and more. Our users like it, and as far as the industry as a whole, the big vendors all seem to be very interested in this. We just recently had a face-to-face and we continue to get really good turnout, not just at these meetings, but there's also substantial effort between meetings. All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.
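
To make the split Rowley describes concrete, here is a minimal sketch. It is an illustration only: the real WS-Human Task contract is defined as WSDL/XML web-service operations, and the class, state names, and operations below are loosely modeled on the spec's task lifecycle rather than taken from it. The point is simply that any BPM engine honoring the same small contract can drive any compliant worklist implementation.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional, Set, Dict

class TaskState(Enum):
    READY = "Ready"            # on the worklist, unclaimed
    RESERVED = "Reserved"      # claimed by a potential owner
    IN_PROGRESS = "InProgress"
    COMPLETED = "Completed"

@dataclass
class HumanTask:
    task_id: str
    subject: str
    potential_owners: Set[str]
    state: TaskState = TaskState.READY
    actual_owner: Optional[str] = None
    outcome: Dict[str, object] = field(default_factory=dict)

    # Operations loosely modeled on the task-client operations in the spec.
    def claim(self, user: str) -> None:
        assert self.state is TaskState.READY and user in self.potential_owners
        self.state, self.actual_owner = TaskState.RESERVED, user

    def start(self, user: str) -> None:
        assert self.state is TaskState.RESERVED and user == self.actual_owner
        self.state = TaskState.IN_PROGRESS

    def complete(self, user: str, outcome: Dict[str, object]) -> None:
        assert self.state is TaskState.IN_PROGRESS and user == self.actual_owner
        self.state, self.outcome = TaskState.COMPLETED, outcome

# The process engine (the BPEL4People side) creates the task and waits for
# completion; the worklist side only manipulates task state.
task = HumanTask("t-42", "Approve expense report", {"alice", "bob"})
task.claim("alice")
task.start("alice")
task.complete("alice", {"approved": True})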

Gardner: I want to ask you a question, but at two levels. What is the problem that we're trying to solve here? Let's ask that first at the business level and then at the technical level.

Rowley: At the business level, it's pretty straightforward. It's essentially the promise of workflow systems, in which you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.

It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.

Gardner: Before you go to the technical issues, one of the things that's really interesting to me on this is that I understand the one-way street of needing to take processes, making that understood, and then finding out who the people are who can implement it. But, is this a two-way street?

Is it possible for the people who are involved with processes in the line of business, in the field, to then say, "Listen, this doesn't quite work. Sometimes you can't plan things in advance. We have some insight as to what we think the process should be and how to improve it, so how can we then relate that back into what the SOA architecture is delivering?" Are we on a two-way street on this?

Rowley: Absolutely. One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."

That's something that at least can be described by a layperson, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens. I'm leery about the end user, the nontechnical person, going in and mucking with fundamental control flow, without at least collaborating with somebody who can think about it from more of an IT angle.
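
The "$500 shortcut" Rowley mentions is easy to picture as an explicit branch in the process definition. The sketch below is hypothetical Python rather than BPEL or any product's notation, and the threshold and step names are invented for illustration; it simply shows the kind of readable control flow a layperson can review and a technical person can safely change.

APPROVAL_THRESHOLD = 500  # assumed business rule, for illustration only

def route_purchase_request(amount):
    """Return the ordered steps a request of this amount will go through."""
    steps = ["validate_request"]
    if amount < APPROVAL_THRESHOLD:
        steps.append("auto_approve")        # the shortcut path
    else:
        steps.append("manager_approval")    # a human task on the worklist
        steps.append("finance_review")
    steps.append("notify_requester")
    return steps

print(route_purchase_request(120))    # takes the shortcut
print(route_purchase_request(2400))   # goes through both approval steps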

Gardner: No. Clearly, we want to have a lifecycle between design, requirements and refinements, but not just throw the keys to the locker room out of the window. What is it technically that we need to overcome in order to solve those problems?

Need for standards

Rowley: I'm going to take this from a standards aspect, because one of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards.

Let's say a business school wants to describe how to do management and how to run your organization. Right now, I don't believe any of them have, as part of the coursework for getting an MBA, something that says, "Here's how you deal with the BPM engine to design and control your organizations."

The reason it isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.

Gardner: We're at that chicken and egg stage, aren't we, before we can get this really deeply adopted?

Rowley: Yes. I think we're spinning up. We're starting to get the kind of momentum that's necessary, with all the vendors getting on board. Oftentimes, with things like this, if the vendors can all get on the same bandwagon at the same time, the users get it. They see that, "Okay, now this is real. This is not just a standard that is a de jure standard, but it's actually a de facto standard as well."

Gardner: Let's go to Jim Kobielus. Jim, how important is this, and how might this chicken-and-egg conundrum get jump-started?

Kobielus: It's extremely important. One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.

It's a real drag on productivity, when you've got tasks coming from all angles at you and you're floundering, trying to find a way to manage them in a systematic way, to roll them up into a single worklist.

BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.

That's why I agree with Mike that it's critically important that the leading BPM and workflow vendors get on board with this standard. In many ways, I see BPEL4People as having a similar aim to business intelligence in general. Where business intelligence environments are geared toward providing a single view of all business metrics, BPEL4People is trying to provide a single view of all the business processes that you either participate in or might manage.

Process steward

A term that I have batted around -- I don't think it's really gained any currency -- is the notion of a process steward, somebody whose job it is to define, monitor, track, and optimize business processes to achieve greater productivity and agility for the business.

What Mike was getting at, which was really interesting, is the fact that you want a human workflow environment that not only wraps up all of your tasks in a single worklist, regardless of the back-end execution engine. You also want the ability for not only the end user, but especially the process steward, to begin to do what-if analysis in terms of re-engineering. They may have jurisdiction over several processes and have a single dashboard, as it were, looking at the current state and the dependencies of the various workflows they are responsible for.

This is critically important for SOA, where SOA applications for human workflows are at the very core of the application.

Gardner: JP, do you agree with me on this two-way street, where the users, the people who are actually doing the work, feel like they are empowered at some level to contribute back into refinement? It seems to me that otherwise workers tend to say, "Okay, I can't have any say in this process. I don't agree with it. Basically, I do an end run around it. I'm going to find ways to do my work that suit me and my productivity." Then, that value and intelligence is lost and doesn't ever make it back into the automated workflow. How important from your perspective is this two-way street capability?

Morgenthal: I'm going to answer that, but I'd like to take a step back, if I could, to answer the business problem. Interestingly enough, I've been working on and researching this particular problem for the past few months. One interesting aspect from the business side is that this has been looked at for quite a while by the business, but hasn't fully been identified and ferreted out as a niche.

One key term that applies here industry-wide I found only in the government. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.

I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.

That's what you're talking about -- controlling what's happening in that black hole. It ties into the fact that humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.

An example that was mentioned earlier, where you have this thing that happens and somebody does something and then something else. The next step is going to analyze what that step does. The chances are that's related to some sort of content, probably semi-structured or maybe even unstructured content, something like a negotiation over what date something will occur. It's often human based, but when that date locks, something else will trigger, maybe the release of a document, or an invoice, or something out of an automated system.

So, you have these ongoing ad hoc processes that occur in business every day and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back, and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint after a collaborative activity has taken place, is possible today, but it's very difficult. I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools.

Solid foundation

One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.

They'll eventually get there. What you see today with regard to workflow and these BPM and workflow management tools is really around enterprise content management. "Jim approved this, so now Sally can go buy her ticket." Well, whoopie do. I could have done that with Ruby code in about ten minutes.

Gardner: It tends to follow a document trail rather than a process trail, right?

Morgenthal: Exactly. So, BPEL4People, from a standards perspective, is a standard around suspense tracking? All I'm controlling is going into the black hole and coming out of the black hole. Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.

Rowley: Actually, it does. WS-Human Task does talk about how you control what's in the black hole -- what happens to a task and what kinds of things can happen to a task while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.

The Microsoft people are giving us demonstrations of SharePoint, and we can envision as an industry, as a bunch of vendors, a possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.

Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.

Gardner: Mike, doesn't governance come into play in this as well? If we want to reach that proper balance between allowing the ad hoc and the worker-level inputs into the system, and controlling risk, security, compliance, and runaway complexity, aren't policies and governance engines designed to try to produce that balance and maintain it?

Morgenthal: Before he answers, Dana, I have one clarification on your question. "Ad hoc" is going to occur, whether you allow it to occur or not. You've got the right question: How can the business attain that governance?

Gardner: Okay.

Rowley: There is governance over a number of things. There's governance that's essentially authorization for individual operations or tasks -- who can change which documents once they've been signed. Who can sign? Who can modify what? That's at the level of an individual task.

Then there's also who can make a formal change to the process, as opposed to ad-hoc changes, where people go in and collaborate out of band, whether you tell them they can or not. But, in the formal process, who is allowed to do that? One nice thing about a BPM is that you have the ability to have authorization decisions over these various aspects of the business process.
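
To illustrate the two layers of governance Rowley distinguishes, here is a minimal, hypothetical sketch. The role names and permission sets are invented; the point is simply that task-level operations and changes to the formal process definition can be authorized separately.

TASK_PERMISSIONS = {
    "sign_document": {"approver"},
    "modify_document": {"author", "approver"},
}
PROCESS_PERMISSIONS = {
    "change_process_definition": {"process_steward"},
}

def is_allowed(user_roles, action):
    """Check a user's roles against the permissions for one action."""
    allowed = TASK_PERMISSIONS.get(action) or PROCESS_PERMISSIONS.get(action, set())
    return bool(set(user_roles) & allowed)

# Task-level governance: who may sign or modify an individual document.
assert is_allowed({"approver"}, "sign_document")
# Process-level governance: only the steward changes the formal definition,
# even though ad hoc collaboration still happens out of band.
assert not is_allowed({"author"}, "change_process_definition")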

Gardner: This strikes me as hugely important, particularly now in our economy. This is really the nub up against which productivity ends up getting hamstrung or caught up. If we're looking for transformation-level benefits and to bring business requirements and outcomes into alignment with IT, this is the real issue, and it happens at so many different levels.

I can even see this progressing now towards complex event processing (CEP), where we want to start doing that level of high-scale and high-volume complex events across domains and organizational boundaries. But, again, we're going to bring people into that as well and reflect it both ways. Jim Kobielus, do you agree that this is hugely important and yet probably doesn't get a lot of attention?

Kobielus: The CEP angle?

Need for interactivity

Gardner: No, the overall issue that, if we can get transformational and we can get productivity that helps make the business and financial case for investing in things like SOA and CEP, then the interactivity between the tactile and the human and the automated and the systems needs to develop further.

Kobielus: That's a big question. Let me just break it down to its components. First, with CEP we're talking about real time. In many ways, it's often regarded as a subset of real-time business intelligence, where you have the consolidation, filtering, and aggregation of events from various sources being fed into a dashboard or to applications in which rules are triggered in real time and stuff happens.

In a broader sense, if you look at what's going on in a workflow environment, it's simply a collection of events, both those events that involve human decision makers and those events that involve automated decision agents and what not.

Looking at the fact that BPEL and BPEL4People are now two OASIS standards that have roughly equal standing is important. It reflects the fact that in an SOA, underlying all the interactions, all the different integration approaches, you have this big bus of events that are happening and firing all over the board. It's important to have a common orchestration and workflow framework within which both the actions of human beings and the actions of other decision agents can be coordinated and tracked in some unified way.

In terms of driving home the SOA value proposition, I'm not so sure that the event-driven architecture is so essential to most SOA projects, Dana, and so it's not clear to me that there is really a strong CEP component here. Fundamentally, when we're talking about workflows, we're talking about more time lags and asynchronous interactions. So, the events angle on it is sort of secondary.

Gardner: Let me take that back to Mike Rowley. I'm looking for a unified theory here that ties together some of what we have been talking about at the people process level with some of this other, larger event bus as Jim described at that more automated level. Are they related, or are they too abstract from one another?

Rowley: No, they're related. It's funny. I bought into everything that Jim was just saying, except for the very end, where he said that it's not really relevant. A workflow system or a business process is essentially an event-based system. CEP is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.

You need to be able to discover over a wide variety of business processes, a wide variety of documents, or a wide variety of sources, and be able to look for averages, aggregations, and sums, and the joining over these various things, to discover a situation where you need to automatically kick off new work. New work is a task or a business process.

What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
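
What Rowley describes -- feeding business-process events into a CEP engine so that an aggregate condition can automatically kick off new work -- can be sketched roughly as follows. This is plain Python standing in for a CEP engine, not ActiveVOS or any real product API, and the event type, threshold, and window are invented for illustration.

from collections import deque
from datetime import timedelta

class RejectionSpikeRule:
    """React when too many 'claim_rejected' events arrive within a window."""

    def __init__(self, threshold=10, window=timedelta(minutes=15)):
        self.threshold = threshold
        self.window = window
        self.recent = deque()   # timestamps of matching events

    def on_event(self, event):
        if event["type"] != "claim_rejected":
            return
        now = event["timestamp"]
        self.recent.append(now)
        # Slide the window: discard events that are now too old.
        while self.recent and now - self.recent[0] > self.window:
            self.recent.popleft()
        if len(self.recent) >= self.threshold:
            start_new_work("investigate_rejection_spike", count=len(self.recent))
            self.recent.clear()

def start_new_work(process_name, **context):
    # In a real deployment this would start a BPM process or create a human
    # task; here it just records the automatic reaction.
    print("kicking off", process_name, context)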

Eventing infrastructure

Kobielus: Exactly, the alerts and notifications are inherent in pretty much any workflow environment. You're quite right. That's an eventing infrastructure and that's an essential component. I agree with you. I think the worklist can be conceptualized as an event dashboard with events relevant to one decision agent.

Rowley: It's more than just alerts and notifications. Any BPM can look for some threshold and give somebody a notice if some threshold has been exceeded. This is about doing things like joining over event streams or aggregating over event streams, the sorts of things that the general-purpose CEP capabilities are important for.

Gardner: JP, do you agree that we have some commonality here between CEP and its goals and value, and what we are talking about more at the human tactile workflow level?

Morgenthal: From my experience, what I've been looking at with regard to this is what I'm calling "business activity coordination." I think there is important data to be meted out after the fact about how certain processes are running in organizations. When companies talk about waste and reengineering processes, a lot of what they don't understand about processes, the reasons why they never end up changing, is because these ad-hoc areas are not well understood.

Some aspects of CEP could be helpful, if you could tag this stuff going on in that black hole in such a way that you could peer into the black hole. The issue with not being able to see in the black hole is not technical, though. It's human.

Most often, these things are distributed tasks. It's not like a process that's happening inside of accounting, where Sally walks over to Joe and hands him a particular invoice, and says, "Oh look, we could have just made that electronic." It's something leaving this division and going into that division, or it's going from this department to that department to that department. There is no stakeholder to own that process across all those departments, and data gets lost.

You're not going to find that with CEP, because there are no automation tags at each one of those milestones. It could be useful to postmortem and reengineer after the fact, but somebody has got to get hold of the fact that there is stuff happening in the black hole, and automating in the black hole has to get started.

Kobielus: I've got a slightly better and terser answer than the one I gave a moment ago. A concept that's in BPM is business activity monitoring (BAM), essentially a dashboard of process metrics, generally presented to a manager or a steward. In human workflow, what is the equivalent of BAM -- being able to view in real time the running status of a given activity or process?

Gardner: There are also incentives, how you compensate people, reward them, and steer them to behaviors, right?

Morgenthal: On the dashboard, it’s like a remedy, when you have operations and you have trouble tickets, and how quickly those trouble tickets are being responded to. It doesn't work. I'll tell you a funny example, which everyone out there is going to get a kick out of. At Sears, when you pick up stuff after buying something big in the store, they have this monitor with a big flat screen and a list of where you are in the process after you scan your receipt. It shows you how long you're waiting.

What happens is the guy has learned how to overrun the system. He comes out, collects your ticket, and you are still sitting there for 30 minutes, but the clock has stopped on the screen. All of a sudden, behind you, is the thing that says, "We have 99.9 percent response rate. You never wait more than two minutes." Of course not. That guy took my ticket at 1 minute and 53 seconds and let me sit there for 30 minutes until my product came out.

Gardner: I think we're looking for the best of both worlds. We want the best of what systems automation and documentation and repeat processes can do, but we also need that exception management that only a person can do, and we all have experience of how this can work or not work, particularly in a help desk situation.

Maybe you've had the experience where you call up a help desk and the person says, "Well, I'd like to help you with that, but my process doesn't allow for it," or "We have no response for that particular situation, so I will have to go back to my supervisor," versus someone who says, "I've got a good process, but I can also work within that process to handle an exception," and then perhaps bake that back into the process. Back to Mike Rowley.

CEP is core

Kobielus: Actually, Dana, I haven't finished my response; I just want to tie it to CEP. Event processing, CEP, is quite often a core component of BAM. BAM is basically the dashboard to aggregate events relevant to a given business process. In a human workflow, what is the equivalent of CEP and BAM? To some degree, it's social networks like Facebook, LinkedIn, or whatever, in the sense that I participate as a human being in a process that involves other human beings, who form a community -- my work group or just the workflow in which I'm involved.

How do I get a quick roll-up of the status of this process or project, for that matter, in which I am just one participant? Well, the whole notion of a social network is that I can go there right away and determine what everybody is doing or what everybody else's status is in this overall process. Shouldn't that social network be fed by real-time events, so I can know up to the second what Jean is doing, what Joe is doing, what Bob is doing, within the context of this overall workflow in which I am also involved?

So, CEP and BAM relate to social networks, and that's the way that human beings can orient themselves inside these workflows and can coordinate and enable that lateral side-to-side, real-time connection among human beings that's absolutely essential to getting stuff done in the real world. Then, you don't have to rely simply on the clunky asynchronous back-and-forth message passing, that we typically associate with workflows.

Gardner: Mike Rowley, we have a new variable in this, which is the social networking and the ability for people to come up with efficient means for finding a consensus or determining a need or want that hadn't been easily understood before. Is there a way of leveraging what we do within these social networks in a business process environment?

Rowley: Yes. Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility.

I have a slight quibble in that I would say that some of CEP is really oriented around automatic reaction to some sort of an event condition, rather than a human reaction. If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just terminology. Either way, events and visibility are really critical.

Gardner: We can certainly go into the whole kumbaya aspect of how this could all be wonderful and help solve the world's ills, but there is the interoperability issue that we need to come back to. As you were mentioning, there are a lot of vendors involved. There is a tendency for businesses to try to take as much of a role as they can with their platforms and tools. But, in order for the larger values that we are discussing to take place, we need to have the higher level of interoperability.

Realistically, Mike, from your perspective in working through OASIS, how well do the vendors recognize the need to give a little ground in order to get a higher value and economic and productivity payback?

Rowley: There seems to be a real priority given to getting this thing done and to getting it to be effective. The technologists involved in this effort understand that if we do this well, everybody will benefit. The whole market will grow tremendously, because people will see that this is an industry-wide technology, not a proprietary technology.

Active Endpoints is really at the forefront of having an implementation of BPEL4People in the user's hands, and so we're able to come to the table with very specific feedback on the specs, saying, "We need to make these changes to the coordination protocols," or "We may need to make these changes to the API," because it doesn't work for this, that, or the other reason. What we haven't seen is people pushing back in ways that would imply they just want to do things their own way.

Gardner: With all due respect, I know Active Endpoints is aggressive in this, but a company of your size isn't too likely to sway an entire industry quite yet. What about partnerships? People aren't pushing back, but how many people are putting wind in your sails as well?

Wholehearted adoption

Rowley: That's exactly what they're doing. They're basically adopting it wholeheartedly. We have had, I would say, a disproportionate impact on these specs, primarily because the people involved in them see the technical arguments as being valid. Technical arguments that come from experience tend to be the best ones, and people jump on.

Gardner: How about the professional services firms, systems integrators, and people like the McKinseys who are focused on organizational management? Wouldn't this make a great deal of sense for them? If you have a good strategic view as a vendor, you say, "Yes, we'll grow the pie. We'll all benefit." But there is another whole class of consultant, professional services firm, and integrator that must clearly see the benefit of this without any need to maintain a position on a product or technology set.

Rowley: Through the standards effort, we haven't seen very much involvement by systems integrators. We have seen integrators that have really appreciated the value of us having a standard, knowing that if they invest in learning the technology and developing a framework, they're not stuck.

Integrators often will have their own framework that they take from one to the other. If they build it on top of BPEL4People and WS-Human Task, they really get substantial investment protection, so that they don't have to be stuck, no matter what vendor they're picking. Right now, in our case, they pick Active Endpoints, because we have the earliest version.

Gardner: The question, JP, is that we've been hearing how the role of systems integrators and consultants is important in evangelizing and implementing these processes and helping with interoperability across the business, as well as the human and the systems. Do you see yourself as an evangelist, and why wouldn't other consultants also jump on the bandwagon?

Morgenthal: Well, I do take that role of helping to get out there to advance the industry. I think a lot of system integrators though are stuck with having to deal with day-to-day issues for clients. Their role is not to help drive new things as much as it is to respond to client need and heavily utilize the model.

Gardner: You've hit on something. Whose role is it? As Jim was saying, BAM makes sense at some level, but whose role is it to come in and orchestrate and manage efficiency and processes across these boundaries?

Morgenthal: Within the organization?

Gardner: Yes.

Morgenthal: It's the management, the internal management. It's their job to own these processes.

Gardner: So it's the operating officer?

Morgenthal: The COO should drive this stuff. I haven't yet seen a COO who takes these things by the hand and actually drives them through.

Gardner: Mike Rowley, who do you sell your Active Endpoints orchestration tools to?

Rowley: Primarily to end users, to enterprises, but we also sell to system integrators sometimes.

Gardner: But who inside of those organizations tends to be the inception point?

Rowley: Department level people who want to get work done. They want to develop an app or series of apps that help their users be productive.

Kobielus: It hasn't changed. I've written two books on workflow over the past 12 years, and workflow solutions are always deployed for tactical needs. The notion that companies are really itching to establish a general-purpose workflow orchestration infrastructure as a core of their SOA, so that they can then leverage it out and extend it for each new application that comes along, isn't how it works in the real world. I think Mike has laid it out there.

As far as the notion that companies are looking to federate their existing investments -- whether Oracle, IBM, SAP, or other workflow environments -- by wrapping them all in a common SOA standards framework and making them interoperable, I don't see any real push in the corporate world to do that.

Morgenthal: One thing I really like about SOA is that it really should be the case that if you have got an overarching SOA mandate in the enterprise, that should enable lower-level, department-level freedom, as long as you fit with providing and consuming services.

BPM doesn't have to be an enterprise-wide decision, because it just gets clogged; too many decision makers have to sign off. If you get something like BPEL4People, it's really oriented around not just workflow in the style of the older workflow systems, but workflow in a way that fits in an SOA, so that you can fit into that larger initiative without having to get overall approval.

Gardner: We're going to have to leave it there. We are about out of time. We've been discussing the issue of BPEL4People and better workflow productivity, trying to join systems and advances in automation with what works in the field, and somehow coordinating the two on a lifecycle adoption pattern. I'd like to thank our guests. We've been discussing this with Mike Rowley, director of technology and strategy at Active Endpoints. I appreciate your input, Mike.

Rowley: Thank you.

Gardner: We have also been joined by Jim Kobielus, senior analyst at Forrester Research; thank you Jim.

Kobielus: Yeah, thanks Dana, always a pleasure.

Gardner: Lastly, JP Morgenthal, independent analyst and IT consultant. You can be reached at www.jpmorgenthal.com. Is that the right address, JP?

Morgenthal: That's the right address, thank you, Dana.

Gardner: I'm Dana Gardner, principal analyst at Interarbor Solutions. I would like to thank the sponsors of today's podcast: Active Endpoints, maker of the ActiveVOS Visual Orchestration System, as well as TIBCO Software for its support.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, August 09, 2007

BriefingsDirect SOA Insights Analysts on Appliances, BPEL4People, and GPL v.3

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition podcast, recorded June 29, 2007.

Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Interarbor Solutions at 603-528-2435.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 21, a weekly discussion and dissection of Service-Oriented Architecture (SOA) related news and events with a panel of industry analysts and guests.

I’m your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. We are joined this week on our panel by Tony Baer. He's a principal at OnStrategies. Welcome back, Tony.

Tony Baer: Hey, Dana, how are you doing?

Gardner: Great. Also Jim Kobielus, principal analyst at Current Analysis. Hey, Jim.

Jim Kobielus: Good morning, one and all.

Gardner: Brad Shimmin, also principal analyst at Current Analysis. Welcome back, Brad.

Brad Shimmin: Thanks for having me back, Dana.

Gardner: Also joining us, Todd Biske, an enterprise architect at Momentum SI, an Austin, Texas consultancy. Good to have you, Todd.

Todd Biske: Thanks, Dana. Glad to be here.

Gardner: And our guest this week, and it is the week of June 25, 2007, is Jim Ricotta. He is the vice president and general manager of appliances within IBM’s software group. Glad you could be with us, Jim.

Jim Ricotta: Glad to be here.

Gardner: Now, Jim, let’s get into a little bit of background on how you came to be at IBM. You were with DataPower, a Boston-based XML messaging appliance vendor until -- when was it -- about a year ago, maybe a little bit more?

Ricotta: That’s about right, Dana. We’ve been part of IBM for about 18 months. We were acquired toward the end of 2005. Before that, I was the CEO of DataPower for the previous three years starting in 2003. Prior to that, I ran the content networking division of Cisco Systems. So, I went from Layer 4 through Layer 7 of networking to this middleware appliance concept, and now I find myself on the other end of the fence in the world’s biggest middleware business, which is IBM.

Gardner: Could you explain what your purview is there under appliances? How wide is your role in terms of product and development?

Ricotta: IBM acquired DataPower for the current products, but really more for the potential. IBM sees a lot of potential to take appropriate functions, "appliance-ize" them, and deliver a lot more value to clients that way.

I know we're going to talk about this in the discussion, but the basic concept of an appliance is to allow customers to get their projects going more quickly, experience lower total cost of ownership (TCO), etc. My role is the general manager and VP of appliances, not just WebSphere DataPower SOA appliances. We have a broader remit and we are looking at a number of different appliance efforts for different parts of the IBM product set.

Gardner: Right. I wonder if you could also relate this to the SOA discussion. In general, SOA conceptually aligns with appliances rather well. It’s about individual parts that contribute to the larger iterative types of development and being able to manage the runtime more dynamically. How do appliances conceptually, in your mind, align with SOA?

Ricotta: One of the reasons that SOA has been a very fertile ground for appliances is the standards -- the idea of standards and the idea of a layered architectural approach. Thinking of my background, if you look at networking products, what really made routers and other types of networking such big horizontal businesses was that there were standards. The first routers were software products that ran on Unix boxes.

But as you got standard protocols and the ISO stack took hold, it became possible to build a device that you didn’t have to program or patch. You just turned it on, configured it, it did its function, and that allowed that business to really grow.

SOA has its own version of an ISO stack with the WS-Standards and the layers from things like BPEL, all the way down to XML and the basics. That’s what enabled this approach of putting together a device that supports a bunch of these standards and can fit right into anybody’s SOA architecture, no matter what they are doing with SOA.

Gardner: Okay. Another topic, a subset of SOA, is the discussion around the Enterprise Service Bus (ESB). Is the performance improvement an appliance brings aligned with where you see ESBs in the market?

Ricotta: At IBM, we see ESB as a key part of any SOA architecture and deployment. If you do it properly, and we can talk later about what it means to do an appliance well, you tend to get a performance solution. You’ve done optimization. You’ve done a pruning back of all the potential functions.

So, the ones that you have, you tend to have good performance from, as well as the other benefits I pointed to, easy deployment and low TCO. So, given that ESB is the core of SOA, in many ways having an appliance alternative is important.

Gardner: Let’s go to Jim Kobielus. Jim, you were curious about the scope of appliances as a term, but also as a concept. Why don’t you take the questioning from here?

Kobielus: Okay. Thank you, Jim, and thank you, Dana. The notion of appliances in the industry has been expanded and stretched almost to the breaking point over the last few years.

I agree with Jim on what he's saying in terms of some of the core features of any so-called appliance -- quick deployment, low TCO, a basic function-limited component of some sort that is fairly easy to slot into your existing architecture and be deployed because it incorporates open standards and all that. But, the notion of an appliance comes out of the hardware world.

That’s no problem for IBM/DataPower, because from the get-go your appliances have been hardware based -- circuit boards and other devices that could be merged into racks and so forth. In recent years, the term "appliances" has been stretched to the point where now there is something called a "software appliance," or a concept of a software appliance, that many vendors are starting to tout in their products -- and not just individual vendors, but in collaborations.

In fact, just this very week -- actually it was a couple of weeks ago -- I ran across a couple of additional new mentions of software appliances, as when Sybase and Red Hat announced that they're working together on a so-called software appliance that’s just a bundling and integration of two software products: Sybase’s database and business intelligence (BI) products and the Red Hat Linux operating system. Ingres, about five months ago, announced that it has a software-appliance product family called Icebreaker.

Some BI vendors, like JasperSoft, have been saying, “Hey, we're going to integrate our product with that so-called software appliance and voila! Here's something that you can install quickly at low TCO, etc.”

What I'm getting at is that what they are now calling a software appliance is no different from what has traditionally been called a solution, or simply a software package, that integrates two or more disparate components into a single component -- a single package with a single install.

I'm trying in my small way to beat the drum that the industry needs to scale the definition of an appliance back to its traditional scope. It’s a hardware-centric performance-built component, because, at some point, if everything is a software appliance, then the very term "appliance" is redundant.

Gardner: How about that, Jim? I guess you mentioned that appliances started with software and then became baked into a hardware lock-down, and now the term is expanding.

Ricotta: We've got to be careful, Dana. When I talked about routers, what I meant to say was that they were software products and then they became appliances. When they became appliances, they ceased to be software plus hardware. They were one thing. We see that in our industry all the time. It’s good to be at the beginning of a trend, but then, if your trend becomes too popular, everyone wants to jump on the bandwagon and the message can get diluted.

In fact, some of you who we talked to years ago, when we were DataPower, might recall that, for a while, we stopped using "appliance." We started using the term "network device," because everyone saw what we were doing. Even though all they had was a Dell server with a CD with preinstalled Linux and their app, they would put a badge on the front and say it’s an appliance.

I agree. You've got to be careful, because, again, there’s usually a performance value, although not always. Think about your TiVo or your iPod. That’s not a high-performance value proposition, but you always have to have a consumability value proposition and a low cost-of-ownership value proposition.

Our customers say, “Geez. We could do what your box does with software running on a server, but the operations folks tell us it would be two times or four times more expensive to maintain, because we have to patch all the different things that are on there. It’s not the same everywhere in the world in our infrastructure. Whereas with your box, we configure it; we load a firmware image, and it’s always the same wherever it exists.” Again, from my experience, that’s the way people treat routers.

So, our view is an appliance is three things that the customer buys at the same time: They buy hardware, software, and support, and it’s all together. That’s really what we think is the core value proposition. It’s cool to make a VMware image with your stuff that someone can easily deploy, but that’s something different. That's a solution, an application, a bundle, or something.

Kobielus: I think that the three core definitions of an appliance should be, "It is tangible." That’s something that you can actually throw against the wall if it screws up. Next, "Is it simple?" Now, Dana, "warehousing appliance" is not an appliance. It’s like saying that my Toyota Camry is an appliance. It’s the assemblage of many components, each of which can individually screw up. Then, thirdly, it should be pain free -- no setup and no administration or very little.

Gardner: Okay, so to understand, Jim Ricotta, you don’t consider a virtualized instance of something that’s bundled to be really an appliance?

Ricotta: No, we don’t. Again, you have to put that on a server somewhere and it doesn’t have the properties that an appliance has.

Gardner: You've got to be able to plug in, swap in, and swap out physically.

Ricotta: Yeah. I read a good article about the history of the networking business, and it talked about this transition I just described, where routing software moved into these boxes and then became very, very popular. This article noted that some of the early networking companies -- Cisco, Nortel, and others -- found that if you took software, locked it down, and put it in a box that had a fan and got warm, people had an affinity. IT people have an affinity for things that you plug in, that have a fan, get warm, and do something useful.

Gardner: Todd Biske, as an enterprise architect, you probably concern yourself mostly with software. Do you think through on the level of an appliance or do you let the operations people worry about that?

Biske: No. Actually, I’ve got a lot of background in working with appliances. When I was an enterprise architect back at an actual Fortune company, we had this natural convergence that was always occurring between the group responsible for our middleware, or our software infrastructure, and the network engineering team. You can look at something like an HTTP proxy, and you’ve got Apache as a software-based solution, but then there is also a whole variety of appliances that can do the same thing.

So, there is always this natural tension of smart network devices versus some of the software products that were involved. The key thing for me that hasn’t been mentioned yet is that it does have to be more than just commodity hardware with some preconfigured software put on it.

Marketers for companies looking at leveraging VMware machines and things that are preconfigured are looking for a term for this. "Appliance" does fit, because it gives you the right conceptual model.

A manager I worked for had the term "Dial-tone Infrastructure." You want to plug it in, pick it up, and it works. That’s the model that everybody is trying to get to with their solutions. But, when you're dealing with an appliance, you have to have that level of integration between the hardware and the software, so that you're getting the absolute best you can out of the underlying physical infrastructure that you have it on.

Any software-based approach that’s on commodity hardware is not going to be optimized to the extent that it can be. You look at where you can leverage hardware appropriately and tune this thing to get every last ounce of performance out of it that you can.

Gardner: So, you like the notion of having some secret sauce in this, but you also like the notion of not having to create that secret sauce yourself?

Biske: Absolutely. You always have to look at where you want to leverage it. Another example where the technology could be applied would be in the use of blade servers. The biggest knock that I see from software guys on appliances is that it’s this gateway model. You’ve got to figure out the appropriate choke point at which to have it. If you adopt a blade server architecture, now you’ve got this backplane that's the perfect gateway for a lot of these hardware-based capabilities.

The ability to leverage some of these appliance technologies and hardware-optimized solutions in a blade center approach has a lot of potential as well. Then, you’ve naturally got that choke point, and you don’t have to figure out, "Well, because I’ve got datacenters all over the place, I really need hundreds of these appliances, rather than just two or three, because of how I’ve designed my middleware distribution."

Ricotta: That’s a great point, Todd. I'm not here to introduce products on this call, lest I run afoul of all of IBM’s attorneys, but we are looking at different form factors, like blades, as a good way to expand the appliance portfolio.

Gardner: Great. Thanks, Jim. Brad Shimmin, any thoughts in this subject?

Shimmin: Absolutely. When I look at this, I see two camps. You’ve got the hardware manufacturers and then the software manufacturers in the SOA space, both seeing the benefits we’ve all been talking about thus far in terms of TCO, ease of use, and simplicity. Back to what Todd was saying, the key differentiator we’ve been talking about thus far is the performance: the speed at which these things run, and their abilities based on that.

When you look historically at appliances like SSL accelerators, the reason they're not sitting on servers today is because servers can’t keep up with that wire speed you need. If I look at something like Layer 7 Technologies, they have their XML accelerators, and I see that as a perfect way to utilize a piece of hardware to run something that needs to go fast. I look at companies like Cape Clear and others in the ESB space, and I see them desperately trying to make their ESBs go as close to wire speed as possible, although we know they’ll never get there. I see them saying, “I wish I was running on an appliance.”

I see the two sides converging, but at the same time, I see there being something very valid about a piece of software that acts like an appliance. Layer 7, for example, released what they called a "virtual soft-appliance." If it quacks like a duck, and walks like a duck, it is a duck, right?

But the difference is, it’s just not going to go as fast as it would on the Layer 7 device. If you and your enterprise are going to get all the advantages from a piece of software that you would get from a single piece of server hardware, and you don’t need the performance, I don’t see that as being a problem or something we should try to shun.

Gardner: Let’s talk about the hardware for a moment. Jim Ricotta, the conventional thinking around appliances is commodity-level x86 hardware, perhaps a Linux kernel, but IBM has other hardware lines, and there is Power and this new Cell architecture. We're also getting into multi-core in a big way. For an appliance, perhaps the end user isn’t necessarily concerned with the hardware or even the kernel. Is there an opportunity for the secret sauce to extend across different types of hardware in the future?

Ricotta: The idea with an appliance is that the clients don’t care what’s inside. They care about the functions that the device does. The way we have architected our product, we do have lots of choices. We can pick the right processors and, even before we became part of IBM, we had used some ASICs to speed up certain parts of the XML processing pipeline.

Now, we are doing much more of that. We’ve got some new projects kicked off, because IBM has a lot of state-of-the-art custom silicon and ASIC technology. So, yes, we will continue to leverage whatever hardware constructs give us the qualities we need in performance, cost, and reliability, and we will continue to shield the IT users from that, because they don’t really want to see it.

Gardner: I know you can’t pre-announce it, and we don’t certainly expect that, but there seems to be some momentum building here for a cascade of announcements at some point -- the year of the IBM appliance if you will. Is that going to be in 2007 or 2008?

Ricotta: We'll be active in both. You'll hear from us, later this year, as well as next year.

Gardner: Okay. So, the "years" of IBM appliances. Let’s revert to the SOA discussion. Is there anyone on the panel who wants to discuss why they think appliances dovetail conceptually with SOA?

Kobielus: They dovetail, because the very concept of an appliance is something that’s loosely coupled. It’s a basic, discrete component of functionality that is loosely coupled from other components. You can swap it out independently from other components in your architecture, and independently scale it up or scale it out, as your traffic volume grows, and as your needs grow. So, once again, an appliance is a tangible service.

Shimmin: I see it similarly, in that an appliance can act as an enabler for other pieces of software by providing the level of performance and scalability that those pieces can't achieve on their own, such as we are seeing with ESBs and other areas. Those pieces of software desperately need some piece of hardware somewhere that can get them the information they need in a timely manner.

Gardner: Do you think there is a parallel here between what we’ve seen on the World Wide Web in terms of content delivery networks and application management and acceleration, and what enterprises are going to want to do internally -- and not just enterprises, but also service providers, those who are going to be doing software-as-a-service (SaaS) and co-location activities, similar to what we’ve seen from Amazon and others?

I'll throw this back to Jim Ricotta. Is there a bit more than what we are discussing in terms of the role here?

Ricotta: We definitely see some parallels to what went on with the Web and CDNs. We have some discussions underway with network providers that have big corporate clients who are now launching their first B2B Web services, and they are basically utilizing SOA-type functions between organizations across Wide Area Networks. These carriers are looking at how to provide a value-added service, a value-added network to this growing volume of XML, SOA-type traffic. We see that as a trend in the next couple of years.

Gardner: Before we move on to our next subject, did anyone else want to address the general issue of appliances?

Baer: I have a very short observation, which is that history tends to go in cycles. I recall similar discussions with the CAD/CAM vendors back in the 1980s, with all their turnkey systems.

Gardner: Whoa! We're going way back.

Baer: Exactly. So, appliances are not new in this space. There’s always been a need to do optimized processing. We've just taken a detour during the era of open systems, but now we can start the approach again without the religious wars that we fought about 10 or 15 years ago.

Gardner: Great.

Ricotta: Let me make one more comment. I’ve heard a lot about performance in appliances, and I want to implore you all to think beyond that, and maybe talk to someone like Todd, who has done the ROI work, the evaluations, and all that kind of thing. It’s really about much more than performance. In fact, when I talk to our customers, it’s about TCO first, then "time to solution" and "time to deployment," and then performance.

Gardner: Jim Ricotta, do you have any metrics, a typical instance of a better ROI or reduction in total cost, compared to a distributed computing environment approach?

Ricotta: I can give you some data points that I collected. I’ve heard big global IT organizations, when they do their TCO calculation, say a router is $100 a month to support, a server is $500, and a DataPower SOA appliance is maybe $200 to $250. Those are the kind of ranges I hear.

Gardner: So, we are talking a potential 50 percent reduction in total cost?

Ricotta: Yes.

Gardner: Well, that does tend to get people’s attention.

Ricotta: Yes.
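To show where that figure comes from, here is a quick back-of-the-envelope check using the monthly support costs Ricotta quotes above. These are his anecdotal data points, not published pricing, and the comparison against the server cost is an assumption made for illustration.

```python
# Back-of-the-envelope check of the monthly support-cost figures quoted above
# (anecdotal data points from the discussion, not published pricing).
server_cost = 500                      # general-purpose server, $/month
appliance_costs = (200, 250)           # quoted SOA appliance range, $/month

for appliance_cost in appliance_costs:
    reduction = 1 - appliance_cost / server_cost
    print(f"${appliance_cost}/month vs ${server_cost}/month server: "
          f"{reduction:.0%} lower support cost")

# Prints 60% at $200/month and 50% at $250/month -- hence the
# "potential 50 percent reduction" discussed here.
```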

Biske: Something that hasn’t been brought up, and that I think organizations have to consider when they look at appliances versus software-based solutions, is the operational model. A lot of this middle space in SOA is about what I would call a "configure-not-code" approach. Appliances, by definition, are something you configure, not something you develop code for. So, they’re really tuned for an operational model, and not for a developer having to go in and tinker around with them.

A lot of the vendors claiming to produce software appliances are now trying to move closer to that. There’s still a big conceptual difference there, and that’s really where a lot of the savings in total cost of ownership can come from: how much work you have to go through to actually make a change to the policies being enforced by the software appliance or device. There are big differences between the products out there.
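To make that configure-not-code distinction concrete, here is a hypothetical sketch of a declaratively configured policy. The field names and enforcement logic are illustrative assumptions only, not the configuration model of DataPower or any other product; the point is that changing enforcement means editing configuration, not rewriting and redeploying code.

```python
# Hypothetical, illustrative policy -- not any vendor's actual configuration syntax.
service_policy = {
    "endpoint": "/services/OrderProcessing",
    "require_ws_security": True,      # reject unsigned SOAP messages
    "validate_schema": True,          # check payloads against the service's schema
    "max_message_kb": 512,            # drop oversized XML to protect back ends
    "forward_to": "https://internal-esb.example.com/orders",  # assumed back-end URL
}

def admit(message: dict, policy: dict = service_policy) -> bool:
    """Apply the declarative policy to an incoming message (sketch only)."""
    if policy["require_ws_security"] and not message.get("signed", False):
        return False
    if message.get("size_kb", 0) > policy["max_message_kb"]:
        return False
    return True

print(admit({"signed": True, "size_kb": 48}))    # True: policy satisfied
print(admit({"signed": False, "size_kb": 48}))   # False: fails the security check
```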

Gardner: So, you really like the idea of specifying policy through configuration, and that perhaps adds another layer of efficiency and cost reduction when it comes to creating an architecture.

Biske: Absolutely, but the key to it all, which Jim mentioned earlier, is standards. You don’t have much of a market for devices in this space unless you’ve got the standards.

Gardner: You're back to an integration problem.

Biske: Exactly.

On BPEL4People and WS-HumanTask ...

Gardner: Speaking of standards, let’s move on to our next topic. The BPEL4People specification came to fruition this week. Tony Baer, you wrote about it. Why don’t you tell us about this extension to BPEL, and about the separate specification, WS-HumanTask?

Baer: It’s interesting that they made both spec proposals separate. But, it’s not any type of surprise. IBM and SAP have been talking about this for about 18 months to two years, if I recall. What was a little interesting was that Oracle originally dissented from this, and now Oracle is part of that team.

Essentially, what the hubbub is all about is that the SOA folks have looked at BPEL and found it interesting. It does machine-to-machine well, or at least automated processes designed to trigger other automated processes based on various conditions and scenarios, and to do so dynamically. But the one piece that was missing is that most processes are not 100 percent automated. There’s going to be some human input somewhere, and that was pointed out as a major shortcoming of the BPEL spec.

So, IBM, SAP, Oracle, BEA, Adobe, and Active Endpoints have put together a proposal to patch this gap and submit it to OASIS. They’re doing it in two pieces. One piece is called BPEL4People: you add a stopping point in the process that says, "Put a human task here." That’s essentially BPEL4People. It’s a little more than that, but it essentially boils down to that.

In terms of the actual description of the task, the semantics of it, that becomes a whole separate standard called WS-HumanTask. Where I tend to see the value in this is that invoking a human task as a service doesn’t necessarily require orchestration. You don’t have to orchestrate in order to invoke a human task.

What makes this a little more interesting than a normal spec announcement is that it’s pretty controversial. It draws a lot of heated opinion, because you don’t sit on the fence on something like this. The BPM folks, who tend not to be IT folks but more process analysts, said, “Heck, BPEL has never been robust enough for our needs. It’s too simple. It’s too much of a lowest common denominator. It doesn’t represent the subtleties of complex processes.”

So, you have this “tug-of-war.” The announcement of BPEL4People and WS-HumanTask hasn’t settled it. It has brought the issue back, even louder. It just makes life kind of interesting here.
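As a rough illustration of the mechanism Baer describes, here is a minimal sketch, in Python rather than the XML the specifications actually define, of a process that runs an automated step and then invokes a human task as a service. The function names, data shapes, and simulated outcome are assumptions for illustration only, not the syntax of BPEL4People or WS-HumanTask.

```python
# Illustrative sketch of the "human task as a service" idea behind
# WS-HumanTask / BPEL4People. Names and data shapes are assumptions.
from dataclasses import dataclass, field

@dataclass
class HumanTask:
    name: str
    potential_owners: list                      # who may claim the task
    input_data: dict = field(default_factory=dict)

def invoke_human_task(task: HumanTask) -> dict:
    """Hand the task to a task service and (conceptually) wait for a person to
    claim and complete it. A real engine would park the process instance and
    resume on a callback; here we simulate an approval so the sketch runs."""
    print(f"Task '{task.name}' offered to {task.potential_owners}")
    return {"approved": True, "approver": "jane.doe"}   # simulated outcome

def order_approval_process(order: dict) -> str:
    # Automated step: machine-to-machine, classic BPEL territory.
    summary = {"order_id": order["id"], "total": order["total"]}

    # Human step: the "put a human task here" piece plain BPEL leaves out.
    outcome = invoke_human_task(HumanTask(
        name="ApproveLargeOrder",
        potential_owners=["purchasing-managers"],
        input_data=summary,
    ))

    # The rest of the orchestration resumes once the person responds.
    return "order released" if outcome["approved"] else "order rejected"

print(order_approval_process({"id": "PO-1001", "total": 125_000}))
```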

Gardner: Now, let’s go to Todd Biske. In the real world that you live in, people and process coexist, and we'd like them to coexist better. Does this satisfy some of the needs that you observe in the market?

Biske: I think we definitely need this. There’s a constant tension in trying to take a business-process approach within IT when developing solutions. If you look at the products that are out there, you have one class of products, typically called "workflow products," that deal with human task management, and then you have these BPM products or ESBs with orchestration in them that deal with the automated processes. Neither one, on its own, gives you the full view of the business process.

As a result, there’s always this awkward hand-off that has to occur between what the business user is defining as the business process and what IT has to turn around and actually cobble together as a solution around that. Finally getting to a point where we’re saying, "Okay, let’s come up with something that actually describes the true business process in the business definition of it," is really important. The challenge, though, is that it does potentially involve a fundamental change to the architecture of the solution.

It’s very different to develop a middleware product that can handle human workflow, because now you’ve got to have that state management. Previously, in an orchestration product, you didn't really have to worry about the state. The initial process gets kicked off, it automates that all the way through to the end, and you’re done. Then, you can release all of those resources for processing.

Now, you have to sit and go into this "wait" cycle for humans to do what they need to do, and you have to have a fundamentally different architecture for the solutions that provide that. It will be interesting to see, when products claiming to support BPEL4People actually appear, what that does to the landscape of products these vendors provide, and whether they have to take two products they previously had and combine them into one.
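Biske’s point about state management is the architecturally hard part: a purely automated orchestration can hold its state in memory for the seconds it runs, but a process that waits days for a person has to persist its state and resume later. A minimal sketch of that persist-and-resume pattern, with an assumed in-memory store standing in for a real database, might look like this.

```python
# Minimal sketch of the persist-and-resume pattern a human step forces on a
# process engine. The in-memory dict stands in for durable storage; a real
# engine would write state to a database and survive restarts.
import uuid

suspended_processes = {}   # process_id -> saved state (stand-in for a database)

def run_until_human_step(order: dict) -> str:
    """Run the automated steps, then suspend at the human task."""
    process_id = str(uuid.uuid4())
    state = {"step": "awaiting_approval", "order": order}
    suspended_processes[process_id] = state          # persist before waiting
    print(f"Process {process_id} suspended, waiting on a person")
    return process_id

def complete_human_task(process_id: str, approved: bool) -> str:
    """Callback fired when a person finishes the task, perhaps days later."""
    state = suspended_processes.pop(process_id)      # reload persisted state
    order = state["order"]
    # Resume the remaining automated steps from where the process left off.
    return f"order {order['id']} " + ("released" if approved else "rejected")

pid = run_until_human_step({"id": "PO-1002", "total": 88_000})
print(complete_human_task(pid, approved=True))
```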

Gardner: I wonder, on one level, whether this is going to address something, but also open up a potential can of worms around how things are done. It seems to me this is a Pandora’s Box that we’ve attached to business process and have now opened up. But there are potentially some benefits, particularly if you consider the growing interest these days in Web 2.0 or Enterprise 2.0 activities, where collaboration, social networking, and the "wisdom of crowds" are brought to bear on how businesses behave and how systems react.

Any response to that, Jim Kobielus or Brad Shimmin?

Kobielus: That’s right, Dana, because if you look at the whole notion of orchestration, it implies a rule-driven flow of context and control throughout a distributed process. It’s very much the machine assembly-line metaphor, but if you look at actual business processes, they’re very unstructured or semi‑structured and dynamically self-redefining. In other words, most real-world workflows are a coordination or collaboration process and not really amenable to strict rule definition or strict flow definitions upfront. Everything is very ad hoc.

I am not very sanguine about the prospects for BPEL4People to take off in terms of actual adoption in the real world, in real human-driven workflows. This stuff is just too messy for standards.

Gardner: So, you think there’s a need, but you don’t think this is the right band-aid or approach?

Kobielus: There is a need for modeling tools that can help organizations define roles, rules, and routes among human beings within workflows. But I see the human workflow industry and the BPM market as two distinct markets that don’t really benefit from a common standards framework. For me, the jury is still out on this whole issue.

Gardner: They remain orthogonal for you.

Kobielus: They definitely are orthogonal.

Gardner: Brad, how about you?

Shimmin: I'm glad we’re doing this, although it also feels odd that we’re pushing it out as two different standards, one with a really sad name. And if it takes the same course that BPEL did, it’s going to take at least another two years for this to become truly actionable.

Gardner: You don’t think there will be a line outside the store waiting for BPEL4People?

Shimmin: No, the iPhone line is not to be confused with the BPEL4People line. This can be useful if other standards, like BPMN, come along for the ride. If we can pull those along together, then this is going to make a big difference for people. As Jim was saying, the idea of creating business processes that involve humans is nothing new, but it’s something that’s very fleeting and hard to nail down. The folks who have been doing B2B integration for years have been looking at this problem and trying to solve it, because most of their processes, like order-to-cash, have some sort of human aspect to them, no matter what.

Gardner: Do we need to put little chips in people’s heads to communicate by, say, Bluetooth to an appliance? Is that what’s needed? Jim Ricotta, what’s your take on this?

Ricotta: It does seem like you’re not going to be able to realize the vision of SOA, unless you can work in the human aspect. I haven’t spent a lot of time with things like BPEL and the top levels of the SOA stack, so I can’t really comment about how workable it is, but it seems like it certainly has to be addressed somehow.

Gardner: So, we seem to be in agreement about the need. Tony Baer, do you want to have the last word on this -- or Todd Biske? There’s some pessimism about this approach.

Baer: I'm not very sanguine about BPEL4People. WS-HumanTask probably has some potentially interesting applications, if you have a very simple task, a real commodity task that’s done often and that you want to be able to reuse.

The fact that it’s divorced from the BPEL4People stack is probably a good thing, because there’s some use for it outside of that. I'm very leery of BPEL4People, and I think even the BPEL4People folks are not exactly sure of themselves either. The other thing I'll throw in, and I am not trying to imply any sort of ultimate solution, is that there are other approaches being attempted to solve the problem and get around the bottleneck.

The analysts, the process folks, do take to modeling tools, because they provide a high-level picture of their processes. I don’t know what’s going to come of this, but you’re starting to see some efforts to make models executable. Now, it’s not going to boil the ocean or anything like that, but it’s an interesting approach and might have some niche uses.

Biske: Following up on Tony’s and Brad’s comments, from the enterprise perspective I have much more interest in things like BPMN than in BPEL4People or even the original BPEL. Unlike WSDL and some of the other Web services specifications, which developers had to deal with directly, with no getting around it, a lot of this can be hidden.

A business process developer doesn’t have to deal with BPEL. They’re dealing with some graphical interface that the BPM product has provided, and behind the scenes, that may be turned into BPEL. I may want it for portability, if I decide to change my business process engine, but the average developer shouldn’t even have to see that.

They do need to work on things like the modeling tools. So, the efforts around BPMN are much more important for enterprise developers. The BPEL space is probably of interest just to the vendors, so they can promote some level of portability for these solutions across products, or, if you’ve got a heterogeneous environment, make sure the solutions work across that environment. The average developer shouldn’t have to deal with it.

On iPhone Day and GPL v3 Day ...

Gardner: Okay, thanks. People who listen to this or read it in future weeks might not recognize the importance of today, the 29th of June, but it’s iPhone Day, and it’s also GPL Version 3 Day. Apparently at noon Eastern, we’re going to hear about the long-awaited and somewhat controversial release of the new GNU General Public License, Version 3.

One of the things that’s caught my attention about this is that, since the Microsoft-Novell covenant on patent issues and protection for users of the SUSE Linux distribution through Novell, the people drafting this new version of the license decided there was a loophole that needed closing.

Apparently, the new terms were designed to prevent a repeat of the Microsoft patent covenant with Novell, and also to extend any such patent protection to anyone using similar products under GPL v3. It’s a little bit murky, this license, as it comes out. Apparently, some people will be moving to it by default, without even knowing it. Others have already stated that they’re going to stick with GPL v2 and, therefore, not move to version 3. I think it’s going to be a challenge for those using, deploying, and managing open source to sort that out.

There are also some possible issues around Sun Microsystems’ OpenSolaris kernel and operating system, and there might be an opportunity for the two to come together in such a way that you could get OpenSolaris under a GPL v3 license. Sun has all but said that it’s interested, but it hasn’t committed.

There’s also an interesting aspect in that the Apache Software License is going to be closer to compatible, with more agreement between the two, so that developers who have the ability can combine Apache-licensed and GPL v3 code without running afoul of, or being in violation of, either license. So, there are some potentially large impacts from the arrival of this new GPL v3.

Let’s go around the table and see what the impressions are. Tony Baer, do you think this is a big deal, or does it cast more confusion? And do you think it’s really politics more than technology?

Baer: My sense is it’s going to cast more confusion. Even Linus Torvalds has come out against GPL v3, saying that it puts things in too much of a straitjacket. I think it’s just adding yet another variant. If there were 50 open-source licenses, and I’m just picking that number arbitrarily, today there are 51.

Gardner: Well, they say that three quarters of open-source code in use is under the GPL.

Baer: Right, but the thing is whether it’s under GPL v2 or under GPL v3. I haven’t followed this really closely, but I would presume that if you’ve licensed your code, or you’re licensing code, under GPL v2, it’s not automatically advanced to v3. Correct me if I am wrong.

Gardner: I think it actually does move by default with some licenses, where the code is licensed under GPL v2 "or any later version" and doesn’t state otherwise.

Baer: Okay, that would make sense. It’s going to create a lot of confusion, because obviously the Microsoft-Novell deal was very controversial. The idea that an open-source vendor would even concede that a non-open-source vendor might have some intellectual property rights here, after the joke of the SCO lawsuit, didn’t sit well. Novell was trying to get a halo effect, saying, "Hey, we’ll protect all you SUSE Linux users," and instead it just incurred a lot of ire throughout the community, which said, "You just admitted that we might have some transgressions here."

Gardner: Novell came back and said, “No, no, we didn’t mean to make such a statement,” right?

Baer: Then Microsoft said, “No, no, no, that was our statement.” So, I just don’t see this solving anything. I think it’s just adding a lot more turbulence to the waters.

Gardner: What we haven’t heard is a response from Microsoft as to whether they think this license closes the loophole, and whether those who now go and get these coupons or vouchers for support will, in doing so, extend that protection to all users. As you point out, the actual Linux kernel is apparently going to remain under version 2. As with an appliance, do we need to bolt a lawyer onto every software distribution? It seems like it’s getting a little more complicated, whereas the point of open source was to get away from that. Anybody want to respond?

Shimmin: With this GPL v3, the open-source community is playing right into the hands of its detractors. So, I can understand why Linus Torvalds is against it, from that perspective alone. Even if it closes a loophole, it doesn’t matter what it does if it fractures an already shattered -- if I can say something bold -- licensing landscape.

The companies I worry about most with this are the ISVs who are utilizing open-source software in their wares. Over the last five or six years, we’ve seen a huge upswing, and companies are making good money using open source in their foundations. If you don’t have to build a J2EE server, great. You can build something on top of it and make good money on it. But now they’re going to be dissuaded a bit, and they’re going to have to look over their shoulders a lot more than they did in the past.

Gardner: Do you think the compatibility with the Apache license will make some things easier?

Shimmin: Well, go all the way to the MIT license, if you really want to take away the limitations and restrictions. You’re going to see a lot of vendors try to use software under those more permissive licenses. If they can use something that’s not going to come back and bite them two years later, I would do it. Wouldn’t you?

Gardner: That’s the whole point, isn’t it, to try to alleviate future complications and gotchas, and keep a straightforward focus on the software and not the legal issues? Todd Biske, you’re a practitioner. I assume you’re involved with using open-source code from time to time. Does this announcement and new version make your life easier or more complicated?

Biske: It’s probably going to make things more complicated. Again, I have been involved with enterprises where, as they figured out how they were going to leverage open source, they had to get the legal department involved. The more complexities that are introduced into that environment, the longer it’s going to take and the more painful it’s going to be for developers who want to leverage some of these solutions.

We haven’t seen any significant legal activity around the use of open source, with the exception of the SCO and IBM litigation and some of the other things out there, and we haven’t really seen any end users targeted. Should we get to that point, enterprises are really going to run in terror from anything open source. I hope that doesn’t happen, because that’s against everything the open-source community is trying to achieve. I tend to be pragmatic on these things, and any time I see someone taking an extreme position, it gives me concern.

Gardner: I guess the complexity is there, but virtually all companies in the software business have some relationship with open source now, whether they use it in their development, base their products on a platform that uses it, or are themselves open-source or partially open-source companies. It seems like this is not something to reverse, but something we need to manage. I wonder if Tony has any thoughts on that -- or any of you?

Ricotta: One thing I've observed, being part of IBM and product development, is that we devote a lot of resources, time, engineers, lawyers, and other people, to being very, very certain that if any open-source code is used, we know its origin and we can clear the legal hurdles. What this means is that we’re going to have to spend even more resources doing that. I don’t know if there’s a solution in sight, but that’s our view of it.

Gardner: So, this might require a little more due diligence, and perhaps there are some caveats in the license that will make things easier for some folks and more difficult for others. But, in the final analysis, if open source still has compelling business and technical value that outweighs this additional delta of complexity, it’s probably business as usual.

Ricotta: Yeah, and companies, ISVs, or commercial producers of software will still utilize open source, but it’s going to be more work to do it, more expensive, and the clients, the enterprise architects and others who select the vendors, are going to have to ask more tough questions.

Gardner: So, the lawyers end up winning.

Ricotta: I don’t know about that, but it ends up raising the cost of development of software. That’s for sure.

Gardner: Well, if we have no other thoughts on the GPL v3, I think we can wrap up our show. I want to thank our panel.

We’ve been joined by Tony Baer, principal at onStrategies. Thanks, Tony.

Baer: Thanks, Dana.

Gardner: Jim Kobielus, principal analyst at Current Analysis. Great to have you with us, Jim.

Kobielus: It’s been a pleasure.

Gardner: Brad Shimmin, also principal analyst at Current Analysis.

Shimmin: Thanks, Dana.

Gardner: Todd Biske, enterprise architect at Momentum SI. Thanks again, Todd.

Biske: Thanks, Dana. Good luck with your iPhone.

Gardner: I'm going to wait a little while on that. Also, a special thank you to our guest, Jim Ricotta, vice president and general manager of appliances at IBM Software Group. Thanks for coming, Jim.

Ricotta: Thank you for asking me, and a very good discussion. Glad to be part of it.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to BriefingsDirect SOA Insights Edition Vol. 21. Come back and listen again next week.

Listen to the podcast here. Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production. If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or in becoming a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.

Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 21. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.