Monday, March 05, 2007

BriefingsDirect SOA Insights Analysts on SOA Suites Vs. Best-of-Breed SOA, and Master Data Management

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition, recorded Jan. 26, 2007.

Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Dana Gardner at 603-528-2435.

Gardner: Hello, and welcome to the latest BriefingsDirect, SOA Insights Edition, Volume 10. This is a weekly discussion and dissection of Service-Oriented Architecture (SOA)-related news and events with a panel of industry analysts and guests. I’m your host and moderator Dana Gardner, principal analyst at Interarbor Solutions, ZDNet software strategies blogger, and Redmond Developer News magazine SOA columnist.

Our panel this week consists of show regular Steve Garone. Steve is a former IDC Group vice president, founder of the AlignIT Group, and an independent industry analyst. Welcome again, Steve.

Steve Garone: Hi, Dana, great to be back.

Gardner: Also joining us again is Joe McKendrick, research consultant, columnist at Database Trends, and a blogger at ZDNet and ebizQ. Thanks for coming, Joe.

Joe McKendrick: Thanks, Dana, glad to be here.

Gardner: Also, Tony Baer is making another appearance. He is principal at onStrategies. Thanks for coming, Tony.

Tony Baer: Hey, Dana, good to be here.

Gardner: We’re also talking with Neil Macehiter. He is a research director at Macehiter Ward-Dutton in the U.K. Thanks for coming, Neil.

Neil Macehiter: No problem, Dana.

Gardner: And last on our list -- we have a large group today -- Jim Kobielus. Jim is a principal analyst at Current Analysis. Thanks for coming along, Jim.

Jim Kobielus: Thanks a lot, Dana. Hi, everybody.

Gardner: For our first topic this week -- and this is the week of Jan. 22, 2007 -- we’ll begin with the notion of SOA suites, an emerging and definable market segment. We’re going to be looking at how mature such suites are. I suppose we should also look at the distinction between the best-of-breed approach, where one could pick and choose various components within their SOA arsenal, and a more complete suite -- a holistic, full-featured set -- along with the benefits, trade-offs, and detriments of each of these approaches.

Jim, you’re the one who was interested in this topic. Why don't you give us a little set-up as to what you think the state of the market is?

Kobielus: Thanks a lot. Over time, we’ve all been seeing this notion of a SOA suite take root in the industry’s productization of their various features, functions, and applications. Now, the big guys -- SAP, Oracle, Microsoft, webMethods, and, for that matter, lots of software vendors -- are saying, “Hey, we provide a bigger, 'badder' SOA suite than the next guy.” That raises an alarm bell in my mind -- it strikes me as an anomaly, or an oxymoron -- because when you think of SOA, you think of loose coupling and virtualization of application functionality across a heterogeneous environment. Isn’t this notion of a SOA suite from a single vendor getting us back into the monolithic days of yore?

This thought came to me when I was reading a Wall Street Journal article earlier in the week about SAP, “SAP Trails Nimble Start-Ups As Software Market Matures.” There was one paragraph in there that just jumped out at me. They said, “Some argue that SAP's slump highlights a broader shift under way in business software, in which startup companies wield an advantage over established titans. Under this traditional business model companies buy large, costly packages of software from SAP and Oracle to help them run their back-office functions and so forth, but as the business software industry matures, many companies already have the big software pieces they need, and feel little urgency to replace them.”

So, clearly SAP has been a driver in the SOA suite arena for a few years now with NetWeaver. Is the notion of a SOA suite an oxymoron? Are there best-of-breed suites? There are also best-of-breed SOA components, and I’m not sure that the notion of a suite -- an integrated suite -- is really what companies are looking for from SOA. They want best-of-breed components with the assurance, of course, that those components are implementing the full range of SOA standards for heterogeneous interoperability. So, I’m taking issue with this notion of a "best-of-breed" suite. Anybody else have any thoughts on that?

Macehiter: I’ll give you a couple of perspectives on this. We have to recognize that organizations increasingly are looking to rationalize their supply strategy. So, they’re increasingly looking to deal with a smaller number of vendors and suppliers, which is, in part, driving the move toward larger vendors attempting to offer a suite or portfolio of product capabilities that can help organizations manage the lifecycle of an SOA initiative.

That’s one factor that’s driving it. The second issue is the use of the term "suite," and what that really entails, versus what the market is currently delivering. Companies are putting together a bunch of products under a common brand, whether it’s Oracle Fusion, SAP NetWeaver, or under the IBM WebSphere brand. That's one thing. Actually making sure the products are well integrated and that they have a common management environment, common configuration environment, and common policy definition environment is the second thing. That’s one element of it.

The second issue is what actually constitutes a suite to support service-oriented initiatives. There is a tendency, certainly among the larger vendors, to focus on SOA from a development and integration proposition, rather than thinking more broadly about the capabilities you need to support service-oriented initiatives throughout the lifecycle. That extends beyond development and integration into things like security and identity, which have to be incorporated into an overall SOA offering.

Management and monitoring, usage management, and audit logging are in the broad range of capabilities that you need. There’s a question as to whether it’s feasible for one vendor to offer all of the capabilities you need to support an SOA initiative, versus a set of core capabilities plus the hooks and interoperability that allow you to exploit existing security and management infrastructure. There are a number of factors that we need to consider, and a lot of the SOA suite propositions are very much focused around development and integration, rather than management and monitoring, and really dealing with the lifecycle of services.

Gardner: I guess that explains it, and it's consistent with the past. If you can have a cohesive approach to the development side, then the deployment tends to follow, and that’s where you monetize. Steve Garone, what do you think of this breakdown between best-of-breed and a suite?

Garone: All of us on this podcast today know that the debate over the best-of-breed versus integrated-stack approach has been going on for many years in a variety of scenarios and contexts, and it hasn’t stopped. I don’t really like the word "suite." It reeks more of marketing than functionality. I think what you really have to look at in terms of SOA is how people are actually approaching getting into building SOA-based environments.

What we’ve seen so far -- and we’ve talked about this on other podcasts -- is that up to this point people have tended to do pilot projects that are much lower in scale than what they will eventually do if they have success with the immediate projects. One tends to think that what they’re going to do at that point is pick and choose the individual products and functions that they need to make that happen in the short term.

I think that’s what we’re seeing, but I also sense that, despite the fact that everybody wants an open environment where they can pick and choose and not be tied to one vendor, what overrides all this is a desire to get things done quickly, efficiently. They want a way in which they don’t have to be concerned about integrating a lot of products and what that entails, and having potentially an unreliable environment. What that points to is working toward one vendor. End users will do that even in the short term by choosing someone that they know they can grow with in the future.

Gardner: Pragmatically, these vendors are also looking at their future and they’re saying, “We have an installed base. We have certain shops where we’re predominant. We want to be able to give them a clear path as to how to attain SOA values from their investment in our legacy. Therefore, we need to follow through with add-ons that smack of an integrated-stack approach.” So, it is almost incumbent on vendors to try to produce this "whole greater than the sum of the parts" -- if not to build out more SOA business, then just to hold on to their previous business.

Garone: That brings up another interesting point, which is about the vendors, especially the platform vendors. The larger vendors, like IBM, Sun, and so on, tend to try to walk the line between offering a fully integrated stack of software to accomplish whatever the goal is -- in this case, SOA implementations -- and also being what might be called “integratable.” This means being able to say: you can bring in another product, and because we adhere to standards, we’ll be able to help you do that.

They try to walk that line; where that really makes a difference is not so much what you are going to do in the future, but rather what you have done in the past. If you've got an existing registry that you used for identity management with your current applications, if you have existing app servers -- which is probably more common -- whomever you choose is going to have to be able to allow you to continue to work with those as part of a legacy environment. It sounds funny calling application servers legacy, but at this point you can do that, and that’s really where the "integratability" aspects of a fully integrated stack come into play.

Gardner: So how about you, Joe McKendrick? Do you see that the drive for simplicity and working from your installed base creates a compelling case for an integrated SOA approach? Or is the trade-off such that this is really not going to happen anymore? Is that the old way -- and is SOA fundamentally different, and therefore one should look for a different strategy?

McKendrick: Perhaps a little of both, Dana. Basically the industry still operates under the traditional mode where a lot of enterprises rely on one vendor -- we'll call it a master vendor -- that supplies most of its solutions. We see that in the IBM and in the Oracle markets. I agree with Jim that the notion of a SOA suite is very much an oxymoron. The idea of a SOA is to have "hot-swappable" software components that you could install and take out as needed in a loosely coupled architecture.

Dana, you hit upon the point that the vendors themselves have to demonstrate that they have some type of path to their installed base. They need some type of path to show that, "Yes, we are on top of the technology." In fact, if you speak with vendors out there about this strategy, even if the products or the path they're offering aren't something customers are adopting at the moment, it’s something customers want to see from the vendor. If Oracle, hypothetically, wasn’t talking about SOA at all, there would be a lot of consternation, a lot of concern, among their installed base as to where the vendor is going.

Gardner: SAP would walk in, and their sales people would beat them up in these accounts, right?

McKendrick: Exactly. Now, Oracle is an interesting case. When I think of suites, I think Oracle demonstrates the best tendency in this area. In fact, they called their offering "The SOA Suite," and they include a number of components. I have spoken with some companies that have Oracle installations. Now, it should be noted that typically the customers for these suites are the installed base. The people who will be buying into the components of the Oracle SOA suite are companies that either have the Oracle applications, the E-Business suite or the Oracle database underneath. And, in most cases, they are buying into components of the suite.

I've heard a lot of positive things said about the BPEL Process Manager, for example. And, they are buying into pieces of the solutions, and as Steve pointed out -- it’s still in the pilot-project stage. We’re not seeing widespread enterprise implementations, but they are beginning to buy into pieces of these solutions such as the BPEL Process Manager.

Gardner: Hey, Tony Baer, how about you? Do you think that we are mature enough in SOA that we should be looking for homogeneity when it comes to tools and even the deployment side? Or is heterogeneity the issue that we are trying to manage?

Baer: As Steve was saying before, we can’t decompose it down to the age-old argument of best-of-breed versus integrated-stack. There is always going to be a tension between homogeneity and heterogeneity. For the customer, it’s going to be dictated obviously by what is already in place, basically as Joe pointed out. If 60 percent of my functionality, or even say 30 or 40 percent of my functionality, is SAP, I’m likely to listen when SAP tells me about a NetWeaver Solution.

On the other hand, if I’m in a sector that does not lend itself to packaged solutions, I will more than likely tend to take a best-of-breed approach -- especially if I do a lot of homegrown development, because my business is so unique. There will always be that creative tension there. That being said, the fact is that at the infrastructural level, there is a desire to have consistency. I don’t want to have five security engines. I don’t want to have three different authentication systems, if possible. Obviously, we’re never going to get that one centralized identity repository in the sky, but I want to at least have my management framework be as consistent as possible and to manage what will inevitably be, in most large organizations, a federation of different installed bases of different technologies.

The other side of this is that for vendors -- and Oracle is probably the best poster child for this -- the reality in the enterprise software industry has been one of merger, acquisition, and consolidation. This means that vendors who started as organic developers now have four or five different product lines and each has had a separate lineage. The only way to put some rationality there is something like an Oracle Fusion SOA framework. Oracle has to develop this, if only out of the necessity to keep its own product offerings consistent.

Gardner: Now, back to Jim Kobielus’s point about this integrated approach being an oxymoron for SOA. Shouldn’t the vision of SOA allow us to have it both ways? If you have a culture and mindset in an organization, maybe it’s because of your legacy. Maybe it’s because of how you operate and the value you’ve perceived in past IT investments. Thus, you might want to remain with more of a single-vendor or an integrated-stack approach, but there might be other vendors without a legacy to drag along. The enterprise may want to take advantage of any innovation they can to be functionally heterogeneous and to explore and test open-source componentry as that becomes available. Shouldn’t SOA allow both of these approaches -- and pretty much equally?

Macehiter: In principle it should. We have to be careful to distinguish between the infrastructure that you require to enable SOA initiatives and what you’re trying to enable with that service-oriented initiative. Just because you want to have a loosely coupled component that you can combine in multiple ways to deliver business outcomes, doesn’t mean that the infrastructure that underpins that has to be similarly loosely coupled and based on the heterogeneous offerings from different vendors. So, there is a separation there.

We also have to bear in mind the challenges around going for a best-of-breed approach, which are well understood. It’s not so much whether the individual components can actually talk to one another, but more about things like the management environment, how you manage the configuration, and how you deal with policy definition.

We’ve done some detailed assessments of service infrastructure offerings from SAP, BEA, IBM, Oracle, Sun, and webMethods. If you actually dig under the covers, you will see that each of the components has its own policy definition approach. So, the way you configure policy within the orchestration engine is inconsistent with the way you do it within the security and identity management capabilities, and that challenge occurs within suites. That’s going to be compounded as you look across different components. That introduces risk into the deployment. It reduces the visibility of the end-to-end deployment. It's those factors that are going to be important, as well as whether a communication and brokerage capability can integrate with the registry and repository. There are a number of factors that you have to bear in mind there.

Kobielus: I agree -- I think that the notion of a best-of-breed SOA suite makes more sense from an enterprise customer’s point of view. Most enterprises want to standardize on a single vendor and a single stack for the SOA plumbing -- the registries and repositories, and also the development tools. They want the flexibility to plug in the different application-layer components from Oracle, SAP, and others that are SOA-enabled and that can work with that single core plumbing stack from a single vendor.

Gardner: Perhaps the tension here is between what aspects of SOA should be centralized, repeatable, simplified, and consolidated, and which ones should not. It’s not really a matter of SOA homogeneous or SOA heterogeneous. In moving toward SOA, should you say, "Listen, this is going to be common throughout. Let’s reuse this. Let’s manage our policy as centrally as possible.

"We might say the same for other federated and directory services. We might say the same for our tooling, so that we don’t have myriad tools and approaches from our developers. On the other hand, we want to have great flexibility and loosely coupled benefits, when it comes to which services, be they internal or external, be they traditional nature or more of a ‘software as a service’ nature that we can easily incorporate and then manage those as process."

So, is the dividing line here, Steve Garone, between what architecturally makes sense as centralized and not?

Garone: Actually, I’ve just been champing at the bit here a little, because I’ve been listening to the conversation. This is a really important point, mostly because there is a lot of stuff -- a lot of analyst opinion, a lot of blogging -- floating around that I’ve read, and I know you guys have probably read, on this very subject: the sort of philosophical dichotomy between what SOA is supposed to be and the notion of an SOA suite or an SOA integrated stack.

Frankly, from the end-user perspective, the message ought to be that the whole notion of SOA, as it relates to loose coupling, is really focused on the services and the applications that you’re going to deliver. That doesn't imply or even suggest that your infrastructure cannot be based on an integrated stack or software that’s designed to work well together. It allows you to work with a single vendor, and to be very efficient about how you develop, deploy, maintain, and manage your environment.

Gardner: We also have to remember that this evolution of SOA is not happening in a vacuum. There are other major IT trends and business trends at work. Many of them are focused on trying to reduce the cost of ongoing maintenance and support -- somewhere between 60 and 80 percent of total IT costs, and maybe more -- to free up discretionary spending and to reduce the total spending for IT in many organizations. The trends often involved include data center consolidation, moving toward a more standardized approach for underlying hardware, embracing virtualization, grid, and utility principles, and so on. Perhaps we have to recognize that even as SOA moves on its own trajectory, organizations are going to be consolidating and looking for commonality of services, and for improved support and maintenance types of features throughout their infrastructure.

Garone: Just to make one more small point. The one area that may diverge from the philosophy that we’ve been talking about is in the area of open source. I think that people who go out and try to implement SOA-based solutions on a variety of levels using open source technology may tend to take a more best-of-breed, individual-component approach than those who would run to their local IBM sales rep and say, “What do I do with SOA?” Even that’s going to change over time, and we’re starting to see SOA suites develop around open-source technology as well. So, that’s going to move in that direction as time goes on.

Gardner: That's another trend that is in tandem with SOA and needs to be woven together with it. It’s obviously a large undertaking. I‘m also reminded of an interesting briefing I took this week with Informatica and Ash Kulkarni. We had a really long, interesting discussion about the role of data, master data, and metadata when it comes to moving toward SOA. We really shouldn’t lose track of the fact that as you move to applications as services, and you go loosely coupled, and you adopt more reuse across development with common frameworks, and use rich internet application interfaces -- what about the data?

The data has to be managed as well. Increasingly, companies that have had mergers and acquisitions, or have just gotten myriad applications with varying views of something as specific as a customer identity -- there might be 10 or 15 different views of a customer, as defined by a variety of different applications. How do you manage that? And when you think about the progression of the data, it seems to me that if not in actuality, in a virtual sense, you want to become centralized with your data so that data can be used in a clean and impactful or productive way across all of your services.

Does anyone out there have some thoughts about what considerations to have when it comes to data in this decision about best-of-breed or integrated approach?

Macehiter: I was just going to say, the issue is that data has always been treated as a second-class citizen; it has been the product of applications and only subsequently analyzed. More organizations are recognizing the need to treat data as a peer, and to deliver access to information, whether it’s structured or unstructured, as a service, which can be incorporated as needed into business processes.

IBM was quick to identify this when they sold the information-as-a-service strategy. And Oracle, surprisingly, given where they have come from, has actually not really enunciated a data-services vision and platform -- although I did notice something on the Oracle Technology Network a couple of weeks ago, where they are just starting to talk about Oracle Data Integrator, based on an acquisition they made of a company called Sunopsis.

So, increasingly that's going to become part of the broader suite proposition. And, this is not just in the area of data but -- more broadly as customer adoption matures -- what constitutes an SOA suite. We’ve seen this around registry and repository, which historically was a best-of-breed proposition from the likes of Systinet and Infravio. Where are they now? They're part of a broader suite proposition from HP and webMethods, respectively. We’ll see this again.

Through acquisition what constitutes a suite will broaden as organizations become more mature in their approach to SOA. "Information as a service" is exactly one of those areas. Initially, that will probably be served by best-of-breed components, and then through a combination of acquisitions or very close partnership relationships will gradually be subsumed into what organizations believe is a SOA suite.

Gardner: Any other thoughts on the data services level and how that relates to this discussion?

Kobielus: I cover SOA for Current Analysis, primarily with reference to data management; and SOA in the data management realm is really consistent with master data management (MDM) as a discipline. Basically, master data management revolves around how you share, reuse, and enable maximum interoperability of your core master reference data, your single version-of-truth information, which is maintained in data warehouses and various operational data stores, and so forth.

Informatica is one of many vendors -- you mentioned Informatica earlier -- that has a strong MDM strategy. But there are a lot of enterprise information integration (EII) vendors out there. EII really revolves around federated MDM, where you keep the data in its source repository and then provide a virtualized access layer. This allows your business intelligence and other applications to access that data through a common object model and a common set of access schemas -- wherever that data might reside -- facilitated through that virtualized access layer. That’s very much EII as implemented by Business Objects, BEA, and many other vendors, and is very much the approach for federated MDM.
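
To make that federated picture more concrete, here is a minimal sketch in Java. It is purely illustrative -- the class and method names are hypothetical, not any vendor's EII API -- but it shows the core idea: adapters map each source system's native schema into one common customer model, and a facade fans a single query out to every source while the data stays where it lives.

```java
import java.util.ArrayList;
import java.util.List;

// Common model the virtualized access layer exposes to consumers.
record Customer(String id, String name, String source) {}

// Each adapter hides a source repository's native schema behind the common model.
interface CustomerSource {
    List<Customer> findByName(String name);
}

// Adapter over, say, a CRM database (stubbed with in-memory data for the sketch).
class CrmAdapter implements CustomerSource {
    public List<Customer> findByName(String name) {
        return List.of(new Customer("CRM-001", name, "CRM"));
    }
}

// Adapter over an ERP system with its own, different native schema.
class ErpAdapter implements CustomerSource {
    public List<Customer> findByName(String name) {
        return List.of(new Customer("ERP-77", name, "ERP"));
    }
}

// The EII-style facade: one query, fanned out to every source; nothing is copied
// into a central warehouse, and the records remain in their source repositories.
class FederatedCustomerService {
    private final List<CustomerSource> sources;

    FederatedCustomerService(List<CustomerSource> sources) {
        this.sources = sources;
    }

    public List<Customer> findByName(String name) {
        List<Customer> results = new ArrayList<>();
        for (CustomerSource s : sources) {
            results.addAll(s.findByName(name));
        }
        return results;
    }
}

public class EiiSketch {
    public static void main(String[] args) {
        var service = new FederatedCustomerService(
                List.of(new CrmAdapter(), new ErpAdapter()));
        service.findByName("Acme Corp").forEach(System.out::println);
    }
}
```

A real EII layer adds query push-down, caching, and security on top of this, but the structural point is the same: consumers program against the common model, not against any one repository.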

Gardner: Let me pause you there for a minute, Jim. If a virtualized centralization works for information, why wouldn’t it work for other aspects of SOA?

Kobielus: Oh, it does. Virtualization, of course, is one of the big themes in SOA.

Gardner: You can enjoy the benefits of a homogeneous approach, but, in fact, have great heterogeneity beneath the covers. Isn't that the whole idea of SOA -- to provide homogeneity in terms of productivity, control, and management, and yet with flexibility and agility?

Kobielus: SOA, first and foremost, is a virtualization approach -- virtualization defined as an approach for abstracting the external call interface from the internal implementation of a resource, be it data or application functionality.

Gardner: So SOA is best-of-breed -- and it’s integrated. And you can pick and choose how to proceed, based perhaps on your legacy and your skill sets.

Macehiter: We just have to be clear to distinguish between the assets or resources that you’re virtualizing through SOA, which are typically going to be functional assets, and whether you need to virtualize the infrastructure and apply SOA to the underlying infrastructure. That’s the key distinguishing point. And that gets to the point that was being raised earlier about virtualized access to information.

The infrastructure could be common, but the information assets that you’re accessing will be in heterogeneous repositories, accessed in a number of different ways. This is exactly what IBM is doing with its offerings around information-as-a-service, and BEA as well. It's having the equivalent of application adapters applied to information assets, and then exposing those through a service interface, so that where the information is, how it’s stored, and what format it’s in are all virtualized and transparent.

Kobielus: You mentioned Oracle’s acquisition of Sunopsis, which is interesting, because Sunopsis is an ETL vendor and the transform side of it is critically important. When you are extracting data from source repositories, you’re transforming it in various ways. Traditionally, Sunopsis’s tools have been used primarily to support transformation of data, which will then be loaded into centralized data warehouses.

But transformation functionality is important, whether you’re doing it in an ETL data-warehousing environment -- in other words, the traditional bus for MDM -- or whether you’re doing the transformation in an EII environment. There, in fact, you are not ultimately loading the transformed data into a central store, but rather simply transforming the data, keeping it in its original schema, but transforming it so it can be rationalized, harmonized, or aligned with a virtualized data access model provided by that EII environment.

Macehiter: Exactly. The transformation should occur behind the service interface, and this is why you need the idea of common information models and common schema models.
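
That point about transformation living behind the service interface can also be sketched briefly. The following Java fragment is hypothetical -- the field names and classes are invented for illustration, not drawn from any product discussed here -- and simply shows two source systems carrying the same customer in different native schemas, each mapped into one common information model before a consumer ever sees it.

```java
import java.util.Map;

// The canonical model defined by the common information/schema model.
record CanonicalCustomer(String customerId, String fullName, String countryCode) {}

class CustomerTransformer {
    // Source A: a billing system with terse column names and split name fields.
    static CanonicalCustomer fromBillingSystem(Map<String, String> row) {
        return new CanonicalCustomer(
                "BILL-" + row.get("cust_no"),
                row.get("first_nm") + " " + row.get("last_nm"),
                row.get("ctry").toUpperCase());
    }

    // Source B: a CRM system with different field names and a combined name field.
    static CanonicalCustomer fromCrm(Map<String, String> row) {
        return new CanonicalCustomer(
                "CRM-" + row.get("id"),
                row.get("displayName"),
                row.get("country").toUpperCase());
    }
}

public class TransformationSketch {
    public static void main(String[] args) {
        var billing = Map.of("cust_no", "1001", "first_nm", "Ada",
                "last_nm", "Lovelace", "ctry", "gb");
        var crm = Map.of("id", "77", "displayName", "Ada Lovelace", "country", "gb");

        // Consumers of the service only ever see the canonical model.
        System.out.println(CustomerTransformer.fromBillingSystem(billing));
        System.out.println(CustomerTransformer.fromCrm(crm));
    }
}
```

Whether that mapping runs in an ETL pipeline or on the fly in an EII layer, keeping it behind the interface is what lets the source schemas vary without disturbing the services built on top of them.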

Gardner: Before we get down too much in the weeds on EII -- we can address that perhaps in a whole show in the future with a guest who is very much involved with that industry. Let’s move on to our second topic today, given the amount of time we have.

There is a burgeoning number of critical skill sets that need to be applied to SOA. We’ve talked about data, whether it’s cleansing, transforming, virtualizing, and approaching some sort of MDM capability. We have talked about development and process, and BPEL. We have talked about infrastructure. There is management, the architectural overview, and the question of what our philosophy is.

It seems like we’re going to need a lot of very skilled people who are both generalists, as well as highly specific and technical. Because for SOA to work, a bunch of people who are highly specific -- but don’t share the same vision or have a general sense of the strategy -- probably won’t fare too well. This issue comes to us from Joe McKendrick. Joe, give us a little setup and overview of where you think things are headed in terms of the necessary skill sets companies are going to need in order to accomplish the promise of SOA.

McKendrick: Thanks, Dana. It’s interesting. Actually, the impetus for my thinking on this came from a report Ron Schmelzer posted and I reported on my blog this week.

Gardner: Ron being with ZapThink.

McKendrick: That’s correct. He is sounding the alarm bells that the folks we need to drive SOA forward in the enterprise are this class of enterprise architects -- enlightened architects, if you will. There are a lot of SOA projects everybody is interested in. Everybody’s kind of ginned up about SOA now, and we’ve been hearing about it. Enterprises really want to begin to either pilot SOA or move it past the pilot stage, and 2007 should be a big year.

Ron Schmelzer feels there may not be enough architects who can take this high-level view and drive this process forward. Now, it’s interesting, but when I posted this on my blog, I got a lot of feedback that perhaps architects are not the only ones who can really lead this effort. There are plenty of developers out there, high-level developers, who can also contribute to the process and interact with the business. The key behind this argument is that you need folks who know what’s going on technically, but can interact with the business. It can be a rare skill to have both.

Gardner: Yeah, this is going to be demanding. You can get Oracle-certified, you can get Microsoft-certified, IBM-certified. Where do you go to become SOA architect-certified?

McKendrick: Where do you go in terms of higher education institutes to get trained on architectural planning and network design? I’ve talked to lots of people who say, “Yeah, we look at the computer science graduates coming up, but how many of these people really, fully have had any training or courses whatsoever on broad architectural subjects like SOA?" Very few.

Kobielus: That’s true. Not to get reminiscent or anything, but 10 years ago, when we started seeing Java ramp up, we saw a lag there as well. A lot of organizations were really hungry for Java developers, and the universities came through with more focus on it, but later than probably most organizations wanted. What will happen here is that while this ramp-up goes on, we might see a lot of new business and new interest in service organizations that can provide the professional services required to get people through it.

Macehiter: Yeah, that’s true. That’s going to be an important -- absolutely an important -- source. Also, there’s some work under way. I don’t know whether any of you are familiar with the International Association of Software Architects (IASA), which is really trying to foster a community that shares best practice around software architecture, including SOA.

You hit the nail on the head in terms of the key skills that are required around being able to interface with the business. One of the skills and attributes that you also need as an SOA architect is the ability to balance supporting short-term business outcomes while keeping an eye on the longer-term objectives in terms of gaining high quality and maximizing IT value. That’s an equally difficult skill, because too often architecture historically has been focused on quite discrete initiatives or infrastructure -- I’m thinking about server architecture or network architecture -- rather than this broader perspective. There are also efforts coming from such things as OASIS and what they are trying to do around things like SOA blueprints. It would be useful to get someone from OASIS on a future podcast to discuss this, because this is where the education is coming from.

Gardner: I think that if everyone goes about SOA methodically on his or her own track, based on their own experience, we are going to come up with a real mish-mash, and that’s going to be a problem. There needs to be some standardization around methodology.

Coincidentally, in April we’re expecting to see version 3 of the Information Technology Infrastructure Library (ITIL). This is focused on the lifecycle of services. It’s really more at the IT service-management level than pure technology, but it does offer blueprints and books and standardized approaches on how to set up an IT department and manage some of these organizational things. It strikes me that that might be another influence on bringing some kind of a cohesive approach to SOA, rather than it being totally scatter-shot.

Macehiter: ITIL came out of the U.K. government. What was interesting about it is that it was driven very much from the experience of people who were grappling with these very challenges. That’s where it’s going to come from in SOA. It’s going to come from things like the IASA and other practitioners defining best practice, rather than from a more theoretical, academic approach to defining the ideal methodology.

Gardner: It's my understanding that the global systems integrators are very interested in this coming version of ITIL, and some of these other standardization-for-methodological-benefit approaches. As I’ve said before, SOA is the gift that keeps giving, if you’re a systems integrator in a professional services organization. It will be really interesting to see how successful they are at bringing a standardized set of approaches to the SOA architect role and whether that’s actually in their best interests over time.

McKendrick: And when it washes up on these shores, we’ll call it American ITIL.

Gardner: Actually, the number of ITIL users is highest in the private sector and in North America, as I understand it, although it’s hard to see to what degree people actually use it. I think people use it in dribs and drabs and not in its entirety.

McKendrick: It’s going to be interesting. There’s a lot of emphasis on compliance now, and data management is a big part of it as well. ITIL is really going to come into play, and should be coming into play, because processes are outsourced. Because processes are being managed by third-party firms, you need to have across-the-board standards to ensure that the data is being managed properly and in accordance with some type of universal standard. And, the regulators are going to want to see that as well.

Gardner: Well, I think we’ve come up with two separate shows we'll need to do -- one on enterprise information integration (EII), where we dig into that topic specifically; and then, perhaps, we should do an ITIL show, get someone who is familiar with some of the authoring there, and dig into its implications for SOA.

Well I think that wraps it up for today. We’ve covered quite a bit of ground in a short amount of time. I want to thank all of our guests. We’ve had Steve Garone, Joe McKendrick, Neil Macehiter, Tony Baer and Jim Kobielus. This is Dana Gardner, your host and moderator for this week’s BriefingsDirect SOA Insights Edition. Please come back and join us next week. Thank you.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts or becoming a sponsor of this or other B2B podcasts, please feel free to contact me, Dana Gardner, at 603-528-2435.

Listen to the podcast here.

Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 10. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.

Wednesday, February 21, 2007

Transcript of BriefingsDirect Podcast on ITIL v3 and IT Service Management

Edited transcript of BriefingsDirect[TM] podcast with Dana Gardner, recorded Jan. 22, 2007.

Listen to the podcast here.
Podcast sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about Information Technology Service Management (ITSM) and a related area, the evolving Information Technology Infrastructure Library (ITIL). These are complementary trends that are helping to mature IT as a customer-focused, quality-of-service activity.

We’re not so much focused on how technology is deployed and actually used on a product-by-product basis, but really on how IT is delivered to an enterprise for its internal consumers of IT. ITSM and ITIL are following longer-term trends that we’ve seen in manufacturing, such as Six Sigma and the quality initiatives that we saw in the post-World War II period. So, this is really an indication of how IT is maturing and how to take it to a further, higher plane of customer focus and quality of service.

Joining us to look into these subjects and learn more about them as they’re now maturing and coming into some milestones -- particularly ITIL with the next version arriving in the spring of 2007 -- are executives from Hewlett-Packard's Services Consulting and Integration group. Joining us is Klaus Schmelzeisen, director of the global ITSM and security practices at HP’s Services Consulting and Integration group. Klaus is managing the solution portfolio that helps customers manage their applications and infrastructure environment. Welcome to the show, Klaus.

Klaus Schmelzeisen: Hello, everyone.

Gardner: Also joining us is Jeroen Bronkhorst, ITSM program manager within the Services Consulting and Integration group at HP and also an active participant in the ITIL version 3 editorial core team, as well as an author of the integrated ITIL process maps. At HP, he is a consulting coordinator and is helping to develop the core deliverables, as well as an ITSM reference model. Welcome to the show, Jeroen.

Jeroen Bronkhorst: Thank you, Dana. Hello, everyone.

Gardner: As I mentioned, this is part of a longer-term trend. ITIL has been around for quite some time, and ITSM -- IT service management -- is a bit newer. Perhaps you could help us understand the history. How did the demand for these services come about, and what are some of the current drivers that are bringing this to the fore for an increasing number of enterprises?

Schmelzeisen: Let me take this one on. I've been observing the area of ITSM since the early '90s, and, interestingly enough, it really started with the infrastructure piece. At that point in time, corporations were introducing lots of new technologies, especially in the networking environment. This was a change from the X.25 networks into TCP/IP environments, but it was also a time when a lot of the mainframe environments were superseded and replaced by open-system environments.

Client-server infrastructures came into place. So, there was a big need for infrastructure monitoring, which, once it was in place, was followed by IT services -- and that brought a completely different spin. That was to deliver the output of an IT organization as a service to the business. That's where ITSM started big time, and that was a couple of years later.

So, really, around the mid-'90s and since then, standards and best practices have evolved. HP has had its reference model since 1996 as a trademarked approach. Even before that, there were pre-versions of it. So, ITSM came along really in the early to mid '90s, and ITIL started around the same time.

Gardner: Do you think this is just a response to complexity? Is that what the underlying thrust was for this? Were there just more types of systems to deal with, and therefore they needed more structure, more of a best practices approach?

Schmelzeisen: Definitely. The complexity drove a lot of new and different technologies, but there were also more people in IT organizations. With more people, complexity went up, and then quickly the realization came that processes are really a key part of it. That brings you right to the heart of ITSM and then ITIL.

Gardner: So, it’s really about the complexity of the IT department itself, not necessary the technology?

Schmelzeisen: Definitely.

Gardner: Is there anything in particular about what’s going on now that makes this more relevant? Do you think the expectations for what IT provides are growing? Or is it the fact that IT is becoming much more strategic, the companies need to succeed at IT for the entire company to succeed?

Schmelzeisen: It’s actually multifold. On one side, the old challenges are still here. We see new technologies. We see the need for new services coming up. But there are also a lot of drivers that are putting pressure on the IT department. One is the ongoing topic of cost reduction. The competitiveness of an IT department is related to the efficiency and the quality of processes in place.

There is also the big theme of regulatory compliance. That is permanently on the CIO agenda, and that of all C-level management. To achieve this, you need to have all the processes very well under control. There is also the ongoing demand to provide more value to the business and to be more agile in your responses -- how quickly you can implement new environments and respond to the needs of the business. Those are really the challenges of today.

Bronkhorst: May I add to that as well, Dana?

Gardner: Please, yes.

Bronkhorst: What I also see is that there are organizations that have an increasing need to demonstrate the level of quality that they are providing to their customers. We now have an industry standard called ISO/IEC 20000 for IT service management. There is an opportunity here to become certified, which might be useful in the case where, for example, an IT organization wants to go to the outside world and provide services in the open market, or as a protection mechanism for the internal IT organization to prevent itself from becoming outsourced. This is another driver for organizations to show the value and the quality they provide.

Gardner: So, demonstrating their role and their certification helps to establish them internally against a competing approach, but also gives them more credibility if they want to take these services to an extended enterprise approach.

Bronkhorst: Yes, that’s correct.

Gardner: Now, could you help us understand ITIL, this library of best practices? We’ve got a new refresh coming up with Version 3 this coming spring. What is the impact of ITIL and how does it relate to ITSM?

Bronkhorst: Let me speak to that a little bit. ITIL originated in the '80s, actually. It was created by the British Government, which still owns ITIL. It's a set of books that describes best-practice guidance in the area of, "How do I organize the operational processes that help me manage my infrastructure and help me manage my IT services?"

When it was created in the '80s, it initially consisted of more than 30 books. They were condensed in the '90s down to eight books, and that’s basically the set that exists today. However, as we’ve seen the technology and the needs evolve, the British Government is driving a project to further condense ITIL down to five books. This will better link it to the needs of businesses today, which will then help customers to get themselves organized around the lifecycle of IT services, being able to create new services, define them, build them, test them, bring them into production, and take them out of production again once they’re no longer needed.

If I look at the traditional impact of ITIL, I would say that it is typically targeted at the operations department within an IT organization, the area where all the infrastructure and applications are maintained, and where, as a user, you would interact most on a daily basis.

What’s happening with the new ITIL is that the scope of these best practices will be significantly expanded, and ITIL Version 3 will be more focused on how you organize an IT organization as a whole -- in other words, taking an integral view of how to manage an IT service that consists of applications, infrastructure components, hardware, etc. This means it will be a much bigger scope of best practices compared to what it is today.

Gardner: You mentioned that this began in a government orientation. Is this being embraced by governments, or by certain geographies or regions? Globally, is ITIL something that a certain vertical industry is more likely to adopt? I guess I'm looking for where this is in place and where it makes the most sense.

Bronkhorst: ITIL is not particularly focused on a specific industry segment. ITIL is generic in the sense that it provides best practices to any type of IT organization. It’s also not restricted geographically, although the British Government created it initially. Over the past few years, we could almost say it has conquered the world. This is evidenced by the fact that there are local IT service management forum organizations, which some people call ITIL users groups, but it’s a little bit more than that.

These user groups focus on best practices in the area of IT service management, of which ITIL is a component. And so, across the globe, many of these user groups have started. Actually, HP was a founding member of many of those user groups, because it is important for people who use these best practices to share their experiences and bring them to a higher level.

Gardner: This new version, Version 3 -- is this a major change? Is this a modest change? I guess I'm looking for the impact. Is this a point release or a major SKU? How different will Version 3 be from some of the past approaches for these best practices?

Bronkhorst: I would classify ITIL Version 3 as a major release. I say that not because it is changing things from ITIL as it exists today; one of the basic underlying designs is that it builds on the principles that exist in ITIL today. The reason I'm saying it’s a major release is that it adds so much more to the scope of what ITIL covers, plus it completely restructures the way in which the information is organized.

What do I mean by that? In the past when you looked at the ITIL books, they were focused on topics that made sense to the people who work within the IT organization. Application Management, Infrastructure Management, Software Asset Management are all topics that make sense from an IT internal view. But, few people who look at IT from the outside care about how you do it, as long as they get the service that they have agreed on with you.

The new ITIL will be organized around five phases of the service lifecycle, starting with strategy: how do you handle strategies around services? That's followed by how you design a service, and then how you transition that service into operations. Service operation is the fourth phase, and the last phase is all about how to continuously, or continually, improve service delivery. That will be a major change, especially for people who are familiar with the current ITIL and the way in which it is structured.

Gardner: Now, this isn’t happening in a vacuum. There are many other large trends taking place and affecting IT and businesses, and these are very dependent on IT. I'm thinking of application modernization, service-oriented architecture, consolidation and unification, and adoption of new approaches with an emphasis on agile or rapid development. Does ITIL help reduce the risk of embracing some of these new trends? How should companies view this in terms of some of the other activities that they are involved with in their IT department?

Bronkhorst: ITIL basically helps you to set the context for these trends in an IT organization. In other words, if you organize yourself according to ITIL best practices, you have a solid foundation for being able to more quickly adopt new trends in the marketplace or new, evolving technologies, as you are organizing yourself to be much more agile than you have been before.

Gardner: Are there hard numbers to apply here? Perhaps, Klaus, you have some indication. When companies look at this, it seems like it makes great sense. It’s progressive. It’s maturing -- something that is fairly recent and fast-moving in organizations. But are there hard business metrics -- ROI, reduced total cost of IT, or higher productivity? When it comes time to sell this, to convince the business to invest in such things as ITSM, and they say, “Well, what’s the payback?” -- what has been the experience?

Schmelzeisen: We definitely have a lot of numbers. The usual metrics are cost reduction. For example, one of our big customers, DHL, reports 20 percent cost reduction since it implemented its IT processes. We have other cases where they are looking at a total return on investment that includes efficiency gains, as well as staff reduction and improved quality. That showed a breakeven for one of our clients, Queensland Transport, a government agency in Australia, in the second year, and an ROI of 400 percent in five years.

There are other measurements, like a decreased amount of rework, decreased response time, and how many calls you can solve on the first call. All these measurements are coming together. Alcatel-Lucent, for example, is showing very good returns in terms of quality improvements, as well as things that are much less tangible, like facilitated consolidation of old systems and their subsequent decommissioning.

So, there are very tangible measurements, like cost reduction, the number of call resolutions, and things like that -- quality improvements. And, there are less tangible ones, like how quickly you can get rid of older environments, how quickly you can consolidate, etc.

Gardner: What about the impact on users? Have there been surveys? You mentioned some of the paybacks around reduced time for resolution and better call-center performance. Has anyone that you're aware of done user-focused surveys after ITSM approaches have been adopted? Have they gauged the satisfaction of the people who are actually using IT?

Schmelzeisen: Basically, the response to the quality of service provided by the IT department?

Gardner: That’s right -- the perception and the sense of confidence in IT.

Schmelzeisen: I don’t have a precise number at hand right now, but you can easily deduce it. If you call a help desk and get put on hold, or you have to call again and your call is continuously routed to another person, and eventually you get an answer in a couple of days, what is your satisfaction rate going to be? It’s probably going to be very low.

However, if you call, and the person at the other end has all the information available about your case -- he knows what type of system you have, he knows how it’s configured, he knows what changes have been done within the last couple of weeks or months, and he knows what environment you’re working in -- and he can help you right away, I think it does great things for customer satisfaction.
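
As an aside for readers, that scenario is essentially an argument for linking the service desk to configuration data. Here is a minimal, hypothetical Java sketch -- not a depiction of HP's actual tooling -- of a desk agent pulling a caller's configuration item and recent change history before troubleshooting begins.

```java
import java.util.List;
import java.util.Map;

// A caller's system, as recorded in a configuration store, with its recent changes.
record ConfigurationItem(String id, String owner, String systemType, List<String> recentChanges) {}

// A toy configuration store keyed by the user who owns each item.
class ConfigurationStore {
    private final Map<String, ConfigurationItem> itemsByOwner;

    ConfigurationStore(Map<String, ConfigurationItem> itemsByOwner) {
        this.itemsByOwner = itemsByOwner;
    }

    ConfigurationItem lookupByCaller(String caller) {
        return itemsByOwner.get(caller);
    }
}

public class ServiceDeskSketch {
    public static void main(String[] args) {
        var store = new ConfigurationStore(Map.of(
                "j.doe", new ConfigurationItem("CI-4211", "j.doe",
                        "laptop / standard build 7.2",
                        List.of("2007-01-12 patched email client",
                                "2007-01-19 VPN profile updated"))));

        // When j.doe calls the help desk, the agent sees the system and its change
        // history immediately, instead of rediscovering it over several callbacks.
        ConfigurationItem ci = store.lookupByCaller("j.doe");
        System.out.println("System: " + ci.systemType());
        System.out.println("Recent changes: " + ci.recentChanges());
    }
}
```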

Gardner: Sure, if customers can call in and get resolution and a sense of confidence, they're less likely to go off and start doing IT things on their own at the department and lower-division level, which then requires that to be brought into a more centralized approach. Then, you lose the efficiencies of central procurement, managing licenses, and reducing redundancies.

It seems as if taking this life-cycle approach has a great opportunity to improve on efficiency from a procurement, licensing, and cost basis.

Schmelzeisen: Absolutely.

Bronkhorst: May I add one more thing, Dana? I think what we also see is that it’s not only important to measure, monitor, and manage customer satisfaction. It’s also key for a lot of IT organizations to manage and monitor employee satisfaction. This is something that we also do as an integral part of the way that we handle projects where we implement ITSM processes and technologies with our customers.

Gardner: As far as the United States market goes -- the one that I am most familiar with -- it has been a long-term trend that it’s difficult to get qualified IT people and hold onto them. They often jump from company to company. I suppose that’s part of the dissatisfaction, but it also places enterprises in a fire-fighting mode much of the time, rather than in a more coordinated and straightforward productivity approach.

Bronkhorst: It’s also key that if you implement an ITSM solution or an ITSM environment, then what you do is structure the activities within an IT organization. Those activities are performed by a piece of technology, in other words automated, or performed by people. The challenge with many of these implementations is how to configure people in a way that they execute the processes the way they were designed. Technology, you can control, but people, sometimes you can’t, and you need to do something extra in order to make that work.

Schmelzeisen: I think that’s an interesting term that’s been introduced, "configuration of people," which means training and education. As a proof point, in all of the big projects -- Alcatel-Lucent, DHL, and Queensland Transport -- we actually trained and retrained a significant number of people. With DHL, we trained about 4,000 IT professionals from a number of companies. With Alcatel-Lucent, it was training for about 1,000 employees. So, it’s a significant number of people who need to be "reconfigured," as you called it.

Gardner: In another economic efficiency area, companies and enterprises have been looking to outsource or offshore. They're looking to have better choices and more options in terms of how they acquire IT. If they have the certification process, as is being described here, in place, they can go and say, "Well, are your people certified? Are they trained? I am not going to outsource to anyone that isn’t."

It seems like this could make for more professionalism and less guessing, or risk, when it comes to outsourcing. Is that a trend you are seeing as well?

Schmelzeisen: Absolutely. It is important for organizations that want to show that they can achieve a certain level of quality to consider certification. What we did in our approach was to make sure that we use methodologies that have proven themselves in reality for customers to become certified in ITSM.

Gardner: All right. Let’s look to the future a little bit. It seems that there is a life-cycle approach to ITSM itself, and its successes can build upon one another toward a whole greater than the sum of the parts. But on the other hand, with this commoditization, if all companies are certified and all IT departments are operating in the same fashion, some companies that have depended on IT for a competitive edge might lose that. Is there any risk of reducing IT to a commodity set of services that prevents companies from somehow differentiating themselves?

Schmelzeisen: In some respects that’s a valid question, because a lot of IT services will be commoditized over time. On the other side, there is an ongoing wave of new things coming in, and there will always be leaders and followers. So, we will see more and different services being deployed. In the future, you won’t be able to differentiate just through an email service, to give you one example of an IT service.

However, it's different when it comes to other things: the way you manage your environment, how you integrate or deploy things like SOA in your environment, how you embrace new technologies, how you drive mergers and acquisitions from an IT integration point of view, and how you decide whether you are outsourcing, out-tasking, or keeping things in-house.

Those are really differentiating points for the future, and I'll elaborate a little bit on the latter one. We are moving to a full IT service provider environment. A lot of these service provider ideas really come down to what you keep in-house, where you compete with others, and where your capabilities complement others. So, they are really looking at a whole supply chain in the sense of looking at complementors and competitors. It’s becoming a value net that IT organizations will have to look at and will have to manage. That is where the differentiation will be in the future.

Gardner: Anything else for us on that subject, Jeroen, about the competitive issues and commoditization? On one hand, we're saying that commoditization happens, but it is good in that it levels the playing field for you to be innovative.

Bronkhorst: I agree with what Klaus said, especially in the area that new technologies keep coming up. You can find new things in the stores almost every day, and the more new technology that’s introduced the more complex the world becomes, especially for IT organizations that have to keep it all up and running.

The challenge for a lot of these IT departments is to make the right choices about which technologies to standardize on at what moment in time, and how to balance the cost associated with that with the quality you provide to your customer base. The real challenge is doing that in a way that distinguishes you from the world surrounding you, while being aware of the role you play in relation to your competitors and your complementors, as Klaus indicated.

Gardner: On a more pragmatic level, for those companies that are not quite into this yet, but want to be, how do you get started? How do you say, "I want to have a professional approach to ITSM. I also want to learn more about ITIL and how that could be a useful tool for me"? Should you do one before the other? Are they something you can do on a simultaneous track? How do you get started?

Schmelzeisen: You always have to look at three main components, and we have mentioned them a couple of times before. It's people, process and technology. As people are driving most of the changes it’s definitely a good idea to have at least a certain number of people trained and certified, so that they can make educated decisions on technologies and processes later on.

When it comes to the process work, this can start in parallel, but definitely requires trained people. Technology is something that is definitely very important, but technology alone will not solve the problem. What's your view on this, Jeroen?

Bronkhorst: I agree with that. For those organizations that do not know yet whether a process-oriented approach is right for them, we have a very interesting simulation game from our education department. We simulate processes in a non-IT environment and make people aware of the value it can bring to their daily job.

We don't go into any of the ITIL or ITSM specifics right away, although there is some theory in the training. It’s really a simulation, and that is what a number of organizations start with. There are others who are more knowledgeable in this area already, and they typically want to go straight into a discussion as to how to compare themselves to industry best practices and what areas to address to improve. Then, we get more into a project simulation and assessment type approach, where you basically have a discussion with each other as to where we are today and where we want to be in the near future.

Gardner: I've been thinking about this as something for very large organizations, but perhaps that’s not the right way to look at it. How does this scale down? Does it fit well with small- to medium-sized businesses, or even smaller divisions within larger corporations? What’s the shakeout in terms of the size of the organization?

Schmelzeisen: You can deploy it to very small organizations as well. There might be one significant change: the need for automation. My experience is that this grows with the size of the organization. So, if you are a 170,000-person company with a huge IT department, you ought to have automated processes. This obviously means the processes need to be standardized and well understood, and people need to be trained on it.

If you are a 10-person IT department, you still have to have processes, but probably if you are such a small group, and you might even be located in one place, you can still do this without automation, using more basic tools, even on paper. Nevertheless, the need to understand your processes and have them well defined is independent of the size of the company.

Bronkhorst: There is actually a book from one of the ITSM chapters on how to apply ITIL in a small-business environment, which I think underlines the point that Klaus is making.

Gardner: Great. Well, thanks very much. This has been an interesting discussion about IT Service Management -- making IT a professional organization with customer-focused quality of service as its goals -- and how to go about that on a step-by-step basis.

Discussing this with us today have been two executives from Hewlett-Packard -- Klaus Schmelzeisen, the global director of the Global ITSM and Security Practices at HP Services Consulting and Integration group, and also Jeroen Bronkhorst, the ITSM program manager with HP Services Consulting and Integration group. I want to thank you gentlemen both for joining us.

Schmelzeisen: Well, thanks, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions, and you have been listening to a sponsored BriefingsDirect podcast. Thank you.

Listen to the podcast here.

Podcast sponsor: Hewlett-Packard.

Transcript of Dana Gardner’s BriefingsDirect podcast on ITIL v3 and IT Service Management. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.

Saturday, February 17, 2007

Transcript of BriefingsDirect SOA Insights Edition Vol. 9 Podcast on TIBCO's SOA Tools News, ESBs as Platform, webMethods Fabric 7, and HP's BI Move

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition, recorded Jan. 19, 2007.

Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Dana Gardner at 603-528-2435.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 9. This is a weekly discussion and dissection of Service-Oriented Architecture (SOA)-related news and events with a panel of IT industry analysts. I’m your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions, ZDNet blogger, and Redmond Developer News magazine columnist.

This week, our panel of independent IT analysts includes show regular Steve Garone. Steve is an independent analyst, a former program vice president at IDC and the founder of the AlignIT Group. Welcome back, Steve.

Steve Garone: Hi, Dana. It's great to be here again.

Gardner: Also joining us is Joe McKendrick, an independent research consultant and columnist at Database Trends, as well as a blogger at ZDNet and ebizQ. Welcome back to the show, Joe.

Joe McKendrick: Hi, Dana.

Gardner: Next Neil Ward-Dutton, research director at Macehiter Ward-Dutton in the U.K., joins us once again. Hello, Neil.

Neil Ward-Dutton: Hi, Dana, good to be here.

Gardner: Jim Kobielus, principal analyst at Current Analysis, is also making a return visit. Thanks for coming along, Jim.

Jim Kobielus: Hi, everybody.

Gardner: Neil, you had mentioned some interest in discussing tools. We’ve discussed tools a little bit on the show, but not to any great depth. There have been some recent announcements that highlight some of the directions that SOA tools are taking, devoted toward integration, for the most part.

However, some of the tools are also looking more at the development stage of how to create services and then join up services, perhaps in some sort of event processing. Why don’t you tell us a little bit about some of the recent announcements that captured your attention vis-a-vis SOA tools?

Ward-Dutton: Thanks, Dana. This was really sparked by a discussion I had back in December -- and I think some of the other guys here had similar discussions -- with TIBCO Software around the announcement that they were doing for this thing called ActiveMatrix. The reason I thought it was worth discussing was that I was really kind of taken by surprise. It took me a while to really get my head around it, because what TIBCO is doing with ActiveMatrix is shifting beyond its traditional integration focus and providing a real container for the development and deployment of services, which is subtly different and not what TIBCO has historically done.

It’s much more of a development infrastructure focus than an integration infrastructure focus. That took me by surprise, and it took me a while to understand what was happening, because I was so used to expecting TIBCO to talk about integration. What I started thinking about was, "What is the value of something like ActiveMatrix?" Because at first glance, ActiveMatrix appears to be something built around JBI, a Java Business Integration implementation -- basically a kind of standards-based, plug-and-play ESB on steroids. That's probably a crass way of putting it, but you get the idea.

Let’s look at it from the point of view of a development team. What is required to help those guys get into building high-quality networks of services? There are loads of tools around to help you take existing Java code, or whatever, right-click on it, and create SOAP and WSDL bindings, and so on. But there are other issues of quality, consistency of interface definitions, and use of schemas -- more leading-edge thinking around using policies, for example. This would involve using policies at design time, and then having those enforced in the runtime infrastructure to do things like manage security automatically and help to manage performance, availability, and so on.
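[For illustration: the kind of "right-click and service-enable" tooling Ward-Dutton describes is generally built on annotation-driven code along the lines of the following Java sketch, written here against the standard JAX-WS/JSR-181 annotations rather than any TIBCO-specific API; the class and operation names are invented for the example.]

import javax.jws.WebMethod;
import javax.jws.WebService;

// Annotating an existing class is typically all the developer does; the
// tooling then derives the SOAP binding and the WSDL contract from it.
@WebService(name = "OrderStatus", targetNamespace = "http://example.com/orders")
public class OrderStatusService {

    // Exposed as a WSDL operation. The concerns the panel raises -- schema
    // reuse, interface consistency, and design-time policies for security and
    // performance -- are not captured by the annotation itself.
    @WebMethod
    public String getOrderStatus(String orderId) {
        return "SHIPPED"; // placeholder business logic for the example
    }
}

[Generating the binding is the easy part; governing the quality and consistency of many such interfaces is where the newer tooling aims to help.]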

It seems to me that this is the angle they’re coming from, and I haven’t seen very much of that from a lot of the other players in the area. The people who are making most of the noise around SOA are still approaching it from the point of view: "You’ve got all this stuff already, all these assets, and what you’re really doing is service-enabling and then orchestrating those services." So, I just want to throw that out there. It would be really interesting to hear what everyone else thinks. Is what TIBCO is doing useful? Are they out ahead or are there lots of other people doing similar things?

Gardner: TIBCO’s heritage has been in middleware messaging, which then led them into integration, Enterprise Application Integration (EAI), and now they’ve moved more toward a service bus-SOA capability. Just to clarify, this tooling, is it taking advantage of the service bus as a place to instantiate services, production, and management? And is it the service bus that’s key to the fact that they’re now approaching tooling?

Ward-Dutton: That’s how I understand it, except that it extends the service bus in two ways. One is into the tooling, if you think about what Microsoft is doing with Windows Communication Foundation. From a developer perspective, they’re abstracting a lot of the glop they need to tie code into an ESB, and TIBCO is trying to do something similar to that.

It’s much more declarative. It’s all about annotations and policies you attach to things, rather than code you have to write. On the other side, what was really surprising to me was that, if I understand it right, [TIBCO] are unlike a lot of the other ESB players. They are trying to natively support .NET, so they actually have a .NET container that you can write .NET components in and hook them into the service bus natively. I haven’t really seen that from anywhere else, apart from Microsoft. Of course, they’re .NET only. I think there are two ways in which they’re moving beyond the basic ESB proposition.

Gardner: So, the question is about ESB as a platform. Is it an integration platform that now has evolved into a development platform for services, a comprehensive place to manage and produce and, in a sense, direct complex service integration capabilities? Steve Garone, is the definition of ESB, now, much larger than it was?

Garone: I think it is. I agree with Neil. When I looked at this announcement, the first thing that popped into my mind was, "This is JBI." When Sun Microsystems talked about JBI back in 2005, this is what they were envisioning, or at least part of what they were envisioning. Basically, as a platform, it raises the level of abstraction above where current ESB thinking already was -- which at the time was confusing users, and still is, because they didn’t quite understand how, or why, or when they should use an ESB.

In my opinion, this raises that level of abstraction to eliminate a lot of the work developers have to do in terms of coding to a specific ESB or to a specific integration standard, and lets them focus on developing the code they need to make their applications work. But I would pull back a little bit from the notion that this is purely, or even predominantly, a developer play. To me, this is a logical extension of what companies like TIBCO have done in the past in terms of integration and messaging. However, it does have advantages for developers who need to develop applications that use those capabilities, by abstracting out some of the work they need to do for that integration.

Gardner: How about you, Joe? Do you see this as a natural evolution of ESB? It makes sense for architects and developers and even business analysts to start moving process logic to the ESB and let the plumbing take care of itself, vis-à-vis standards and module connectors.

McKendrick: In terms of ESBs, there’s actually quite a raging debate out there about the definition of an ESB, first of all, and what the purpose of an ESB should be. For example, I quote Anne Thomas Manes . . .

Gardner: From Burton Group, right?

McKendrick: Right. She doesn’t see ESB as a solution that a company should ultimately depend on or focus on as mediation. She does seem to lean toward the notion of an ESB on the development side -- as a platform versus a mediation system. I've also been watching the work of Todd Biske, who is over at MomentumSI. Todd also questions whether ESBs can take on such multiple roles in the enterprise as an application platform versus a mediation platform. He questions whether you can divide it up that way and sell it to two very distinct markets and groups of professionals within the enterprise.

Gardner: How about you, Jim Kobielus? Do you see the role of ESB getting too watered down? Or, do you see this notion of directing logic to the ESB as a way of managing complexity amid many other parts and services, regardless of their origins, as the proper new direction and definition of ESB?

Kobielus: First of all, this term came into use a few years back, popularized by Gartner and, of course, by Progress Software as a grand unification acronym for a lot of legacy and new and emerging integration approaches. I step back and look at ESB as simply referring to a level backplane that virtualizes the various platform dependencies. It provides an extremely flexible integration fabric that can support any number of integration messaging patterns, and so forth.

That said, looking at what TIBCO has actually done with ActiveMatrix Service Grid, it's very much on the virtualization side of what an ESB is all about, in the sense that you can take any integration logic that you want, develop it in any language, for any container, and then run it in this virtualized service grid.

One of the great things about the ActiveMatrix Service Grid is that TIBCO is saying you don’t necessarily have to write it in a particular language like Java or C++, but rather you can compose it according to the JBI and Service Component Architecture (SCA) specifications. Then, through the magic of the ActiveMatrix Service Grid, it can get compiled down to the various implementation languages. It can then get automatically deployed out to be executed in a very flexible, end-to-end ESB fabric provided by TIBCO. That’s an exciting vision. I haven’t seen it demonstrated, but from what they’ve explained, it sounds like exactly what enterprises are looking for.
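[Again for illustration: the SCA programming model Kobielus refers to lets a component be written once and then wired to different bindings and containers by the runtime. Below is a minimal sketch using the OSOA SCA Java annotations of that era; the component and interface names are invented for the example, and this is not TIBCO's actual API.]

import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

interface QuoteService {
    double getQuote(String customerId, double amount);
}

interface RatingService {
    double rateFor(String customerId);
}

// Declares the service this component offers. How it is exposed -- SOAP, JMS,
// or an in-process call -- is decided by the composite descriptor and the
// runtime, not by this code.
@Service(QuoteService.class)
public class QuoteComponent implements QuoteService {

    // Injected by the SCA container at deployment time; the target could be a
    // Java component, a .NET component, or a remote service.
    @Reference
    protected RatingService ratingService;

    public double getQuote(String customerId, double amount) {
        return amount * ratingService.rateFor(customerId);
    }
}

[In SCA, the wiring and bindings live in a separate composite descriptor, which is what allows the same component logic to be redeployed across containers.]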

It’s a virtualized development environment. It’s a virtualized integration environment. And, really, it’s a virtualized policy management environment for end-to-end ESB lifecycle governance. So, yeah, it is very much an approach for overcoming and taming the sheer complexity of an SOA in this level backplane. It sounds like it’s the way to go. Essentially, it sounds very similar to what Sonic Software has been doing for some time. But TIBCO is notable, because they’re playing according to open standards that they have helped to catalyze -- especially the SCA specifications.

Gardner: Now, TIBCO isn’t alone in some releases since the first of the year. We recently had webMethods with its Fabric 7.0. Has anyone on the call taken a briefing with webMethods and can you explain what this is and how it relates to this trend on ESB?

Kobielus: I've taken the briefing on Fabric 7.0, and it’s really like TIBCO with ActiveMatrix in many ways. There's a strong development story there and a strong virtualization story. In the case of webMethods Fabric 7.0, you can develop complex end-to-end integration process logic in a high-level abstraction. In their case, they’re implementing the Business Process Modeling Notation (BPMN) specification. Then, within their tooling, you can take that BPMN definition and compile it down to implementation languages like BPEL, which can then get executed by the process containers, or process logic containers, within the Fabric 7.0 environment.

It’s a very virtualized ESB/SOA development environment with a strong BPMN angle to it and a very strong metadata infrastructure. WebMethods recently acquired Infravio, so webMethods is now very deep both on the UDDI registry side and in providing the plumbing for a federated metadata infrastructure that’s necessary for truly platform-agnostic ESB and SOA applications.

Gardner: And, I believe BEA has come out, through its Liquid campaign, with components that amount to a lot of this as well. I'm not sure it's based on the same interoperability standards as TIBCO's announcement, but clearly I think they have the same vision. In the past several weeks, we’ve discussed how complexity has been thrown at complexity in SOA, and that’s been one of the complaints, one of the negative aspects.

It seems to me that this move might actually help reduce some of that by, as you point out, virtualizing to the level where an analyst, an architect, a business process-focused individual or team can focus in on this level of process to an ESB, not down to application servers or Java and C++, and take advantage of this abstraction.

Before we move on to our next topic, I want to go back to the panel. Steve Garone, do you see this as a possible way of reducing the complexity being thrown at complexity issue?

Garone: Yes, I do. A lot of it's going to depend on how well this particular offering does -- if we're talking about TIBCO or webMethods, though I think we were sort of focusing mostly on TIBCO this morning.

Gardner: I think I’d like to extend that to the larger trend. Elements of what IBM is doing relate to this. Many of the players are trying to work toward this notion of abstracting up, perhaps using the ESB as a platform to do so. Let's leave it at a more general level.

Garone: That’s fine -- a good point. You’re right. IBM is doing some work in this area, and logically so, although they come at it differently. Even though they have a lot of integration products, I consider them a platform vendor, which means their viewpoint is a little more about the software stack than about a specific integration paradigm.

I think the hurdle that we’ll need to get over here in terms of users taking a serious look at this is the confusion over what an ESB actually is and what it should be used for by customers. The vendors who talk to their customers about this are going to have to get over a perception hurdle that this is somewhat different. It makes things a lot easier and resolves a lot of those confusion points around ESBs. Therefore, it's something they should look at seriously, but in terms of the functionality and the technology behind it, it's the logical way to go.

Gardner: Joe McKendrick, how about you in this notion of simplicity being thrown at complexity? Are we going to retain that? Is this the right direction?

McKendrick: Ah, ha. Well, I actually have fairly close ties with SHARE, the mainframe user group, and put out a weekly newsletter for them. The interesting point about SOA in general is that TIBCO, webMethods and everybody are moving to SOA. They have no choice. They have to begin to subscribe to the standards they agree upon. What else would they do?

When we talk about what was traditionally known as the Enterprise Application Integration (EAI) market, it’s been associated with large-scale, expensive integration projects. What I have seen in the mainframe market is that there is interest in SOA, and there is a lot of experimentation and pilot projects. There are some very clear benefits, but there is also a line of thinking that says, "The application we have running on the mainframe, our CICS transaction system, works fine. Why do we need to SOA-enable this platform? Why do we need to throw in another layer, an abstraction of a service layer, over something that works fine as-is?"

It may seem archaic or legacy. You may even have green-screen terminals, but it runs. It’s got mainframe power behind it. It’s usually a two-tier type of application. The question organizations have to ask themselves is, "Do we really need to add another layer to an operation that runs fine as-is?"

Gardner: If they only have isolated operations, and they don’t need to move beyond them, I suppose it's pretty clear for them from cost-benefit analysis to stay with what works. However, it seems that more companies, particularly as they merge and become engaged in partnerships, or as they ally with other organizations and go global, want to bring in more of their assets into a business process-focused benefit. So, that's the larger evolution of where we’re going. It's not islands of individual applications churning away, doing their thing, but associating those islands for a higher productivity benefit.

Kobielus: The notion of what organizations have to examine is right on the money, but I think that’s more of a fundamental issue around SOA in general. I think the question you asked was how does something like this affect the ease with which one can do that, and will it figure into the cost-benefit analysis that an organization does to see if in fact that's the right way to go.

Gardner: Neil, this was your topic. How do you see it? Does this larger notion strike you as moving in the direction of starting to solve this issue of complexity being thrown at complexity? That is to say, there’s not enough clear advantage and reduced risk for an organization to embrace SOA. Do you think what you’re seeing now from such organizations as TIBCO and webMethods is ameliorating that concern?

Ward-Dutton: Yes and no. And I think most of my answers on these podcasts end up like that, which is quite a shame. The "no" part of my answer is really the cynical part, which is that, at the end of the day, too much simplicity is bad for business. It’s not really in any vendor’s interest to make things too easy. If you make things too easy, no one’s going to buy any more stuff. And the easiest thing to do, of course, for the company is to say, "You know what? Let’s just put everything on one platform. We’ll throw out everything we’ve got, and rebuild everything from the ground up, using one operating system, one hardware manufacturer, one hardware architecture, and so on."

If the skills problem went away overnight, that would be fantastic. Of course, it’s not about technology. It’s all of our responsibility to keep reminding everyone that, while this stuff can, in theory, make things simpler, you can’t just consider an end-state. You've got to consider the journey as well, and the complexity and the risk associated with the journey. That’s why so many organizations have difficulties, and that's why the whole world isn't painted Microsoft, IBM, Oracle, or webMethods. We’re in a messy, messy world, because the journey is itself a risky thing to do.

So, I think that what's happening with IBM around SCA, what TIBCO is doing around ActiveMatrix, and what webMethods is doing give people with the right skills and the right organizational attributes the ability to create this domain, where change can be made pretty rapidly and in a pretty manageable way. That's much more than just being about technology. It’s an organizational and cultural process, an IT process, in terms of how we go about doing things. It's those issues, as well as a matter of buying something from TIBCO. Everything’s bound up together.

Gardner: To pick up on your slightly cynical outlook on vendors who don’t want to make it too simple, they do seem to want to make things simpler from the tooling perspective, as long as that still drives the need for their runtime, their servers, their infrastructure, and so on.

TIBCO has also recently announced BusinessWorks 5.4, which is a bit more of a complete, turnkey-platform approach that a very simplified approach to tools might then lead an organization to move into. I guess I see your point, but I do think that the tooling and the simplification are a necessary step for people and process to become the focus and the priority, and that the technology needs to help bring that about?

Ward-Dutton: You’re absolutely right, Dana, but I think part of the point you made when you were asking your question a few minutes ago was around whether we see less technical communities getting more heavily involved in development work. This is the kind of mythical end-user programming thing I remember from Oracle 4GL and Ingres 4GL. That was going to be user programming, and, of course, that didn’t happen either. I do see the potential for a domain where it’s easier to change things and it’s more manageable, but I don’t see that suddenly enabling this big shift to business analysts doing the work -- just as we didn't see with UML or 4GLs.

Gardner: We’re not yet at the silver-bullet level here.

Kobielus: Neil hit the nail on the head here. Everybody thinks of simplicity in terms of, "Well, rather than write low-level code, people will draw high-level pictures of the actual business process, not that technical plumbing." And, voila! The infrastructure will make it happen, it will be beautiful, and the business analysts will drive it.

Neil alluded to the fact that these high-level business processes, though they can be drawn and developed in BPMN, or using flow charting and all kinds of visual tools, are still ferociously complex. Business process logic is quite complex in its own right, and it doesn’t simply get written by the business analyst. Rather, it gets written by teams of business and IT analysts, working hand in hand, in an iterative, painful process to iron out the kinks and then to govern or control changes, over time, to various iterations of these business processes.

This isn’t getting any simpler. In fact, the whole SOA governance -- the development side of the governance process -- is just an ongoing committee exercise of the IT geeks and the business analyst geeks getting together regularly and fighting it out, defining and redefining these complex flow charts.

Gardner: One of the points here is around how the plumbing relates to the process, and so it’s time and experience that ultimately will determine how well this process is defined. As you say, it’s iterative. It’s incremental. No one’s just going to sit there, write up the requirements, and it’s going to happen. But it’s the ability to take iterations and experience in real time and get the technology to keep up with you as you make those improvements that's part of the “promise” of SOA.

McKendrick: The collaboration is messy. You’re dealing with a situation where you’ve got collaboration among primarily two major groups of people who have not really worked a lot together in the past and don’t work that well together now.
Gardner: Well, that probably could be said about most activities from the last 150,000 years. All right, moving on to our next topic: IBM came out with its financials this week -- we’re talking about the week of January 15, 2007 -- and once again, they had a strong showing in their software growth. They had 14 percent growth in software revenues, compared to the year-ago period. This would be for the fourth quarter of 2006, and that's compared to total income growth for the company of 11 percent -- services growing 6 percent, and hardware growing only 3 percent.

So, suddenly, software -- which does include a lot at IBM, but certainly a large contribution from WebSphere and middleware and the mainframe side -- is the story. Mainframes themselves are still growing, but not greatly -- 5 percent. Wow. The poster child at IBM is software. Who'd have thunk it? Anybody have a reaction to that?

Ward-Dutton: Of course, one of the things that's been driving IBM software growth has been acquisitions. I know I’m a bit behind the curve on this one, but the FileNet acquisition was due to close in the fourth quarter. If that did happen, then that probably had quite a big impact. I don’t know. Does anyone else know?

Gardner: I guess we’d have to do a bit more fine-tuning to see what contribution the new acquisition made on a revenue basis, but software growing faster than the company's total income is, I suppose, the trend. Even so, if they’re buying their way into growth, software is becoming the differentiator and the growth opportunity for IT companies, not hardware, and not necessarily even professional services.

That does point out that where companies are investing, where enterprises are investing, and where they're willing to pay for high margins and not fall into a commoditization pattern, which we might see in hardware, is in software.

Kobielus: Keep in mind, though, that in the fourth quarter of 2006, IBM had some major product enhancements. Those happened in both the third and the fourth quarters in the software space, and those were driving much of this revenue growth. In July, they released DB2 Version 9, formerly code-named Viper, and clearly they were making a lot of sales of new licenses for DB2 V9. Then, at the beginning of the fourth quarter, they released their new Data Integration Suite. That's not so new, but rather enhancements to a variety of point integration tools that they’ve had for a long time, including a lot of the software products they'd acquired with Ascential.

Gardner: That’s the ETL stuff, right?

Kobielus: Not only that, it's everything, Dana. It’s the ETL, the EII, the metadata, the data quality tools, and the data governance tools. It’s a lot of different things. Of course, they also acquired FileNet during that time. But also, in the late third quarter, IBM released at least a dozen linked solo-product upgrades, and those were clearly behind much of the revenue growth in the third and fourth quarters for the software group. In other words, the third and fourth quarters of this past year had announcements that IBM had primed the pump for in terms of customers’ expectations. And, clearly, there were a lot of pent-up orders in hand from customers who were screaming for those products.

Gardner: So you're saying that this might be a cyclical effect -- that we shouldn't interpret the third- and fourth-quarter software growth as a long-term trend, but perhaps as a beneficial, yet temporary, "bump in the road" for IBM.

Kobielus: Oh, yeah. Just like Microsoft is finally having a bump, now that it’s got Vista and all those other new products coming downstream. These few quarters are going to be a major bump for Microsoft, just like the last two were a major bump for IBM.

Gardner: Let’s take that emphasis that you have pointed out, and I think is correct, on the issue of data -- the lifecycle of data, and how to free it and expose it to wider uses and productivity in the enterprise. IBM has invested quite a bit in that. We also heard an announcement this week from Hewlett-Packard that it is going to be moving more aggressively into business intelligence (BI) and data warehouse activities -- not necessarily trying to sell databases to people, but showing them how to extract, associate, and make more relevant the data they already have -- a metadata-focused set of announcements. Anyone have a reaction to that?

Garone: I don’t know too much about this announcement, but from what I’ve read it seems as if this is largely a services play. HP sees this as a professional services opportunity to work with customers to build these kinds of solutions, and there's certainly demand for it across the board. I’m not so sure this is as much products as it is services.

Kobielus: HP, in the fourth quarter of 2006, acquired a services company in the data warehousing and BI arena called Knightsbridge, and Knightsbridge has been driving HP's foray into the data warehousing market. But, also HP sees that it’s a major hardware vendor, just as Teradata and IBM are, and wants to get into that space. If you look at the growth in data warehousing and BI, these are practically the Number 1 software niches right now.

For HP it’s not so much a software play. They are partnering with a lot of software vendors to provide the various piece parts, such as overall Master Data Management (MDM), data warehousing, and business intelligence product sets. But, very clearly, HP sees this as a services play first and foremost. If you look at IBM, 50 percent of their revenues are now from the global services group, and a lot of the projects they are working on are data warehousing, and master data management, and data integration. HP covets all that.

They want to get into that space, and there’s definitely a lot of room for major powerhouse players like them to get into it. Also, very interestingly, NCR has announced in the past week or so that it’s going to spin off Teradata, which has been operating more or less on an arm's-length basis for some time. Teradata has been, without a doubt, the fastest-growing product group within NCR for a long time. They're probably Number 1 or a close Number 2 in the data warehousing arena. This whole data warehousing space is so lucrative, and clearly HP has been coveting it for a while. They’ve got a very good competency center in the form of Knightsbridge.

They have got a good platform, this Neoview product that they are just beginning to discuss with the analyst community. I’m trying to get some time on their schedule, because they really haven't made a formal announcement of Neoview. It’s something that’s been trickling out. I’ve taken various informal briefings for the last six months, and they let me in on a few things that they are doing in that regard, but HP has not really formally declared what its product road map is for data warehousing. I expect that will be imminent, because, among other things, there is a trade show in February in Las Vegas, the Data Warehousing Institute, and I’m assuming that they -- just like Teradata and the others -- will have major announcements to share with all of us at that time.

Gardner: Well, thanks for that overview. Anyone else have anything to offer on the role of data warehousing?

McKendrick: Something I always found kind of fascinating is that the purpose and challenges of data warehousing are very much parallel to those of SOA. The goal of data warehousing is to abstract data from various sources or silos across the enterprise and bring it all into one place. And the goal of SOA is to take these siloed applications, abstract them and make them available across the enterprise to users in a single place. The ROI formula interestingly is the same as well.

When you start a data warehouse, you’re pumping in a lot of money. Data warehouses aren't cheap. You need to take a single data source, apply the data warehouse to that, and as that begins to generate some success, you can then expand the warehouse to a second data source, and so forth. It’s very much the same as SOA.

Kobielus: I agree wholeheartedly with that. Data warehouses are a platform for what’s called master data management. That's the term in the data-management arena that refers to a governance infrastructure to maintain control over the master reference data that you run your business on -- be it your customer data, your finance data, your product data, your supply chain data and so forth.

If you look at master data management, it’s very much SOA but in the data management arena. In other words, SOA is a paradigm about sharing and re-using critical corporate resources and governing all that. Well, what's the most critical corporate resource -- just about the most critical that everybody has? It's that gospel, that master reference data, that single version of the truth.

MDM needs data warehousing, and data warehousing very much depends on extremely scalable and reliable and robust platforms. That’s why you have these hardware vendors like HP, IBM, Teradata, and so forth, that are either major players already in data warehousing or realizing that they can take their scalable, parallel processing platforms, position them into this data warehousing and MDM market, and make great forays.

I don’t think HP, though, will become a major software player in its own right. It’s going to rely on third-party partners to provide much of the data integration fabric, much of the BI fabric, and much of the governance tooling that is needed for full blown MDM and data warehousing.

Gardner: Great. I'd like to thank our panel for another BriefingsDirect SOA Insights Edition, Volume 9. Steve Garone, Joe McKendrick, Neil Ward-Dutton, Jim Kobielus and myself, your moderator and host Dana Gardner. Thanks for joining, and come back next week.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or would like to become a sponsor of this or other B2B podcasts, please feel free to contact me, Dana Gardner, at 603-528-2435.

Listen to the podcast here.

Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 9. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.