Edited transcript of BriefingsDirect[TM/SM] podcast with host Dana Gardner, recorded March 27, 2007. Podcast sponsor: IONA Technologies.
Listen to the podcast here.
Dana Gardner: Hello, this is Dana Gardner, principal analyst at Interarbor Solutions, and you are listening to a sponsored BriefingsDirect podcast. Today, a discussion about Service-Oriented Architecture (SOA) and open-source software -- how incubation projects and the development of community-based code are a big part of the ongoing maturation of SOA. We're specifically going to be discussing the incubating Apache CXF project. And here to help us profile and understand this project, its goals and its implications are two representatives from IONA Technologies.
First, we have Dan Kulp. Dan is a principal engineer at IONA Technologies, and he's been concentrating on Java and Web services technologies. He is also the community lead for IONA's open-source initiatives, and a committer on the Maven plug-ins, Apache Tuscany, and Apache Yoko projects.
Also joining us is Debbie Moynihan, the director of open source programs at IONA. I want to welcome both Dan and Debbie.
Debbie Moynihan: Thank you, Dana.
Dan Kulp: Thank you.
Gardner: As we mentioned, there’s an interesting -- and perhaps unprecedented -- intersection between the maturation of SOA as a concept, a philosophy and an approach to computing, and also the role of open source in community-based development. Many times in the past, we’ve seen the commercial development of products that are spun off into open-source projects of a similar nature. But with SOA it seems that things are different. We’ve got a fairly wide variety of projects happening simultaneously as many of the commercial vendors are also putting together products, approaches, frameworks, standards and specifications to help companies develop and manage SOA.
So tell us a little bit about the playing field for open source and SOA, and particularly CXF, which is an ESB project. First let me go to Dan. We’ve seen a variety of different products out there. Why do you think it is that SOA is different from the past, and why do we have so many open source projects simultaneous with commercial products?
Kulp: The open source projects are providing a unique opportunity for developers to get their hands dirty and learn a little bit about the field, as well as contribute back some of their ideas in a form that is very healthy for new technologies like SOA. With SOA being very new, there are a lot of ideas flying around, and people are coming up with new ideas and technologies just about every day. The open-source communities that are popping up are very good places to foster those ideas and solidify them into something that’s maybe not just usable by that particular developer’s applications, but also across a wide variety of customer- and user-driven problems.
Gardner: We're also seeing a combination of best-of-breed, more discrete components standing on their own for SOA activities, as well as more of an integrated stack or suite approach by many vendors. At the same time, we're seeing open source and commercial. So, there's a real mixture, a hodgepodge of code, components, and infrastructure for those that are evaluating and working toward SOA. Why is that? Is it that SOA is, by definition, more of a componentized undertaking? I'll throw this out to either Debbie or Dan.
Kulp: It definitely is. If you look at the goals of SOA, you may have some older legacy systems that you want to expose into your SOA, so that newer applications or newer development efforts can talk to those, but you also have all this new stuff that’s popping up. You have all these brand new AJAX applications and other applications that basically present a whole new set of challenges, a whole new set of connectivity options -- just a lot of technologies to connect all these things.
That’s why you see a bunch of these stacks producing different types of connectivity options. Obviously, a lot of commercial vendors are creating large stacks that are designed to target their customers with things that they have supported in the past, and obviously they have to bring their customers up to the newer technologies. When you look toward the open-source stuff, it’s more about connecting newer systems and newer technologies that are really hot and sexy today.
Gardner: So, a little bit of the old and the new -- the more the merrier.
Kulp: Exactly.
Gardner: I suppose that the good news is that it's "the more the merrier," and there are lots of options. But for some traditional IT folks, that many options and that much choice can be daunting and confusing. How do we look at the current landscape of best-of-breed and suites, of open source and commercial, and make some sense of it?
Moynihan: Well, one of the things we're trying to do at IONA is help users with the best-of-breed SOA infrastructure technologies that are out there in open source, and to integrate those together in a certified and tested package. This makes it easier for them to leverage multiple projects together, because there are quite a few best-of-breed approaches and a lot of different options. The other thing is that certain communities seem to attract SOA types of technologies, and we participate in each of those -- Apache, Eclipse Foundation, ObjectWeb, to name three -- and that's a good place for people to start. I think with SOA also there are a lot of loosely coupled components, and that actually lends itself well to best-of-breed, and it allows multiple vendors to participate, with each providing what they're really good at.
Gardner: Maybe we should point out here that CXF has a certain legacy and heritage that is close to IONA. Why don’t we briefly give an overview, Debbie maybe from you, on the lineage and history of CXF?
Moynihan: Sure, about a year and a half ago IONA made a proactive decision to initiate the creation of an open-source project called Celtix in the ObjectWeb community to focus on building an open-source ESB. We got that to the first milestone and got a really good foundation. It was following along the same architectural path as IONA's other offerings -- a lightweight, standards-based approach, allowing you to layer it on top of any technology that you already have in place, rather than taking a stack type of approach. At one point we wanted to grow the community. We had a lot of interest from other projects in the Apache community. And there was another project called XFire, with which we had a lot of synergies and shared goals.
That led to some discussions, and we eventually made the decision to merge XFire with Celtix and moved them over to the Apache community. We thought it made sense to start a new community with the merged project, and that evolved into CXF. Dan can go into a lot more detail about where we are with the CXF project, but we’ve taken what we had with Celtix and XFire and brought the best of both of those together. And we continue to make a lot of progress there.
Gardner: One thing I want to understand is why open source is a strong approach for the development of certain products, in this case SOA-type products. As I said, I looked at the incubator page for CXF and I see the goals are, "support for standards," "multiple transports," "bindings," "data bindings," "formats," "flexible deployment," and "support for multiple programming languages."
It seems as if, by nature, an open-source approach to SOA has advantages. A commercial vendor and private-code vendor might have some of these goals as well, but they are also going to be mindful of their heritage and their legacy. Is there, from an open-source community level, an advantage to developing an ESB, for example, in a more inclusive way -- to create an ecology, to create a community, where people will contribute? And let me throw that out to Dan.
Kulp: Oh, definitely. There's a lot of functionality that ends up in a lot of open-source projects that really wasn't a priority -- or even sometimes a consideration -- when those projects were originally created by the various vendors that pushed to get these projects started. One of the things about closed-source projects is that anything that's developed is specific to that vendor's customers. If their customers have various requirements, that's what gets developed. They're trying to get new customers. That's always a goal. But if one of their customers says, "Hey, I need this now," a lot of other things don't get developed.
Whereas one of the goals of an open-source community is to bring new developers in. And a lot of times those new developers have different priorities or different ideas of what an ESB should do. They can provide a lot of expertise and new and fresh ideas that can make the open-source project a bit different than closed source, and provide some unique features.
Gardner: I suppose one of the tricky parts about any private source or closed source or commercial development and requirements phases is where we draw the line. We’ve got a deadline to meet, there are only certain things we can do within that timeframe, and those are going to be dependent upon our business goals. That’s fine -- there's nothing wrong with that. But it’s a different beast when you’re developing your requirements within an open-source ecology of contributors.
Kulp: Definitely. One of the most fascinating things about the open-source community is something may not be my number-one requirement. But if it’s one of the other developer's number-one requirements, they’re more than welcome to work on it and get it done. So in my mind it would have slipped. But in his mind it would have gotten done. It’s a fascinating environment.
Gardner: I suppose it’s also a two-way street. If there’s an ecology that contributors can bring into these definitions and capabilities, they can have many more integration points, many more approaches of how this relates to different implementations in the field. That’s one direction. The other direction is that developers can say, "Listen, we want to be able to work with what this project produces -- and we happen to be of a certain flavor of development" ... like, "I am a Spring developer" or "I am a J2EE developer."
Tell us a little bit about why this makes sense for developers. They can set this project up so that they can better take advantage of what it does, right?
Kulp: Right. You bring up a good example with the Spring stuff that you just mentioned. Originally, when we were doing a lot of the Celtix stuff, we were still in ObjectWeb, and Spring wasn't really one of our priorities. From IONA's standpoint, it's not something that we'd really seen much of with our customers. But as part of the merge with XFire, that user base was a little different than the Celtix user base.
Priorities got shifted, and we started developing more flexible models for deployment that allow the use of Spring, if you’re a Spring person. If you’re not a Spring developer, we have other options that are available to deploy your applications in a very different format. That provides a lot of flexibility when you get that broad community throwing ideas out there.
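To make that concrete, publishing a service through CXF's Spring integration can come down to a few lines of XML using the jaxws:endpoint element. The implementor class and address below are hypothetical; this is only a sketch of the shape such a configuration takes.

```xml
<!-- Hypothetical Spring bean file wiring a CXF JAX-WS endpoint -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jaxws="http://cxf.apache.org/jaxws"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://cxf.apache.org/jaxws
           http://cxf.apache.org/schemas/jaxws.xsd">

    <!-- Publishes demo.HelloServiceImpl (an example class name) at /hello -->
    <jaxws:endpoint id="helloEndpoint"
                    implementor="demo.HelloServiceImpl"
                    address="/hello"/>
</beans>
```

Developers who prefer not to use Spring can publish the same service programmatically instead, which is the kind of alternative deployment option Dan describes.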
Gardner: I suppose that many times, from a commercial perspective, you’ll get the vendor saying, "Here are the tools we’re going to use."
Kulp: Exactly.
Gardner: Let's dig a little more deeply into Apache CXF. Explain what it encompasses. I referred to it earlier as an ESB, but with this expanding definition set, it seems it might be larger than that.
Kulp: There are definitely a lot of features being added that target a variety of users and use cases that go well beyond our original definition of what CXF was going to be. If you take a look at that Apache incubation project page, there's a list of stuff. It was the original design of what this project was going to be. It's going to have multiple bindings and multiple transports. We do have that, and that's good. But with our growing list of cool features that developers keep coming up with, we've been adding all these multi-deployment capabilities. We've been adding a lot of these new WS specs like WS-Addressing and WS-Reliable Messaging.
Some of them weren’t even really anywhere close to final specs when we started the Apache CXF project. It’s a never-ending battle of more ideas coming at us, which is great -- there are no complaints about that. But there’s definitely a lot of work to be done and a lot of new ideas. So, it’s a growing project with a growing list of features.
Gardner: So we’re getting one of those good-news, bad-news things, right? The good news is that we’ve got a lot of people interested, and they want lots of different things. The bad news is that we've got to try to address all those different things.
Kulp: Right. But, being open source, if we don't have time to do something and they want to devote some resources to it, we definitely welcome that.
Gardner: Who are the primary contributors and innovators within the CXF project? Obviously, we have IONA involved, but are there any others that you can share with us?
Moynihan: We also have Envoi Solutions participating. We have individuals from various Apache projects, like Geronimo, who are also contributing, because they would like to integrate their projects with CXF. At Apache, it's really more about the individual than a particular corporation.
Gardner: There seems to be quite a bit of other ancillary development in terms of Yoko, Tuscany, and ServiceMix that bring a whole other family of contributors into it. Right?
Kulp: Definitely. One of the other neat things about Apache is how many top-level projects they have. It's in the 30s now, and a lot of the top-level projects have subprojects. So, there's a lot of varied functionality across different projects. One of the things that we're trying to do, building on Apache's success, is reach out to some of those other communities, get involved with them, and help them get involved with CXF. Hopefully, we can work together to figure out the gaps that we have. Maybe we can use some of their technology, and they can use some of the CXF stuff.
That’s one of the fascinating things about Apache. There’s a lot of neat stuff there.
Gardner: Going back to that earlier point about so many choices in the marketplace today, if I am a chief technology officer or enterprise architect and I am moving toward SOA, I am going to be evaluating projects and products and looking at best-of-breed versus suite and so forth. I would want to know the flavor of CXF as an ESB. How does it fit and compare to others? What characterizes it as an ESB? Is it designed for high performance or low latency? What is it designed for?
Kulp: CXF is really designed for high performance -- request-response style interactions, one-way asynchronous messaging, and things like that. It's really designed for taking data in from a variety of transports and message formats, such as SOAP or just raw XML. If you bring in the Apache Yoko project, we have CORBA objects coming in off the wire. It basically processes them through the system as quickly as possible, with very little memory and processing overhead, and gets the data to its final destination, whether that's another service or user-developed code, whether it's in JavaScript or JAX-WS/JAXB code.
That’s the goal of what the CXF runtime is -- just get that data into the form that the service needs, no matter where it came from and what format it came from in, and do that as quickly as possible.
Gardner: So, breadth, versatility, high performance -- are these adjectives that we would use here?
Kulp: Oh, definitely, yes.
Gardner: What are some others?
Kulp: Flexibility. The CXF runtime provides a lot of flexibility. We have a lot of interceptor points where core developers, who really know what they're doing, can intercept a message at various points as it's going through the system to do some partial processing or validation. We have some work in progress on things like partial message encryption of the XML. That's done via some of these flexibility touch points, where developers can just take a part of the message and say, "Okay, we are going to encrypt this." So, flexibility is another big word that's important from a developer's standpoint.
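As a sketch of those interceptor touch points: a custom interceptor in CXF extends AbstractPhaseInterceptor and names the phase of the message chain where it should run. The class below is hypothetical and assumes CXF's interceptor API is on the classpath; it illustrates the pattern rather than any production code.

```java
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Hypothetical interceptor that inspects each inbound message before the
// service implementation is invoked.
public class AuditInterceptor extends AbstractPhaseInterceptor<Message> {

    public AuditInterceptor() {
        // The phase determines where in the interceptor chain this runs.
        super(Phase.PRE_INVOKE);
    }

    public void handleMessage(Message message) throws Fault {
        // Partial processing or validation of the in-flight message goes
        // here -- for example, logging the request URI.
        Object uri = message.get(Message.REQUEST_URI);
        System.out.println("Inspecting message for: " + uri);
    }
}
```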
Gardner: So, we have this rich canvas, and we’ve got lots of different oils and paint that we can apply to it and come up with our own unique painting, if you will, for various use-case scenarios. I'm curious as to what vertical, either industries or use-case scenarios, you think that this level of flexibility and versatility is best designed for? Is this something that an ISV will gravitate to? Is this what a software-as-a-service (SaaS) organization should be looking at? If I'm a business applications systems integrator and I'm looking to pull these together in an SOA, what’s the best fit for this as it is evolving in the current incubation process?
Moynihan: Well, we've definitely seen interest from a few different types of developers and from various vertical industries. IONA traditionally has had a lot of customers in telecommunications, financial services, and manufacturing. Our engineers bring a lot of those requirements to the project, but we have also seen interest from a lot of different industries. So I wouldn't say it's specific to a particular industry. From a developer perspective, what's nice about the technology is that it's really flexible, as Dan said, in that there are multiple programming models it can apply to. Also, from a deployment perspective, if you are a developer who is implementing it, you can deploy it in a lot of different technology environments.
Whether you like Spring or you are really focused on application servers and have a deep knowledge of JBoss, you can leverage CXF within any of those types of environments. I do think there is a huge opportunity for ISVs to look at this as something that they could include within their products. That’s something that we have seen with Celtix. So definitely that will be interesting. I hope that we see a lot of people joining and providing feedback on the types of requirements we need to continue to develop for that market as well.
Gardner: I suppose the CXF project has the performance characteristics and flexibility that can be taken in a number of directions, and it’s up to the market where they want to take it.
Kulp: Exactly. Obviously the developers who are contributing have a large say in that. But, if a user is going to get more involved, we definitely encourage them to start looking at our mailing list and our Website and start providing extra suggestions of where they think we are deficient or lacking something that they need, and we’ll address it.
Gardner: I suppose that’s another benefit of open source -- you don’t have a big SKU drop to develop to. It’s an ongoing journey, right?
Kulp: Exactly. It's not big leaps like you have with commercial versions, which come out every six months with big changes. With open source, if somebody wants something committed today, they're able to download the source, build it themselves, and have a solution for themselves today. They wouldn't have to wait two or three months for a commercial vendor to spin the whole release and do all of the stuff that's required for a release.
Gardner: For those folks who now have their appetite whetted a little bit and want to learn some more as to why this might be applicable for their needs, can we get into a little bit about what’s technically going on in terms of inclusiveness and adaptation to what’s new and interesting in the market these days? There has been a lot of interest around rich Internet applications (RIAs) and Web 2.0-types of interfaces and applications. Dan, tell us a little bit about what’s going on in that direction.
Kulp: We've been working on some new features that we haven't had in some of the previous generations of IONA's SOA tools. Some of the main ones are the REST integrations. If you are not familiar with the Web 2.0/REST stuff, AJAX is the popular technology that actually uses it. It's a different style of interaction, where you do "gets" to retrieve your XML data. Then it's processed a little bit on the client side and a little bit on the server side. There's a lot of scripting going on in the marketplace today. There are a lot of JavaScript developers working with AJAX or doing other types of JavaScript, even on the server side. So, a lot of what we've done with CXF is to give those developers some new tools to produce applications.
We’ve created a set of REST annotations. If you have existing Java services that you want to expose via REST capabilities, your AJAX clients can talk to them. You can annotate the code with these REST annotations, and CXF will pick up on them and do the REST or the SOAP interactions. We also provide support for writing your SOA applications in JavaScript. JavaScript is one of those neat interpreted things for rapid development, where you avoid some of that compile-repackage-redeploy cycle.
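For example, with the standard JAX-WS annotations from javax.jws, a plain Java class is enough for CXF to expose as a service; the REST annotations Dan mentions follow a similar annotation-driven style. The class and method names here are made up for illustration.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// A plain Java class becomes a service endpoint through annotations;
// the runtime reads them and handles the wire-level details.
@WebService
public class GreetingService {

    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```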
Gardner: It may be the most popular language in the history of development, right?
Kulp: The way the Web is today, maybe, yes. A lot of people out there are familiar with JavaScript. Having that capability built into the product opens up the project to a whole new breed of developers because we are not restricting it, saying, “Okay, you must know Java JDK 1.5 with JAX-WS."
We do support that too -- we’re not discounting that, but we’re not restricting you to that level of development. With the JavaScript capability, it’s a whole new breed of developers that this opens up to. We have some plans in place for adding things like Jython, and JRuby, and other scripting to broaden that and get more of those people in to open up the opportunities for a wider range of developers.
Gardner: How about specifications and standards? Has there been some more adaptation to what’s being asked for? I guess I’m thinking of some of the WS-* types of specs.
Kulp: Definitely. When we first started the Celtix project at ObjectWeb, the Java API for XML Web Services (JAX-WS) 2.0 spec wasn't even finalized. Since then it's been finalized, and there's another revision coming up shortly that's in final draft. Then there are a lot of new Web services specs such as WS-Reliable Messaging, WS-Security, and WS-Policy. A lot of new specifications have come out in the last year and a half that provide a standard way of doing a lot of the things that we are trying to do in CXF.
CXF is trying to use those standards whenever possible. Right now in Apache CXF we do support JAX-WS and are working on trying to get it to pass the [compliance test]. It doesn’t right now, but it’s definitely a priority. We are supporting WS-Reliable Messaging, WS-Addressing and WS-Policy. We have started some discussions around WS-Context and WS-Transactions. So, there are a lot of Web service specifications that we are keeping our eyes on and following. As they evolve and finalize, we’re basically trying to get them into CXF.
Now, that said, a lot of those specifications that I just mentioned may or may not be finalized. All this Web service stuff evolves on a day-to-day basis, and it’s actually a lot of work to keep track of those. But from a user standpoint the fact that the project’s doing that, instead of the user doing it, is probably a good thing.
Gardner: Is it fair to predict that these things, when they are ready, would find themselves in CXF before they’d find themselves in commercial ESBs?
Kulp: Potentially, yes. With the commercial product there are release cycles of six months or a year, or something like that. A lot of commercial vendors try to figure out what’s going into a particular release six months before it’s even released. So if those Web services specs aren’t finalized six months before release, they may not make that release cycle. In an open-source environment, where you have a constantly evolving development, as soon as these things get finalized, it can be made available almost immediately.
Gardner: I suppose Eclipse is the most popular "belle at the ball" these days, as is the SOA Tools project that's going on there. What would be the relationship between what's going on with SOA Tools and Eclipse and the CXF incubation at Apache? How about you, Debbie?
Moynihan: The SOA Tools project is geared to provide a broad spectrum of tooling based on the Eclipse platform. It provides a lot of different capabilities for building out SOA services and other types of infrastructure as well. Within that project there is a component that consists of tools that work with CXF specifically. Right now we have JAX-WS tooling, and we’ll continue to expand the tooling part of the SOA Tools project to work with the different capabilities that were built out in CXF.
What’s nice about the SOA Tools project is that it has a lot of other capabilities that are integrated -- like orchestration for BPEL, process modeling using the BPMN standard, and things like building up Service Component Architecture (SCA) tooling and other complementary capabilities; as you have talked about earlier, bringing together the best-of-breed.
Gardner: I suppose another thing we need to look at is the relationship between CXF and the IONA commercial products. I'm thinking of Artix and some of your other offerings. For those people listening who are trying to understand that, can you lay out the land in terms of the relationship between these two? What are your business goals by having such a large active role in the CXF project?
Moynihan: We would like to offer what our customers are looking for, and our customers are looking to leverage the latest standards in open source. They also have some other needs, which are not being developed in open source. So we have a dual strategy, where we are doing open-source development and also company-developed, commercial development. The two are very complementary from an R&D perspective, in that we'd like to leverage the CXF technology within our commercial offerings.
Also we’d like for all of the Artix plug-ins to work with the open-source technology and to interoperate with the Artix runtime. From a development perspective, we may choose over time to move some of those capabilities into open source. We develop everything so that it can be moved into open source, if and when we decide that it makes sense.
Kulp: This comes back a lot to the flexible nature of the Apache CXF project. One of the design goals of Apache CXF, as I mentioned earlier, was to provide a lot of touch points for plugging in new functionality or to extend the system to customize a little bit. Part of what IONA is doing is using some of those touch points to provide more unique solutions for IONA-specific problems or problems that the IONA customers have been dealing with. The flexibility of Apache CXF provides a lot of capabilities to do that.
Gardner: Okay, who should be interested in CXF in terms of a deploying organization? We talked a little bit about the use-case scenarios. How do you get started, and whom would the people be to do that -- I guess a champion or a maven? Who is the decision-maker that this needs to be appealing to? And then how would those people start taking advantage of what CXF is offering?
Moynihan: What's nice about CXF is that it's small, flexible, and can be consumed in a lot of different ways. Individual developers can actually be the champions, and you see it accepted in their projects. So one group of key users would be corporate developers -- people who are working within businesses, building applications, and wanting to service-enable them. On the other end are people who have applications that need to connect with and consume those new services that are being created. There are also a lot of systems integration firms out there who do this type of work.
Those will be the big ones. Then over time you may see more adoption of a particular standard across the organization as people learn about the flexibility and high-performance of the CXF project.
Gardner: To you, Dan ... I suppose if you are downloading an open-source component as a developer, you might be used to things a little less daunting or substantial than an ESB. Or am I reading this wrong? Perhaps there is a perception out there that needs to be adjusted -- that it's okay for me as an individual developer to download an ESB? Do you expect that to be the case? Or is this more of a larger architectural undertaking?
Kulp: It's definitely good to be able to have a developer download it and get their feet wet immediately. Apache CXF does provide a lot of getting-started-type samples that walk you through the first steps of getting up and running as quickly as possible.
We try to provide a lot of capabilities for developers to get started very quickly with something that's simple, but at least get them started, and then from there to grow their capabilities slowly, and get them more into the advanced features. But you have to start small, and we’re trying to provide samples that will help you do that.
Gardner: That might be something that’s in the best interest of developers for their career. We're certainly seeing a lot of interest in SOA. One of the big question marks in looking at the landscape for SOA is whether there’ll be sufficient manpower or human resources for moving into the role of a SOA architect. One of the best trajectories toward that is from the developer perspective. They might have to learn a lot about a specific business, a domain, and the ins-and-outs of what’s going on in that business. But I would imagine that there are some significant career opportunities for folks who were able to take the developer role, embrace understanding of such products as CXF, and then take that into a business. Do you have any feedback on that in terms of the human resources potential?
Kulp: As in almost all cases, the more you learn, the more potential you have. So, if you can dig into various products and learn more capabilities -- with CXF supporting a bunch of the new Web services standards -- it does give you the opportunity to start using JAX-WS, WS-Addressing, WS-Reliable Messaging, and REST -- all these neat buzz words that you hear on a day-to-day basis.
For developers that aren’t familiar with these things, it does give an opportunity to learn about them and use them in something that’s relatively easy. Expanding their knowledge is always a good thing from a career perspective. The more you know, the better off you are.
Gardner: It's hard to argue with that. Well, we've had a good discussion on the Apache Incubator CXF project, an open-source ESB. We have been talking with two representatives from IONA Technologies: Dan Kulp, principal engineer, and Debbie Moynihan, director of open-source programs.
You've been listening to a BriefingsDirect sponsored podcast. I'm your moderator and host, Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening.
Transcript of Dana Gardner’s BriefingsDirect podcast on SOA and open source community development. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.
Monday, April 30, 2007
Transcript of BriefingsDirect Podcast on ALM 2.0 and Borland's Open ALM Approach to Development as a Business Process
Edited transcript of BriefingsDirect[TM/SM] podcast with Dana Gardner, recorded April 3, 2007. Podcast sponsor: Borland Software.
Listen to the podcast here.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about the development of software as a managed business process, about seeking to gain more insight, more data and metrics, and more overall visibility into application development -- regardless of the parts, the components, the platforms, or the legacy. It’s really about gaining an operational integrity view of development, from requirements through production, and bringing it all into a managed process.
To help us through this discussion of Application Lifecycle Management (ALM) and the future of ALM, we have with us Carey Schwaber, a senior analyst at Forrester Research. Welcome to the show, Carey.
Carey Schwaber: Thank you.
Gardner: We're also going to be talking with Brian Kilcourse. He is the CEO of the Retail Systems Alert Group, and a former senior vice-president and CIO of Longs Drug Stores. Thanks for joining, Brian.
Brian Kilcourse: Thanks, Dana.
Gardner: Also joining us, an executive from Borland Software, Marc Brown. He is the vice president of product marketing. Welcome, Marc.
Marc Brown: How are you?
Gardner: Doing well, thanks. We want to talk about the "professionalism" of development. Some people have defined software development as an art or a science -- sometimes even a dark art. And to me that means a lack of visibility and understanding. Many times the business side of the house in an organization that’s doing software is a little perplexed by the process. They see what goes in as requirements, and then they see what comes out. But they often don’t understand what takes place in between.
I want to start off with you, Marc. Tell us a little bit about ALM as a concept and what Borland Software, in particular, has been doing in terms of evolving this from mystery into clarity?
Brown: Dana, that’s a great question. What Borland has been doing over the last several years is really focusing on how to help organizations transform software delivery or software development into a more managed business process. We think this is critical. If you look at most businesses today, IT organizations are expected to have very managed processes for their supply-chain systems and for their human resources systems, but when it comes to software delivery or software development, as you mentioned, there is this sense that software is some sort of an art.
We would really like to demystify this and put some rigor to the process that individuals and organizations leverage and use around software delivery. This will allow organizations to get the same predictability when they are doing software as when they are doing the other aspects of the IT organization. So our focus is really about helping organizations improve the way they do software, leveraging some core solution areas and processes -- but also providing more holistic insight of what’s going on inside of the application lifecycle.
Gardner: In January of 2007, you came out with a new take on ALM. You call it "Open ALM." I am assuming that that is opposed to "closed." What is it that’s different about Open ALM from what folks may have been used to?
Brown: Well, getting back to helping organizations with software development, it's Borland’s assertion that we need to do it in the context of how organizations themselves have developed or invested their own technology stack over time. So for us the way that we can help organizations apply more management and process-rigor to the application lifecycle is to give them insight into what’s going on. We do that through providing metrics and measurements, but in the context of their technologies, their processes, and their platforms. That is versus proposing a new solution that causes them to do a rip-and-replace across each of the vertical slices of the software development lifecycle.
Gardner: It sounds like an attempt to give the developers what they want, which is more choice over tools, new technologies -- perhaps even open-source technologies. And at the same time you give the business more visibility into the ongoing production and refinement of software. Is that a fair characterization?
Brown: It sure is. What we are all about with Open ALM is providing a platform that gives ALM practitioners the tools, processes, and choices they need or are skilled in, and that then provides the transparency across the lifecycle to collect the metrics necessary for the management team to actually manage those resources more predictably.
Gardner: Okay. My sense is that there are more options for companies when it comes to the tools and the utilities that they bring into the software development process. Let’s take a look at the state of the art of ALM. Carey Schwaber, can you give us a bit of an overview about ALM? And am I correct in assuming that there are more parts and therefore the potential for more complexity?
Schwaber: You're right. There certainly are. ALM isn't just about developers. It’s really about all the roles that come together to ensure that software meets business requirements -- from the business analyst, to the architect, to the developer, to the project manager, to the tester.
It just goes on and on. And it feels like every year we end up with more specialized development teams than we had the year before. Specialization is great, because it means that we have more skilled people doing jobs, but it also means that we have more functional silos. ALM is really about making sure that every one of those silos is united, that people really are marching toward the same goal -- to the same drumbeat. ALM is about helping them do that by coordinating all of their efforts.
Gardner: Are there some mega trends going on? It seems to me that offshoring, globalization, outsourcing, and business process management (BPM) add yet another layer of complexity.
Schwaber: There aren't many trends that you can’t tie back to a greater need for ALM, where we have so many things going on that are increasing the degree to which our software is componentized. SOA is just one way in which our software is more componentized. Dynamic applications are also leading toward more componentized software, and that really means that we have more pieces to manage.
So in addition to functional silos, we've also got technology silos where we have a front-end in .NET, a back-end in Java, and maybe we're using a BPM tool to create the entire composite application. There are just so many ways that this gets more and more complex. Then, in addition to managing roles, you also have to manage all of these different components and their interdependencies.
Gardner: A major component of ALM is managing complexity. You came out with a report in August of 2006 that coined the term "ALM 2.0." What did you mean by that?
Schwaber: That’s actually about something that we see as a shift in the ALM marketplace. In the past, vendors have collected ALM solutions over time by acquiring support for each role. They’d acquire a tool that supported business analysts in the work that they do. Then, they’d acquire a tool that supported testers or test leads and the work that they do. They’d integrate the two, but that integration never ended up being very good. That integration is where ALM comes in. ALM lives in coordinating the work that the tester, the business analyst, and all the other roles involved really accomplish, to make sure that software meets business needs.
What we have seen is a trend where vendors are stopping the accumulation of piece-parts of ALM solutions, and starting to design platforms for ALM that really integrate any tool that the company happens to be using with something over the platform that provides ALM almost as a service to the tools. People have the option of choosing a tool for their business analysts from one vendor, a development environment from another vendor, and a testing tool from a third vendor. They are plugging into the same ALM platform, knowing that they'll all work together to ensure that those roles are in harmony -- even if the vendors that produced the tools that support them are not in harmony.
Gardner: So even if we take a platform approach to ALM, it sounds like what you are saying is that heterogeneity -- when it comes to the moving parts of application and software development -- is no longer necessarily a liability, but if managed properly, can become an asset.
Schwaber: That is definitely one of the goals of ALM 2.0, to assume that integrating lots of different functional silos shouldn’t require that we go to a single vendor, because that’s not always possible. There may be a best-of-breed tool in a certain area that happens to be from a vendor that doesn’t have great support for the rest of the lifecycle. So the vision with ALM 2.0 is that you shouldn’t have to make that trade-off. You should be able to choose best-of-breed and integration.
Gardner: I assume then that this also affects people, IT, and process. How would an enterprise that buys into this vision get started? Do you have to attack this from all angles, or is there a more pragmatic starting point?
Schwaber: Hopefully the vendors will make it easy on you and you won’t have to buy everything in one fell swoop. The whole idea is that if you purchase one tool from a vendor that has an ALM 2.0 platform, the platform essentially comes with that. Any tools that happen to plug in to that are ones that also enable better and more flexible ALM, where the platform provides services like workflow, collaboration, reporting, and analytics. Maybe even some more infrastructure things like identity management or licensing could be in the platform, and those would be available to any tools that wanted to consume them and were designed to consume them.
Gardner: Let’s go to Brian Kilcourse. Brian, you have been in the field as a CIO. Is ALM 2.0 vision-creep or is this real-world, in terms of how you want to approach software development?
Kilcourse: It sounds very real-world to me. As most CIOs have done, I spent untold amounts of money trying to turn the software development process from an artistic activity to an engineering activity. There were a bunch of good reasons for that. One of them is that commercial computing is now over 60 years old. And one would think, at this point, that we would have figured out a way to commoditize it and make it more reliable.
But it still remains, even after this long period of time, that software development is easily the most unreliable part of the whole value delivery equation that the IT department brings to the organization. So in broad-brush strokes, it makes great sense. The other thing that is important to underline, as Carey already mentioned, is that people like me have already spent a lot of money on tools. And just because there’s a new and better definition of how to approach those tools doesn’t mean that I am going to throw everything away.
Organizations that have had quite a bit of time to get these tools embedded into their practices may have silos of expertise that aren’t going to be easily displaced. All of these things argue against stopping your business while you figure out a better way to develop software. What is important is that we desperately need a way to track a development effort from the initial conception of the requirements, all the way through to delivery, production, and beyond.
There has to be a way to do that, and it has to be an overarching process that we can observe, measure, and report on. To that end it requires that all of these tools, whatever they are, be kept in sync, so that we can understand it and we can make it evident to the business -- so that the business can know that they are getting the right value for the right dollars. That’s always one of the biggest challenges that any CIO has -- how to show value.
Gardner: I suppose there’s been a kind of tension between sufficient openness and sufficient integration, and that they play off of one another. Is there anything about the state of the art now, where reaching this balance between sufficient openness and the ability to integrate and manage, comes into some sort of a harmonic convergence? Is there anything different about ALM today?
Kilcourse: The fact that we are talking about ALM 2.0 is a big step in the right direction. In our business applications we need to be able to integrate at the information level and the data level, even if they are different code sets or physically different databases. From the business perspective we need to come up with one coherent answer to any kind of a business question. No matter what the toolsets are, we have one way to see them from a business perspective. I think that’s very encouraging.
We know from our business application stack that this is possible. So if it’s possible for the business, why isn’t it possible for the IT organization? You can call this a "cobbler’s children" problem. Why don’t we have for ourselves what we promise to deliver to our business associates?
Gardner: Let’s take that back to Marc Brown at Borland. I assume that your goals with Open ALM are similar to the goals envisioned in ALM 2.0, and that you want to help CIOs get that visibility to demonstrate value. Do you see something new and different in the marketplace now about reaching this balance between openness and integration?
Brown: You know, I do. To extend what we were just talking about, one of the core differences in what organizations are talking about today versus 10 years ago is that in the past we talked a lot about making sure we had optimized role-based solutions. We talked a lot about supporting specific activities and specific roles in a lifecycle. What we are finding today when we talk about the application lifecycle -- and I think Carey brought this up -- is that the real critical piece is understanding the core processes that drive the overall lifecycle activities and assets between the individuals that make up a software delivery team.
So for Borland, one of the unique aspects of the way we are approaching this is that we are really focused on process-driven integration from a technology perspective. We're looking at the individual processes, such as portfolio and project management or requirements definition and management -- understanding those processes, bringing the technologies to bear to support them, and providing the integrations between the individuals that support the horizontal software processes.
The other aspect is understanding that we need to do this, not just in a constrained set of tools that Borland brings to market, but also in the context of the tools that customers want to use and leverage. That means Borland technologies, other third-party technologies, and open-source technologies.
Gardner: I suppose one of the hurdles to getting this visibility in the past was that a lot of these components, tools, and testing environments have very different technologies and formats for how they apply and transfer data. What is it that Borland has done with Open ALM to allow the majority of these parts to work together? Is this about building modules and components? What does it take to get these things to actually be herded, if you can use the analogy of trying to herd cats?
Brown: The starting point is understanding that we need to deliver a platform based on an ALM meta-model, something that we can utilize and leverage to define all the various activities and assets that flow through the application lifecycle. Then we need to provide a set of core services that will use that meta-model and will support add-ons that are lacking today. One of the critical things is providing more comprehensive ALM-centric metrics and measurements that span the lifecycle -- versus being very vertically focused for a particular role and job. A lot of this is based on having an ALM data description that represents all the activities and data that are going to be passed through a lifecycle.
Gardner: So there’s an immediate tactical benefit of getting the data from the various parts, and there’s a larger strategic value of then analyzing that data, because you've got it in the holistic process-driven environment, a common environment. What sort of data and metrics do you expect companies to be able to derive from this, and how can they instantiate that back into the process?
Brown: The critical thing that businesses will be able to do is demystify what software development really is. It's about removing the "black box," and having data consolidation or aggregation so they can in fact measure what’s going on. Then they can determine what areas of the processes are working, and what areas potentially are bottlenecks or deficiencies. They can utilize the data that’s being collected across the ALM, and filter that out to the broader business intelligence activities that the IT business is doing to see what’s actually working, and what’s not working, within the IT organization.
Gardner: We're going to be able to give non-IT people some real visibility into timetables, quality assurance curves, dates for completion, and that sort of thing, which to me seems essential. If you are putting a new product or a new service in the market, you are going to be ramping up your marketing, ramping up inventory and supply chain, and are going to be looking into manufacturing, if that's the basis of the product. You really need to coordinate all these things with development, and that has been haphazard.
Am I reading more into this, or do you really plan to be able to give non-IT people these dials, and this kind of a dashboard by which to run their entire business -- but with greater visibility?
Brown: That is exactly what we are proposing. Borland is very committed right now on Open ALM to deliver a platform that allows organizations to leverage their own configured processes and technologies to gain the insights necessary to really start having confidence in what they are doing. That confidence is going to be increased by providing them the tools and technologies so they can track, measure, and improve their processes.
Gardner: Let's take it back to Carey Schwaber. Carey, in your analysis of the market is there a potential for a significant productivity boost by bringing visibility into software development and activities into the larger context of business development and go-to-market campaigns?
Schwaber: I think there is. There is a great deal of redundancy -- development efforts repeating work that has already been accomplished, or redundant documentation. Even when it’s not redundancy, the problem is that people are pursuing different goals -- when you have testers who are testing against out-of-date requirements, and the business analyst wants them testing against the newer requirements. We've got the problem of an overlap of efforts. Then we've got the problem of misaligned efforts. Together those really eat away at your productivity and waste precious development dollars.
There are a couple of ways you can use better ALM practices to improve productivity. The first is to get numbers about what you are really doing today to measure how often these things are happening. That is the first step that you need to take before you can take remedial action. The second one is just making sure that you have people working off of common data, that there is one way to represent the truth -- not just about one part of the lifecycle, but the entire lifecycle. You have to have the appropriate correlation between those disparate parts.
Gardner: Brian Kilcourse, to your point about CIOs trying to demonstrate value in real terms -- to be viewed as a productivity center and not a cost center -- do you think that this visibility into application development can give you, as a CIO, the tools you need to go to the CEO and say, “Here is what we are going to do, and when we are going to do it.”
Kilcourse: Certainly, if you as a CIO can map specific IT activities back to the business requirements that drive them, you have a much stronger set of metrics to indicate your alignment to the business than you have otherwise.
There is a huge disconnect between the front of a development process, which is always driven by business requirements, and the back-end of the process, which is always post-production maintenance. Between those two spaces there are a lot of things that go on. Somewhere along the line, in what I characterize as the business technology handoff, there is a big disconnect. Even with the best intentions, because of the complexity of the technology solutions available, the business really does lose track of what those guys down in IT are doing. The ability to overcome that chasm would go a long way toward solving the historical distrust between the two organizations.
Gardner: Do you sense that there are any particular vertical industries or even types of development projects that would benefit from an approach like Open ALM better or first? Where is the low-hanging fruit for this?
Kilcourse: That’s a great question. No business that I am aware of starts from scratch, either with a technology group or with the business that it supports. So any business that is trying to infuse the business process with the information asset in new ways is a candidate for this. I focus a lot on retail. And I can tell you from my experience in retail that those organizations are ripe for this kind of capability. There is a tremendous amount of distrust between the executive side of the house and the IT side of the house in that particular industry. I see it in other industries as well. But even in such highly correlated industries as financial services there is still tremendous room for improvement.
Gardner: Do you think Open ALM makes more sense for those organizations that are in fast-moving markets? Retail, of course, is like that because they have to anticipate, sometimes months in advance, the desires of a culture or a human fashion-driven impetus to buy. And then they have to act on that. Do you think that for those companies that are involved with fast time-to-market that this would be particularly important?
Kilcourse: Certainly fast time-to-market causes fast marketplace changes. The problem in IT, across so many sectors, is that the IT organization cannot respond quickly enough to changes in the business environment. That's not particular to retail. It happens everywhere. To the extent that you can eliminate the friction that exists in the delivery process within the IT organization -- so that the company actually is getting the maximum amount of traction for their investment dollars -- it's going to help.
Carey pointed out, and I thought it was a really good point, that there is a lot of wasted activity that goes on because of rework and focusing on the wrong requirements that might not have the biggest benefit -- but might be the thorniest problem that somebody faces. We don’t always have visibility into that. We find out only at the end when we tally up the score and find out where the dollars really went and why we had to go to Phase 2, Phase 3 and Phase 9 of a project, because we couldn’t get it all done in the first shot.
The ability to focus IT energy where it really matters most to the business is a big goal of most CIOs that I know.
Gardner: Carey, back to you. Do you concur that the fast time-to-market is a major impetus? If so, what other ones do you see in terms of where common views of practices and processes for application development are super-important?
Schwaber: I agree that fast time-to-market or any time-to-market pressure is definitely a reason you would need to have your ducks better aligned up front. But I don’t know any companies that don’t want to do a better job of satisfying their business customer’s demands for the same software in less time. That's a pretty universal desire, no matter whether you have a lot of time-to-market pressure in your industry or not.
So, I would say that we all want more for less. On top of that, I would add compliance requirements, where you need to confirm that the software you are developing does what the business wants it to, so that you know that you are producing accurate financial reports, or even that you have some kind of internal compliance requirement.
You know you are looking to get toward Capability Maturity Model® Integration (CMMI) Level 2 or Level 3, and you want some proof that you are actually going to do that. ALM capabilities can really help you in that area. So those kinds of pressures really matter. But any time we get away from the old halcyon ideal of the business customer telling the developer what to write, and then the developer immediately implementing it, we have opportunities for miscommunication. The more people, geographies, and technologies we involve, the more complex it all gets, and the more we really need help keeping track of all the dependencies between the things that we are doing. That is really describing any project these days.
Gardner: Of course, software seems to be playing a larger role in how companies operate. The technology, in a sense, becomes the company.
Schwaber: How many business processes are there that aren’t automated by IT today, or that aren't planned to be automated by IT within the next five years? Business processes that we can’t even imagine will be embodied in software eventually.
Gardner: Let’s get back to Marc Brown. Marc, at Borland you have come out with this Open ALM approach and you have had a lot of experience in development over the years. Do you have any metrics? Do you have any sense of what the pay-off here is through some of your existing customers -- maybe some beta examples? Do you have any typically "blank" percentage of savings? What are the initial payoffs from embracing Open ALM?
Brown: We certainly have seen the benefits with many organizations, which see the value in a number of ways. First, many organizations, because they are trying to improve their overall process, are attacking their deficiencies incrementally. We've got some organizations that have found their key issue today is poor requirements definition and management. They simply can't get requirements written accurately and in a way that they are testable up-front. This creates a huge amount of rework downstream.
We've got some really good examples where we have gone in and helped organizations improve their requirements definition and management process, and we found really dramatic improvements. On one occasion, an organization was able to achieve a 66 percent improvement just on the analysis side -- when they were going through, looking at a legacy system, trying to define the "as-is" business processes, and then taking that work and collaborating with the business stake-holders to construct the "to-be" business process. That was typically taking the organization anywhere from 12 to 20 weeks. They saw a 66 percent decrease in that time by leveraging not only the process guidance we were giving them, but also other technologies that we could apply to that area of the process.
Gardner: So that’s a substantial opportunity, and that was only, I suppose, a partial embrace of Open ALM.
Brown: Yeah, and that’s the way a lot of people are looking at this. We are going out and helping organizations first of all pinpoint their largest areas of deficit or gap. That could fall into any of the four critical solution areas where we're helping organizations, such as project and portfolio management, requirements definition and management as I mentioned, or lifecycle quality management. We are helping them understand where they have gaps or deficiencies today, and then incrementally improving that over time to embrace Open ALM as an incremental philosophy and approach.
Gardner: How has this so far impacted distributed types of development, where the organization has a number of development centers around the world, where perhaps you are outsourcing, and your outsourcing organizations are spread around the world? What’s the potential payback for those sorts of highly distributed development activities?
Brown: The real benefit we are seeing, and we will see more of this over time, is through the increased visibility. Again one of the biggest problems with organizations that are outsourcing today is the inability to aggregate or consolidate data from the outsourcer, the supplier, and the vendor, and to bring that together into a view, to have confidence that what’s happening from the outsourcer aligns with the overall business goals and original project plans. With our ability to help overlay our platform to bring together both the outsourcer’s technologies and data -- and then bring that together with the internal data -- we are able to bridge the gaps that they are having today, so that they have more confidence in the data they are seeing.
Gardner: How about Services Oriented Architecture (SOA)? It seems to me that as you break things down into services -- if we eradicate more of the silos around runtime environments -- we are at the same time knocking down silos in design-time. We might be able to get into some sort of a virtuous cycle, whereby we can adjust development to suit what’s going on in the field, which then is able to adapt to business requirements. That seems to be a big pay-off from SOA.
Let me throw that out to the crowd. What do you think is going to be the impact of SOA on development?
Brown: I'll take the first crack at this, Dana. I do think that SOA will certainly provide a lot of benefits, because it is one of the first practical approaches to help organizations realize the benefits of reuse. It's something that a lot of organizations had talked about time and time again. But there has been a lack of a common infrastructure or communications to bridge how that really happens over time. Many organizations simply said, “Look, my project’s not budgeted to create reusable code. We've got tight deadlines, and we have got a lot of work to do, and I am not going to have the time to do it in a reusable fashion.”
SOA gives people a good framework for how to actually structure applications to provide interoperability over time. I think this is a good approach for organizations to finally see the benefits of reuse, but it requires a lot of management and due diligence when they are developing and deploying particular components. Because as they develop new versions or new components to supplement existing ones, they have to have more visibility into usage levels -- who is using what, and so on.
Gardner: How about you, Brian? How do you see the evolution and maturation of ALM and the burgeoning ramp-up to SOA working either together, or perhaps at odds?
Kilcourse: Actually I don’t see them at odds at all. Because, first of all, SOA is an architectural concept, whereas Open ALM is a process concept or process model. In my company we just finished a piece of research on SOA and retail. What we found out is, if I could characterize something as a curiosity-understanding ratio, there is a lot of curiosity and very little understanding of what SOA really means in terms of how you get from "here" to "there."
As it relates to ALM -- going back to the original discussion that ALM covers everything from requirements all the way through post-production -- the notion of SOA breaking things down into reusable components or objects, business rules or metadata that can be redeployed almost at will as the business needs change, is a very powerful notion, especially in an environment such as the one that I service, where the business environment changes quite dramatically.
The challenge, of course, is taking something as broad as business requirements and breaking it down into tiny service-level objects that can then be understood and implemented by the IT organization. If you don’t have some way to map that back to the business requirements, you could have a worse bowl of spaghetti than you have now. In that context, these things are very tightly interwoven.
Gardner: How about you Carey? A last word on SOA and ALM?
Schwaber: Well, a lot of the great words have already been taken. But what I would add is that SOA introduces more dependencies among development projects than we are used to. It really requires us to have some way of coordinating our efforts across projects. In the past, projects often used completely different technologies for managing their lifecycles.
So this is yet another impetus for us to have a better way of connecting disparate tools from different vendors that use different technologies. Otherwise we end up not communicating the right data at the right time about services, about service levels, about service quality -- and we end up chasing our tails, trying to figure out what it is we have to do to build services that other people can reuse in effective ways that map to the business processes we are looking to automate.
Gardner: I suppose that quality and quality assurance are important when we go into these more componentized services. It seems to me that history has borne out that quality comes from getting it right the first time, and that really means business requirements.
Schwaber: SOA really does make quality that much more of an issue. We aren’t that good at it for basic, monolithic applications. Imagine how bad we’ll be at it with SOA?
I really see SOA giving us an opportunity to do better, because a defect in a service is propagated to every single application that consumes that service. But if the service is high-quality, that quality level is propagated, too. Essentially we have a mandate to do a much better job on quality in our services because the stakes are so much higher. We really need to bulletproof services that are built for reuse.
Gardner: Marc, to you now. As the stakes are getting higher, Borland has identified an important initiative. What is it that puts Borland into position to lead in this segment? Is it because of your heritage, acquisitions, the position you’ve taken on openness, or is it because of a "secret sauce?" What is it about Borland that makes you able to rise to this challenge?
Brown: It’s a couple of things. First, Borland in its overall business strategy is completely focused on helping organizations transform the way they do software, and we are not promoting any particular type of platform or development environment. We are all about helping people understand how to manage the actual processes that govern ALM. I think we have got a little bit of a secret sauce, because we are somewhat neutral from the platform or development-environment perspective. There are other vendors in this space who certainly have specific ties with a particular platform or development environment.
One thing that really distinguishes us from the others in the game is the fact that we are really focused on helping customers solve their true pains, which is giving them the metrics and measurements they need to be more successful at software. And at the same time, we support their current investments and future investments. So for us we’ve got full focus on ALM, and we are committed to supporting the platforms, the development environments, and the processes that organizations use today -- and those that they are going to use in the future.
Gardner: Great. Well, thanks very much. This has been a BriefingsDirect podcast discussion, a sponsored podcast about Application Lifecycle Management and the evolution of software development into a managed business process.
We’ve been joined by Carey Schwaber, a senior analyst at Forrester Research. Thanks, Carey.
Schwaber: My pleasure.
Gardner: Brian Kilcourse is the CEO of Retail Systems Alert Group, and a former senior vice president and CIO at Longs Drug Stores. Thanks, Brian.
Kilcourse: Thanks for having me.
Gardner: And Marc Brown is the vice president of product marketing at Borland. Thanks, Marc.
Brown: Thank you.
Gardner: This is Dana Gardner, your host and moderator, and the principal analyst at Interarbor Solutions. Thanks for listening.
Podcast Sponsor: Borland Software.
Listen to the podcast here.
Transcript of Dana Gardner’s BriefingsDirect podcast on Open ALM and ALM 2.0. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about the development of software as a managed business process, about seeking to gain more insight, more data and metrics, and more overall visibility into application development -- regardless of the parts, the components, the platforms, or the legacy. It’s really about gaining an operational integrity view of development, from requirements through production, and bringing it all into a managed process.
To help us through this discussion of Application Lifecycle Management (ALM) and the future of ALM, we have with us Carey Schwaber, a senior analyst at Forrester Research. Welcome to the show, Carey.
Carey Schwaber: Thank you.
Gardner: We're also going to be talking with Brian Kilcourse. He is the CEO of the Retail Systems Alert Group, and a former senior vice-president and CIO of Longs Drug Stores. Thanks for joining, Brian.
Brian Kilcourse: Thanks, Dana.
Gardner: Also joining us, an executive from Borland Software, Marc Brown. He is the vice president of product marketing. Welcome, Marc.
Marc Brown: How are you?
Gardner: Doing well, thanks. We want to talk about the "professionalism" of development. Some people have defined software development as an art or a science -- sometimes even a dark art. And to me that means a lack of visibility and understanding. Many times the business side of the house in an organization that’s doing software is a little perplexed by the process. They see what goes in as requirements, and then they see what comes out. But they often don’t understand what takes place in between.
I want to start off with you, Marc. Tell us a little bit about ALM as a concept and what Borland Software, in particular, has been doing in terms of evolving this from mystery into clarity?
Brown: Dana, that’s a great question. What Borland has been doing over the last several years is really focusing on how to help organizations transform software delivery or software development into a more managed business process. We think this is critical. If you look at most businesses today, IT organizations are expected to have very managed processes for their supply-chain systems and for their human resources systems, but when it comes to software delivery or software development, as you mentioned, there is this sense that software is some sort of an art.
We would really like to demystify this and put some rigor to the process that individuals and organizations leverage and use around software delivery. This will allow organizations to get the same predictability when they are doing software as when they are doing the other aspects of the IT organization. So our focus is really about helping organizations improve the way they do software, leveraging some core solution areas and processes -- but also providing more holistic insight of what’s going on inside of the application lifecycle.
Gardner: In January of 2007, you came out with a new take on ALM. You call it "Open ALM." I am assuming that that is opposed to "closed." What is it that’s different about Open ALM from what folks may have been used to?
Brown: Well, getting back to helping organizations with software development, it's Borland’s assertion that we need to do it in the context of how organizations themselves have developed or invested their own technology stack over time. So for us the way that we can help organizations apply more management and process-rigor to the application lifecycle is to give them insight into what’s going on. We do that through providing metrics and measurements, but in the context of their technologies, their processes, and their platforms. That is versus proposing a new solution that causes them to do a rip-and-replace across each of the vertical slices of the software development lifecycle.
Gardner: It sounds like an attempt to give the developers what they want, which is more choice over tools, new technologies -- perhaps even open-source technologies. And at the same time you give the business more visibility into the ongoing production and refinement of software. Is that a fair characterization?
Brown: It sure is. What we are all about with Open ALM is providing a platform that provides the practitioners of ALM the tools, processes, and choices they need or are skilled in, and then provide the transparency across that lifecycle to be able to collect the metrics necessary for the management team to actually manage those resources more predictably.
Gardner: Okay. My sense is that there are more options for companies when it comes to the tools and the utilities that they bring into the software development process. Let’s take a look at the state of the art of ALM. Carey Schwaber, can you give us a bit of an overview about ALM? And am I correct in assuming that there are more parts and therefore the potential for more complexity?
Schwaber: You're right. There certainly are. ALM isn't just about developers. It’s really about all the roles that come together to ensure that software meets business requirements -- from business analyst, to the architect, to the developer, to the project manager, the tester.
It just goes on and on. And it feels like every year we end up with more specialized development teams than we had the year before. Specialization is great, because it means that we have more skilled people doing jobs, but it also means that we have more functional silos. ALM is really about making sure that every one of those silos is united, that people really are marching toward the same goal -- to the same drumbeat. ALM is about helping them do that by coordinating all of their efforts.
Gardner: Are there some mega trends going on? It seems to me that offshoring, globalization, outsourcing, and business process management (BPM) add yet another layer of complexity.
Schwaber: There aren't many trends that you can’t tie back to a greater need for ALM. We have so many things going on that are increasing the degree to which our software is componentized. SOA is just one way in which our software is more componentized. Dynamic applications are also leading toward more componentized software, and that really means that we have more pieces to manage.
So in addition to functional silos, we've also got technology silos where we have a front-end in .NET, a back-end in Java, and maybe we're using a BPM tool to create the entire composite application. There are just so many ways that this gets more and more complex. Then, in addition to managing roles, you also have to manage all of these different components and their interdependencies.
Gardner: A major component of ALM is managing complexity. You came out with a report in August of 2006 that coined the term "ALM 2.0." What did you mean by that?
Schwaber: That’s actually about something that we see as a shift in the ALM marketplace. In the past, vendors have collected ALM solutions over time by acquiring support for each role. They’d acquire a tool that supported business analysts in the work that they do. Then, they’d acquire a tool that supported testers or test leads and the work that they do. They’d integrate the two, but that integration never ended up being very good. That integration is where ALM comes in. ALM lives in coordinating the work that the tester, the business analyst, and all the other roles involved really accomplish, to make sure that software meets business needs.
What we have seen is a trend where vendors are stopping the accumulation of piece-parts of ALM solutions, and starting to design platforms for ALM that really integrate any tool the company happens to be using, with the platform providing ALM almost as a service to the tools. People have the option of choosing a tool for their business analysts from one vendor, a development environment from another vendor, and a testing tool from a third vendor. They are plugging into the same ALM platform, knowing that they'll all work together to ensure that those roles are in harmony -- even if the vendors that produced the tools that support them are not in harmony.
Gardner: So even if we take a platform approach to ALM, it sounds like what you are saying is that heterogeneity -- when it comes to the moving parts of application and software development -- is no longer necessarily a liability, but if managed properly, can become an asset.
Schwaber: That is definitely one of the goals of ALM 2.0, to assume that integrating lots of different functional silos shouldn’t require that we go to a single vendor, because that’s not always possible. There may be a best-of-breed tool in a certain area that happens to be from a vendor that doesn’t have great support for the rest of the lifecycle. So the vision with ALM 2.0 is that you shouldn’t have to make that trade-off. You should be able to choose best-of-breed and integration.
Gardner: I assume then that this also affects people, IT, and process. How would an enterprise that buys into this vision get started? Do you have to attack this from all angles, or is there a more pragmatic starting point?
Schwaber: Hopefully the vendors will make it easy on you and you won’t have to buy everything in one fell swoop. The whole idea is that if you purchase one tool from a vendor that has an ALM 2.0 platform, the platform essentially comes with that. Any tools that happen to plug in to that are ones that also enable better and more flexible ALM, where the platform provides services like workflow, collaboration, reporting, and analytics. Maybe even some more infrastructure things like identity management or licensing could be in the platform, and those would be available to any tools that wanted to consume them and were designed to consume them.
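The platform-with-pluggable-tools idea Schwaber describes can be made concrete with a small sketch. This is purely an illustrative model, not Borland's actual product or any real vendor API: a central platform exposes shared services (here, workflow event recording and cross-tool reporting) that tools from different vendors consume after plugging in.

```python
# Hypothetical sketch of an ALM 2.0-style platform. All class, method,
# and tool names are illustrative assumptions, not a real product API.

class AlmPlatform:
    """Central platform offering shared services to plugged-in tools."""

    def __init__(self):
        self.events = []   # shared store backing the reporting service
        self.tools = {}    # registered tools, keyed by lifecycle role

    def register_tool(self, role, tool_name):
        """A requirements, development, or testing tool plugs in."""
        self.tools[role] = tool_name

    def record_event(self, role, activity, artifact):
        """Workflow service: any tool reports a lifecycle activity."""
        self.events.append({"role": role,
                            "activity": activity,
                            "artifact": artifact})

    def report(self):
        """Reporting service: one cross-tool view of the lifecycle."""
        return {role: [e for e in self.events if e["role"] == role]
                for role in self.tools}


# Tools from different (hypothetical) vendors share one platform.
platform = AlmPlatform()
platform.register_tool("analyst", "VendorA Requirements")
platform.register_tool("tester", "VendorB Test Suite")
platform.record_event("analyst", "defined", "REQ-101")
platform.record_event("tester", "verified", "REQ-101")
print(platform.report())
```

The point of the sketch is that neither tool knows about the other; both only talk to the platform, which is what lets a best-of-breed mix still produce one coherent lifecycle view.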
Gardner: Let’s go to Brian Kilcourse. Brian, you have been in the field as a CIO. Is ALM 2.0 vision-creep or is this real-world, in terms of how you want to approach software development?
Kilcourse: It sounds very real-world to me. As most CIOs have done, I spent untold amounts of money trying to turn the software development process from an artistic activity to an engineering activity. There were a bunch of good reasons for that. One of them is that commercial computing is now over 60 years old. And one would think, at this point, that we would have figured out a way to commoditize it and make it more reliable.
But it still remains, even after this long period of time, that software development is easily the most unreliable part of the whole value delivery equation that the IT department brings to the organization. So in broad-brush strokes, it makes great sense. The other thing that is important to underline, as Carey already mentioned, is that people like me have already spent a lot of money on tools. And just because there’s a new and better definition of how to approach those tools doesn’t mean that I am going to throw everything away.
Organizations that had quite a bit of time to get these tools embedded into their practices may have silos of expertise that aren’t going to be easily displaced. All of these things argue against stopping your business while you figure out a better way to develop software. What is important is that we desperately need a way to be able to track a development from the initial conception of the requirements, all the way through to delivery, production, and beyond.
There has to be a way to do that, and it has to be an overarching process that we can observe, measure, and report on. To that end it requires that all of these tools, whatever they are, be kept in sync, so that we can understand it and we can make it evident to the business -- so that the business can know that they are getting the right value for the right dollars. That’s always one of the biggest challenges that any CIO has -- how to show value.
Gardner: I suppose there’s been a kind of tension between sufficient openness and sufficient integration, and that they play off of one another. Is there anything about the state of the art now, where reaching this balance between sufficient openness and the ability to integrate and manage, comes into some sort of a harmonic convergence? Is there anything different about ALM today?
Kilcourse: The fact that we are talking about ALM 2.0 is a big step in the right direction. In our business applications we need to be able to integrate at the information level and the data level, even if they are different code sets or physically different databases. From the business perspective we need to come up with one coherent answer to any kind of a business question. No matter what the toolsets are, we have one way to see them from a business perspective. I think that’s very encouraging.
We know from our business application stack that this is possible. So if it’s possible for the business, why isn’t it possible for the IT organization? You can call this a "cobbler’s children" problem. Why don’t we have for ourselves what we promise to deliver to our business associates?
Gardner: Let’s take that back to Marc Brown at Borland. I assume that your goals with Open ALM are similar to the goals envisioned in ALM 2.0, and that you want to help CIOs get that visibility to demonstrate value. Do you see something new and different in the marketplace now about reaching this balance between openness and integration?
Brown: You know, I do. To extend what we were just talking about, one of the core differences that organizations are talking about today versus 10 years ago is that in the past we talked a lot about making sure we had optimized role-based solutions. We talked a lot about supporting specific activities and specific roles in a lifecycle. What we are finding today when we talk about application lifecycle, and I think Carey brought this up, the real critical piece is understanding the core processes that drive the overall lifecycle activities and assets between the individuals that make up a software delivery team.
So for Borland, one of the unique aspects of the way we are approaching this is that we are really focused on process-driven integration from a technology perspective. We're really looking at the individual processes, such as portfolio and project management or requirements definition and management, understanding those processes, bringing the technologies to bear to support them, and providing the integrations between individuals that support the horizontal software processes.
The other aspect is understanding that we need to do this, not just in a constrained set of tools that Borland brings to market, but also in the context of the tools that customers want to use and leverage. That means Borland technologies, other third-party technologies, and open-source technologies.
Gardner: I suppose one of the hurdles to getting this visibility in the past was that a lot of these components, tools, and testing environments have very different technologies and formats for how they apply and transfer data. What is it that Borland has done with Open ALM to allow the majority of these parts to work together? Is this about building modules and components? What does it take to get these things to actually be herded, if you can use the analogy of trying to herd cats?
Brown: The starting point is understanding that we need to deliver a platform based on an ALM meta-model, something that we can utilize and leverage to define all the various activities and assets that flow through the application lifecycle. Then we need to provide a set of core services that will use that meta-model and will support add-ons that are lacking today. One of the critical things is providing more comprehensive ALM-centric metrics and measurements that span the lifecycle -- versus being very vertically focused for a particular role and job. A lot of this is based on having an ALM data description that represents all the activities and data that are going to be passed through a lifecycle.
Gardner: So there’s an immediate tactical benefit of getting the data from the various parts, and there’s a larger strategic value of then analyzing that data, because you've got it in the holistic process-driven environment, a common environment. What sort of data and metrics do you expect companies to be able to derive from this, and how can they instantiate that back into the process?
Brown: The critical thing that businesses will be able to do is demystify what software development really is. It's about removing the "black box," and having data consolidation or aggregation so they can in fact measure what’s going on. Then they can determine what areas of the processes are working, and what areas potentially are bottlenecks or deficiencies. They can utilize the data that’s being collected across the application lifecycle, and filter that out to the broader business intelligence activities that the IT business is doing, to see what’s actually working, and what’s not working, within the IT organization.
Gardner: We're going to be able to give non-IT people some real visibility into timetables, quality assurance curves, dates for completion, and that sort of thing, which to me seems essential. If you are putting a new product or a new service in the market, you are going to be ramping up your marketing, ramping up inventory and supply chain, and are going to be looking into manufacturing, if that's the basis of the product. You really need to coordinate all these things with development, and that has been haphazard.
Am I reading more into this, or do you really plan to be able to give non-IT people these dials, and this kind of a dashboard by which to run their entire business -- but with greater visibility?
Brown: That is exactly what we are proposing. Borland is very committed right now on Open ALM to deliver a platform that allows organizations to leverage their own configured processes and technologies to gain the insights necessary to really start having confidence in what they are doing. That confidence is going to be increased by providing them the tools and technologies so they can track, measure, and improve their processes.
Gardner: Let's take it back to Carey Schwaber. Carey, in your analysis of the market is there a potential for a significant productivity boost by bringing visibility into software development and activities into the larger context of business development and go-to-market campaigns?
Schwaber: I think there is. There is a great deal of redundancy, where development efforts duplicate work that has already been accomplished, or just redundancy of documentation. Even when it’s not redundancy, the problem is that people are pursuing different goals -- when you have testers who are testing against out-of-date requirements, and the business analyst wants them testing against the newer requirements. We've got the problem of an overlap of efforts. Then we've got the problem of misaligned efforts. Together those really eat away at your productivity and waste precious development dollars.
There are a couple of ways you can use better ALM practices to improve productivity. The first is to get numbers about what you are really doing today to measure how often these things are happening. That is the first step that you need to take before you can take remedial action. The second one is just making sure that you have people working off of common data, that there is one way to represent the truth -- not just about one part of the lifecycle, but the entire lifecycle. You have to have the appropriate correlation between those disparate parts.
Gardner: Brian Kilcourse, to your point about CIOs trying to demonstrate value in real terms -- to be viewed as a productivity center and not a cost center -- do you think that this visibility into application development can give you, as a CIO, the tools you need to go to the CEO and say, “Here is what we are going to do, and when we are going to do it.”
Kilcourse: Certainly, if you as a CIO can map specific IT activities back to the business requirements that drive them, you have a much stronger set of metrics to indicate your alignment to the business than you have otherwise.
There is a huge disconnect between the front of a development process, which is always driven by business requirements, and the back-end of the process, which is always post-production maintenance. Between those two spaces there are a lot of things that go on. Somewhere along the line, in what I characterize as the business technology handoff, there is a big disconnect. Even with the best intentions, because of the complexity of the technology solutions available, the business really does lose track of what those guys down in IT are doing. The ability to overcome that chasm would go a long way toward solving the historical distrust between the two organizations.
Gardner: Do you sense that there are any particular vertical industries or even types of development projects that would benefit from an approach like Open ALM better or first? Where is the low-hanging fruit for this?
Kilcourse: That’s a great question. No business that I am aware of starts from scratch, either with a technology group or with the business that it supports. So any business that is trying to infuse the business process with the information asset in new ways is a candidate for this. I focus a lot on retail. And I can tell you from my experience in retail that those organizations are ripe for this kind of capability. There is a tremendous amount of distrust between the executive side of the house and the IT side of the house in that particular industry. I see it in other industries as well. But even in such obviously highly correlated industries like financial services there is still a tremendous room for improvement.
Gardner: Do you think Open ALM makes more sense for those organizations that are in fast-moving markets? Retail, of course, is like that because they have to anticipate, sometimes months in advance, the desires of a culture or a human fashion-driven impetus to buy. And then they have to act on that. Do you think that for those companies that are involved with fast time-to-market that this would be particularly important?
Kilcourse: Certainly fast time-to-market causes fast marketplace changes. The problem in IT across so many factors is that the IT organization cannot respond quickly enough to changes in the business environment. That's not particular to retail. It happens everywhere. To the extent that you can eliminate the friction that exists in the delivery process within the IT organization -- so that the company actually is getting the maximum amount of traction for their investment dollars -- it's going to help.
Carey pointed out, and I thought it was a really good point, that there is a lot of wasted activity that goes on because of rework and focusing on the wrong requirements that might not have the biggest benefit -- but might be the thorniest problem that somebody faces. We don’t always have visibility into that. We find out only at the end when we tally up the score and find out where the dollars really went and why we had to go to Phase 2, Phase 3 and Phase 9 of a project, because we couldn’t get it all done in the first shot.
The ability to focus IT energy where it really matters most to the business is a big goal of most CIOs that I know.
Gardner: Carey, back to you. Do you concur that the fast time-to-market is a major impetus? If so, what other ones do you see in terms of where common views of practices and processes for application development are super-important?
Schwaber: I agree that fast time-to-market or any time-to-market pressure is definitely a reason you would need to have your ducks better aligned up front. But I don’t know any companies that don’t want to do a better job of satisfying their business customer’s demands for the same software in less time. That's a pretty universal desire, no matter whether you have a lot of time-to-market pressure in your industry or not.
So, I would say that we all want more for less. On top of that, I would add compliance requirements, where you need to confirm that the software you are developing does what the business wants it to, so that you know that you are producing accurate financial reports, or even that you have some kind of internal compliance requirement.
You know you are looking to get toward Capability Maturity Model® Integration (CMMI) Level 2 or Level 3, and you want some proof that you are actually going to do that. ALM capabilities can really help you in that area. So those kind of pressures really matter. But any time we get away from the old halcyon ideal of the business customer telling the developer what to write, and then the developer immediately implementing it, we have opportunities for miscommunication. The more people, geographies, and technologies we involve, the more complex it all gets, and the more we really need help keeping track of all the dependencies between the things that we are doing. That is really describing any project these days.
Gardner: Of course, software seems to be playing a larger role in how companies operate. The technology, in a sense, becomes the company.
Schwaber: How many business processes are there that aren’t automated by IT today, or aren’t planned to be automated by IT within the next five years? Business processes that we can’t even imagine will be embodied in software eventually.
Gardner: Let’s get back to Marc Brown. Marc, at Borland you have come out with this Open ALM approach and you have had a lot of experience in development over the years. Do you have any metrics? Do you have any sense of what the pay-off here is through some of your existing customers -- maybe some beta examples? Do you have any typically "blank" percentage of savings? What are the initial payoffs from embracing Open ALM?
Brown: We certainly have seen the benefits with many organizations, which see the value in a number of ways. First, many organizations, because they are trying to improve their overall process, are attacking their deficiencies incrementally. We've got some organizations that have found their key issue today is poor requirements definition and management. They simply can't get requirements written accurately and in a way that they are testable up-front. This creates a huge amount of rework downstream.
We've got some really good examples where we have gone in and helped organizations improve their requirements definition and management process, and we found really dramatic improvements. On one occasion, an organization was able to achieve a 66 percent improvement just on the analysis side -- when they were going through, looking at a legacy system, trying to define the "as-is" business processes, and then taking that work and collaborating with the business stakeholders to construct the "to-be" business process. That was typically taking the organization anywhere from 12 to 20 weeks. They saw a 66 percent decrease in that time by leveraging not only the process guidance we were giving them, but also other technologies that we could apply to that area of the process.
Gardner: So that’s a substantial opportunity, and that was only, I suppose, a partial embrace of Open ALM.
Brown: Yeah, and that’s the way a lot of people are looking at this. We are going out and helping organizations first of all pinpoint their largest areas of deficit or gap. That could fall into any of the critical solution areas where we are helping organizations: project and portfolio management; requirements definition and management, as I mentioned; or lifecycle quality management. We are helping them understand where they have gaps or deficiencies today, and then incrementally improving that over time to embrace Open ALM as an incremental philosophy and approach.
Gardner: How has this so far impacted distributed types of development, where the organization has a number of development centers around the world, where perhaps you are outsourcing, and your outsourcing organizations are spread around the world? What’s the potential payback for those sorts of highly distributed development activities?
Brown: The real benefit we are seeing, and we will see more of this over time, is through the increased visibility. Again one of the biggest problems with organizations that are outsourcing today is the inability to aggregate or consolidate data from the outsourcer, the supplier, and the vendor, and to bring that together into a view, to have confidence that what’s happening from the outsourcer aligns with the overall business goals and original project plans. With our ability to help overlay our platform to bring together both the outsourcer’s technologies and data -- and then bring that together with the internal data -- we are able to bridge the gaps that they are having today, so that they have more confidence in the data they are seeing.
Gardner: How about Services Oriented Architecture (SOA)? It seems to me that as you break things down into services -- if we eradicate more of the silos around runtime environments -- we are at the same time knocking down silos in design-time. We might be able to get into some sort of a virtuous cycle, whereby we can adjust development to suit what’s going on in the field, which then is able to adapt to business requirements. That seems to be a big pay-off from SOA.
Let me throw that out to the crowd. What do you think is going to be the impact of SOA on development?
Brown: I'll take the first crack at this, Dana. I do think that SOA will certainly provide a lot of benefits, because it is one of the first practical approaches to help organizations realize the benefits of reuse. It's something that a lot of organizations had talked about time and time again. But there has been a lack of a common infrastructure or communications to bridge how that really happens over time. Many organizations simply said, “Look, my project’s not budgeted to create reusable code. We've got tight deadlines, and we have got a lot of work to do, and I am not going to have the time to do it in a reusable fashion.”
SOA gives people a good framework for how to actually structure applications to provide interoperability over time. I think this is a good approach for organizations to finally see the benefits of reuse, but it requires a lot of management and due diligence when they are developing and deploying particular components, because as they develop new versions or new components to supplement existing ones, they have to have more visibility into usage levels -- who is using what, and so on.
Gardner: How about you, Brian? How do you see the evolution and maturation of ALM and the burgeoning ramp-up to SOA working either together, or perhaps at odds?
Kilcourse: Actually I don’t see them at odds at all. Because, first of all, SOA is an architectural concept, whereas Open ALM is a process concept or process model. In my company we just finished a piece of research on SOA and retail. What we found out is, if I could characterize something as a curiosity-understanding ratio, there is a lot of curiosity and very little understanding of what SOA really means in terms of how you get from "here" to "there."
As it relates to ALM -- going back to the original discussion that ALM covers everything from requirements all the way through post-production -- the notion of SOA breaking things down into reusable components or objects, business rules or metadata that can be redeployed almost at will as the business needs change, is a very powerful notion, especially in an environment such as the one that I service, where the business environment changes quite dramatically.
The challenge, of course, is taking something as broad as business requirements and breaking them down into tiny service-level objects that can then be understood and implemented by the IT organization. If you don’t have some way to map that to the business requirements, you could have a worse bowl of spaghetti than you have now. In that context, these things are very tightly interwoven.
Gardner: How about you Carey? A last word on SOA and ALM?
Schwaber: Well, a lot of the great words have already been taken. But what I would add is that SOA introduces more dependencies among development projects than we are used to. It really requires us to have some way of coordinating our efforts across projects. In the past, projects often used completely different technologies for managing their lifecycles.
So this is yet another impetus for us to have a better way of connecting disparate tools from different vendors that use different technologies. Otherwise we end up not communicating the right data at the right time about services, about service levels, about service quality -- and we end up chasing our tails, trying to figure out what it is we have to do to build services that other people can reuse in effective ways that map to the business processes we are looking to automate.
Gardner: I suppose that quality and quality assurance are important when we go into these more componentized services. It seems to me that history has borne out that quality comes from getting it right the first time, and that really means business requirements.
Schwaber: SOA really does make quality that much more of an issue. We aren’t that good at it for basic, monolithic applications. Imagine how bad we’ll be at it with SOA?
I really see SOA giving us an opportunity to do better, because a defect in a service is propagated to every single application that consumes that service. But if the service is high-quality, that quality level is propagated, too. Essentially we have a mandate to do a much better job on quality in our services because the stakes are so much higher. We really need to bulletproof services that are built for reuse.
Gardner: Marc, to you now. As the stakes are getting higher, Borland has identified an important initiative. What is it that puts Borland into position to lead in this segment? Is it because of your heritage, acquisitions, the position you’ve taken on openness, or is it because of a "secret sauce?" What is it about Borland that makes you able to rise to this challenge?
Brown: It’s a couple of things. First, Borland in its overall business strategy is completely focused on helping organizations transform the way they do software, and we are not promoting any particular type of platform or development environment. We are all about helping people understand how to manage the actual processes that govern ALM. I think we have got a little bit of a secret sauce, because we are somewhat neutral from the platform or development-environment perspective. There are other vendors in this space who certainly have specific ties with a particular platform or development environment.
One thing that really distinguishes us from the others in the game is the fact that we are really focused on helping customers solve their true pains, which is giving them the metrics and measurements they need to be more successful at software. And at the same time, we support their current investments and future investments. So for us we’ve got full focus on ALM, and we are committed to supporting the platforms, the development environments, and the processes that organizations use today -- and those that they are going to use in the future.
Gardner: Great. Well, thanks very much. This has been a BriefingsDirect podcast discussion, a sponsored podcast about Application Lifecycle Management and the evolution of software development into a managed business process.
We’ve been joined by Carey Schwaber, a senior analyst at Forrester Research. Thanks, Carey.
Schwaber: My pleasure.
Gardner: Brian Kilcourse is the CEO of Retail Systems Alert Group, and a former senior vice president and CIO at Longs Drug Stores. Thanks, Brian.
Kilcourse: Thanks for having me.
Gardner: And Marc Brown is the vice president of product marketing at Borland. Thanks, Marc.
Brown: Thank you.
Gardner: This is Dana Gardner, your host and moderator, and the principal analyst at Interarbor Solutions. Thanks for listening.
Podcast Sponsor: Borland Software.
Listen to the podcast here.
Transcript of Dana Gardner’s BriefingsDirect podcast on Open ALM and ALM 2.0. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.
Monday, April 23, 2007
BriefingsDirect SOA Insights Analysts on SOA Mashups and the Oracle-Hyperion Deal
Edited transcript of weekly BriefingsDirect[TM/SM] SOA Insights Edition, recorded March 2, 2007.
Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Interarbor Solutions at 603-528-2435.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Vol. 13, a weekly discussion and dissection of Services Oriented Architecture (SOA) related news and events with a panel of industry analysts and guests. I am your host and moderator Dana Gardner, principal analyst at Interarbor Solutions, ZDNet blogger and Redmond Developer News magazine columnist.
Our panel this week -- and that is the week of Feb. 26, 2007 -- consists of Steve Garone, a former IDC group vice president, founder of the AlignIT Group, and an independent industry analyst. Welcome back to the show, Steve.
Steve Garone: Thanks, Dana. Great to be here.
Gardner: Also, joining us once again, Joe McKendrick, a research consultant, columnist at Database Trends and a blogger at ZDNet and ebizQ. Welcome back, Joe.
Joe McKendrick: Good morning, Dana.
Gardner: Also joining us, Tony Baer, principal at OnStrategies, and blogger at Sandhill.com and ebizQ. Thanks for joining, Tony.
Tony Baer: Hi, Dana.
Gardner: And also once again, joining us is Jim Kobielus. He is a principal analyst at Current Analysis.
Jim Kobielus: Hi, Dana. Hi, everybody.
Gardner: Joining us for the first time, and we welcome him, Dave Linthicum. He is CEO at the Linthicum Group, an SOA advisory consulting firm. Dave also writes the Real World SOA blog for InfoWorld and is the host of the SOA Report podcast, now in its third year. He is also a software as a service (SaaS) blogger for Intelligent Enterprise, and has a column on SOA topics for Web Services Journal. Welcome, Dave.
Dave Linthicum: It is great to be here.
Gardner: We are going to have a couple of meaty, beefy topics today on the SOA and, interestingly enough, Enterprise 2.0 arena. We are going to be discussing and defining the concept around "mashup governance." We are also going to discuss some merger and acquisition news this week, with a deal announced between Hyperion and Oracle, whereby Oracle will acquire Hyperion for $3.3 billion.
First off, let's go to this subject of "mashup governance." Dave, I believe you defined this to a certain extent in a recent blog, and I wanted to give you the opportunity to help us understand what you mean by "mashup governance" -- and why it’s important in an Enterprise 2.0 environment, and perhaps what the larger implications may be for SOA.
Linthicum: Sure. Thank you very much. That was a feature article, by the way, that InfoWorld sponsored, and it’s still up on their website. It basically talked about how mashups and SOA are coming together. People are becoming very active in creating these ad-hoc applications within the enterprise, using their core systems as well as things like Google Maps and the Google APIs, some of the things being put up by Yahoo!, Salesforce.com, and all these other things that are mashable. There's a vacuum and a need to create a governance infrastructure to not only monitor and track these mashups, but also learn to use them as a legitimate resource within the enterprise.
Right now, there doesn’t seem to be a lot of thinking or products in that space. The mashup seems to be very much like a Wild West, almost like rapid application development (RAD) was 15 years ago. As people are mashing these things up, the SOA guys, the enterprise architecture guys within these organizations are coming behind them and trying to figure out how to control it.
Gardner: An element of control to an otherwise ad hoc and loosey-goosey approach to creating Web services-based UIs and portal interfaces?
Linthicum: That’s absolutely right. Ultimately these things can become legitimate and very valuable applications within the enterprise. I have a client, for example, that has done a really good job in mashing up their existing sales tracking system, inventory control system, and also delivery system with the Google Maps API. Of course everybody and their brother uses that as a mashup example, but it's extremely valuable.
They are able not only to provide maps for the best delivery routing, but also, since Google Maps right now has traffic reports, they can give these to the truck drivers and delivery agents at the beginning of the day, and productivity has gone up 25 percent. Over a year, that is going to save them more than $1.5 million. And that’s just a simple mashup that was done in a week by a junior developer there. Now, they are trying to legitimize that and put it back into their SOA project, along with other external APIs. They are in there trying to figure it out.
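The kind of mashup described here -- joining internal delivery records to an external routing service -- can be sketched in a few lines. Everything below is hypothetical: `fetch_route` is an illustrative stand-in for whatever mapping or traffic API the client actually used, not a real Google Maps call, and the data is invented.

```python
# Hypothetical sketch of the delivery-routing mashup described above.
# fetch_route is a stand-in for an external mapping/traffic API call.

def fetch_route(stops):
    """Simulate a traffic-aware reordering of delivery stops.

    A real implementation would call an external routing service;
    here we simply reverse the list to stand in for an optimized order.
    """
    return list(reversed(stops))

def build_daily_route(deliveries):
    """Join internal delivery records with the external routing service."""
    stops = [d["address"] for d in deliveries if d["status"] == "pending"]
    return fetch_route(stops)

deliveries = [
    {"order": 101, "address": "12 Elm St", "status": "pending"},
    {"order": 102, "address": "9 Oak Ave", "status": "delivered"},
    {"order": 103, "address": "4 Pine Rd", "status": "pending"},
]

print(build_daily_route(deliveries))  # ['4 Pine Rd', '12 Elm St']
```

The point of the sketch is the shape of the mashup: internal data filtered locally, then handed to an external service for the value-add, with the result going straight to the people who use it.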
Gardner: So perhaps through this notion of combining what is available on an internal basis -- either as a Web service or moving toward SOA -- the enterprises can also start tapping into what is available on the Web, perhaps even through a software-as-a-service relationship or license, and put together the best of internal data content process as well as some of these assets coming off of the Web, whether it is a map, an API, or even some communications, groupware, or messaging types of functions.
Linthicum: I think you put it best that I have ever heard it. Absolutely. That’s the way it’s coming forward, as we are building these SOAs within these enterprises today. We have the added value of being able to see these remote services, deal with these remote APIs, and bring that value into the organization -- and that’s typically free stuff. So, we are using applications that we are gaining access to, either through a subscription basis in the case of Salesforce.com -- and they are, by the way, hugely into the mashups that are coming down the pipe -- free services that we are getting from Google, or even services that cost very little.
Putting those together with the existing enterprise systems breathes new life into them, and we can basically do a lot of things faster and get applications up and running much faster than we could in the past. Ultimately, there is a tremendous amount of value for people who are using the applications within these environments. Typically, it’s the mid-market or the mid-sized companies that are doing this.
Gardner: Or even department levels in larger companies that don’t need to go through IT to do this, right?
Linthicum: That’s right. Absolutely. That’s how Salesforce.com got started. In other words, people were buying Salesforce.com with their credit cards and expensing it, and they were working around IT. We are seeing the same movement here. It's happening at the grassroots level within the department and it's moving up strategically within the IT hierarchy.
Gardner: Okay, so it sounds straightforward: a good productivity boost, moving toward the paradigm of mashable services. Why do we need governance?
Linthicum: Well, you really need a rudimentary notion of governance when you deal with any kind of application or service that works within the organization. Governance is a loaded word. If you go to the Enterprise Architecture Conference -- and I am speaking at it at the end of this month in New Orleans -- they consider governance a management practice. It’s running around knocking people on their heads if they are not using the correct operating systems, databases, those sorts of things. In the SOA world, as Joe McKendrick can tell you, it's about a technical infrastructure to monitor and control the use of services. Not only is it about control, but it is about productivity. I can find services. I can leverage services, and they are managed and controlled on my behalf. So, I know I am not using something that’s going to hurt me.
The same thing needs to occur within the mashup environment. For mashing up, there are lots of services that we don’t control or that exist outside on the Internet. It's extremely important that we monitor these services in a governance environment, that we catalogue them, understand when they are changed, and have security systems around them, so they don’t end up hurting productivity or our existing IT infrastructure. We don’t want to take one step forward and two steps back.
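A minimal sketch of what cataloguing and change detection for external services might look like, under the assumptions that a simple fingerprint of a service's interface description is enough to detect change, and that approval is a boolean flag. The `ServiceCatalog` class and all names are invented for illustration.

```python
# Illustrative mashup-governance catalogue: register external services,
# fingerprint their interface description, and flag unapproved or
# changed services before a mashup relies on them.

import hashlib

def _fingerprint(interface_desc):
    return hashlib.sha256(interface_desc.encode("utf-8")).hexdigest()

class ServiceCatalog:
    def __init__(self):
        self._entries = {}  # name -> {url, fingerprint, approved}

    def register(self, name, url, interface_desc, approved=False):
        self._entries[name] = {
            "url": url,
            "fingerprint": _fingerprint(interface_desc),
            "approved": approved,
        }

    def check(self, name, interface_desc):
        """Return True only if the service is approved and unchanged."""
        entry = self._entries.get(name)
        if entry is None or not entry["approved"]:
            return False
        return entry["fingerprint"] == _fingerprint(interface_desc)

catalog = ServiceCatalog()
catalog.register("maps", "https://maps.example.com/api",
                 "getRoute(stops) -> orderedStops", approved=True)

print(catalog.check("maps", "getRoute(stops) -> orderedStops"))        # True
print(catalog.check("maps", "getRoute(stops, mode) -> orderedStops"))  # False: interface changed
```

A real registry would track versions, owners, and security policy as well, but the core idea is the same: know what you depend on, and notice when it changes.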
Gardner: I read your blog in response to this, Jim Kobielus, and you seem to think that bringing too much governance to this might short-circuit its value -- that it’s the loosey-goosey, ad-hoc nature that brings innovation and productivity. Do you think that what we think of as traditional SOA governance is too rigid and strict and requires some interaction with IT? Or are we talking about some other kind of governance, when it comes to mashups?
Kobielus: Well, Dave made that same point in his article, which is that the whole notion of mashups is half-way to anarchy, as it were, creative anarchy. In other words, empowering end-users, subject-matter experts, or those who simply have a great idea. They typically slap together something from found resources, both internal and external, and provision it out so that others can use it -- the creative synthesis.
This implies that governance in the command-and-control sense of the term might strangle the loosey-goosey goose that laid the golden egg. So, there is that danger of over-structuring the design-time side of mashups to the point where it becomes yet another professional discipline that needs to be rigidly controlled. You want to encourage creativity, but you don’t want the mashers to color too far outside the lines.
Dave hit the important points here. When you look at mashup governance, you consider both the design-time governance and the run-time governance. Both are very important. In other words, if these mashups are business assets, then yes, there needs to be a degree of control, oversight, or monitoring. At the design-time level, how do you empower the end-users, the creative people, and those who are motivated to build these mashups without alienating them by saying, "Well, you've got to go to a three-week course, you've got to use these tools, and you've got to read this book and follow these exact procedures in order to mashup something that you want to do?" That would clearly stifle creativity.
I did a special section on SOA for Network World back in late 2005. I talked to lots of best practice or use cases of SOA governance on design time, and the ones that I found most interesting were companies like Standard Life Assurance of Scotland. What they do is provide typical command-and-control governance on design time, but they also provide and disseminate through the development teams a standard SOA development framework, a set of tools and templates, that their developers are instructed to use. It's simply the broad framework within which they will then develop SOA applications.
What I am getting at here is that when you are dealing with the end users who build the mashups, you need to think in terms of, “Okay. Tell them in your organization that we want you to very much be creative in putting things together, but here is a tool, an environment, or enabling technology that you can use to quickly get up to speed and begin to do mashing up of various resources. We, the organization that employs you, want you, and strongly urge you, to use these particular tools if you wish your mashups to be used far and wide within the organization.
"If you wish to freelance it internally, go ahead, but doesn’t mean we are necessarily going to publish out those mashups so that anybody can see them. It means we are not necessarily going to support those mashups over time. So, you may build something really cool and stick it out there, but nobody will use it and ultimately it won’t be supported. Ultimately, it will be a failure, unless you use this general framework that we are providing."
Gardner: I think we need to re-examine some of these definitions. I'm not sure what we are talking about with mashup governance is either "run time" or "design time." It strikes me as "aggregation time." Perhaps we don’t even need to use existing governance and/or even federate to existing governance. Perhaps it's something in the spirit of Web 2.0 and Enterprise 2.0, as simple as a wiki that everyone can see and contribute to, saying, “Here is how we are going to do our mashups for this particular process."
Let’s say, it is a transportation process, "Here are the outside services that we think are worthwhile. Here are the APIs, and here is a quick tutorial on how to bring them into this UI." Wouldn’t that be sufficient? Let us take that over to Steve Garone.
Garone: I am going to push back on that a little bit. What we are wrestling with here is achieving a balance between encouraging creativity and creating new and interesting functionality that can benefit business, and keeping things under control. The best way to look at that balance is to understand what the true risks are.
The way I see it, there are several major areas. The first has to do with what I call external liability, meaning that if you, for example, publish a mashup to a customer base that has a piece of functionality you got off the web, and for some reason that has wrong information and does the customers some harm, who is responsible for that? How are you going to control whether that happens or not? The second has to do with what I call internal risk, which has to do with making available to the outside world information that is sensitive to your organization. In that case, a little more than what you described is going to be necessary, and can also leverage some of the governance infrastructure that people are building generally and relative to SOA.
Gardner: So, you are thinking that these mashups would be available not only to an internal constituency in the organization but across its users, its visitors, and the public?
Garone: Absolutely. Well, I think they can be, and I think there will be organizations and groups within organizations who will want to do that, driven primarily by the business opportunities that it can afford.
Gardner: But, if this is the general public accessing some of these mashups, wouldn’t the risk that they would take accessing the individual services on the web on their own be sufficient? Why would you need to be concerned about liability or other risk issues when these are already publicly facing APIs and services and so forth?
Garone: Conceptually, you wouldn’t, but we all know that in this world anybody can sue for anything, and the reality is that if I go to a company’s website and use a function that incorporates something that they grabbed off the web, and it does me harm, the first place I am going to look is the site that I went to in the first place.
Gardner: Well, you might have stumbled upon the category here that will warm the cockles of many lawyers’ hearts -- mashup risk and assessment.
Garone: Exactly. And, it's one of the problems that governance in general attempts to solve. So, it is relevant here. My bottom-line point is that achieving balance is going to involve some careful consideration of what the true risks are. Maybe resolving that involves a combination of the kinds of solution that you just talked about in some cases. In other cases, they are going to have to leverage the governance infrastructure that exists in other areas within a company.
Gardner: Your point is well taken. This is business, it is serious, and it needs to be considered and vetted seriously -- if it is going to be something that you are using for your internal employees’ use, as well as if it becomes public-facing. How would you come down on this, Joe McKendrick? Do you see the balance between something as unstructured as a blog or wiki being sufficient, or do we need to bake this into IT, get policies and governance, and take six years to get a best practices manifesto on it?
Garone: I did not recommend that, Dana.
Gardner: I know. I'm going from one extreme to the other.
McKendrick: If we do it in two years, that would be fine. But what I’d love to know is, what exactly is the difference between a mashup and a composite application that we have been addressing these past few years within the SOA sphere? The composite application is a service-level application or component that draws in data from various sources, usually internal to the organization, and presents that through a dashboard, a portal, or some type of an environment. It could be drawn from eight mainframes running across the organization.
Obviously, the governance that we have been working so hard on in recent years to achieve in SOA is being applied very thoroughly to the idea of composite applications. Now, what is the difference between that and a mashup? Other than the fact that mashups may be introducing external sources of data, I really don’t see a difference. Therefore, it may be inconsistent to "let a thousand flowers bloom" on the mashup side and have these strict controls on the composite application as we have defined in recent years.
Linthicum: The reality is that there is no difference. You are correct, Joe, and I point that out in the article as well. There are really two kinds of mashups out there: the visual mashups, which are what we are seeing today, where people are taking basically all of these interface APIs and using the notions of AJAX and other rich, dynamic clients, and then binding them together to form something that is new.
The emerging mashups are non-visual. They are basically analogous to, though not exactly the same as, traditional composite applications -- if you can call them traditional -- in the SOA realm today. They have to be controlled, managed, governed, and developed in much the same way.
Kobielus: There is a difference here. I agree with what Dave just said that mashups are not qualitatively different from composite apps, but there is a sort of difference in emphasis, in the sense that a mashup is regarded as being more of a user-centric paradigm. The end-user is empowered to mash these things up from found resources.
It relates to this notion that I am developing for a piece on user-centric identity as a theme in the identity management space. The whole Web 2.0 paradigm is user-centric -- users reaching out to each other and building communities, and sharing the files and so on. Mashing up stuff and then posting that all to their personal sites is very much a user-centric paradigm.
There's another observation I want to make. I agree that the intellectual property lawyers are starting to salivate over mashups invading or encroaching on their clients’ rights. Actionable mashups are good from a litigator’s point of view. In terms of governance then, organizations need to define different mashup realms that they will allow. There might be intra-mashes within their Intranet -- "Hey, employee, you can mash up all manner of internal resources that we own to your heart’s delight. We will allow intra-mashes, even extra-mashes within the extranet, with our trusted partners. You can mash up some of their resources as well, whatever they choose to expose within the extranet. And then, in terms of inter-mash or Internet wide mashing, we’ll allow some of it. You can mash Google. You can mash the other stuff of the folks who are more than happy to let you mash. But, as an organization, your employer, we will monitor and block and keep you from mashing up stuff that conceivably we might be sued for."
Gardner: So you could take six years and require a manifesto. Thank you, Jim Kobielus. Tony Baer, let's take it to you. Do you see this as a problem in terms of the governance, or should we keep it loosey-goosey? Should we not get into the structure, and do you think that -- to Jim’s point -- a mashup is conceptually different from a composite application because of the user-centric, user-driven, keep-IT-out-of-it aspect?
Baer: We've got a couple of questions there. I’ll deal first with the technical one, which is that composite apps and mashups are basically trying to do the same thing, but they're doing it in different ways. Composite apps, at least as I've understood the definition, came out of an SOA environment. That implies some structure there, whereas mashups essentially emerged with Web 2.0 and AJAX-style programming, which lets anybody do anything anywhere with this very loosely structured scripting language. There are practically no standards in terms of any type of vocabulary.
So, there is a bit of a "Wild West" atmosphere there. As somebody else said, you really need to take a two-tiered approach. On one hand, you don’t want to stifle the base of innovation, a kind of a skunk works approach. Having a walled garden there, where you're not going to be doing any damage to the outside but you are going to promote collaboration internally, probably makes some sense. On the other hand, even if the information did not originate from your site, if you're retransmitting it there is going to be some implication that you are endorsing it, at least by virtue of it coming under your logo or your website.
Gardner: Yeah, the perception of the user is going to be on you, regardless of the origins of the service.
Baer: Exactly. So, you need a tiered approach. I was taking a note here earlier. You really need to exert control on the sources of information. Therefore, for the types of information that are exposed internally -- for example something from an internal financial statement -- you need to start applying some of the rules that you've already developed around internal databases. Different classes of users have a right to know and to see it and, in some cases, some read-write privileges.
You need to apply similar types of principles at the source of information. Therefore, if I have access to this, this means implicitly that I can then mash it up, but you have to really govern it at the original point of access to that information, at least with regard to internal information. With external information, it probably needs to go through the same type of clearance that you would exert for anything that goes out on the corporate website, the external website.
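The tiered approach described here -- internal sources deferring to the access rules that already govern them, partner sources gated by trust relationships, and public sources going through a clearance step -- could be sketched as a simple policy check. The tiers, roles, and source names below are all invented for illustration.

```python
# Illustrative policy check for mashup data sources, following the
# tiered approach described above. Tiers, roles, and rules are invented.

SOURCE_TIERS = {
    "internal_financials": "internal",   # defer to existing database ACLs
    "partner_inventory":   "extranet",   # trusted-partner resources
    "public_maps":         "internet",   # must be cleared like website content
}

def may_mash(source, user_roles, cleared_external=frozenset()):
    """Decide whether a user may include a source in a mashup."""
    tier = SOURCE_TIERS.get(source)
    if tier == "internal":
        # Reuse the access rules already governing the source itself.
        if source == "internal_financials":
            return "finance" in user_roles
        return True
    if tier == "extranet":
        return "partner_access" in user_roles
    if tier == "internet":
        # External sources need the same clearance as public-website content.
        return source in cleared_external
    return False  # unknown sources are blocked by default

print(may_mash("internal_financials", {"finance"}))                      # True
print(may_mash("public_maps", set(), cleared_external={"public_maps"}))  # True
print(may_mash("unknown_feed", {"finance"}))                             # False
```

The design choice worth noting is the last line: governing at the point of access means anything not registered in some tier is denied, which is what keeps the "Wild West" from leaking onto the corporate website.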
Gardner: So, your existing policies and access privileges, your federated ID management brought up into a policy level, that will all play into this and it could help mitigate this concern around the right balance.
Baer: Well, put it this way, it’s a step toward that direction.
Gardner: I want to offer another possibility here. I was thinking about the adage that nobody was fired for using IBM, which was a common saying not that long ago. What if we were to take that same mentality and apply it here -- that if you're going to do mashups, make sure they are Windows Live mashups, or Google mashup services, or maybe Salesforce.com? So, is there an opportunity on the service provider side to come up with a trusted set of brands that the IT people and the loosey-goosey ad-hoc mashup developers could agree on to use widely? They could all rally around a particular set of de-facto industry standard services? That would be perhaps the balance we're looking for.
What do you think about that, Steve?
Garone: That can certainly be a realistic part of how it’s done, and it gets back to something someone mentioned earlier about composite applications. We talked about the similarities and the differences. One of the differences I see is when I think back to when people started building applications from software components. There was a flood of products put on the market to manage that process in terms of cataloguing and putting into libraries trusted components with descriptions and APIs that conform to standards, to try to rein in people’s ability to go all over the place and pick software from the sky to build into an application that could be used in a business context.
What you're saying sort of conforms to that, in that you come up with a trusted set of applications or a trusted set of vendors or sources from where you can get application functionality, and an attempt to enforce that.
Gardner: It strikes me that this is a slippery slope, if people start using mashups. That includes the more defined and traditional developer using it through governance and vetting it properly with command and control, as well as across a spectrum of project-level, third-party developers, and even into department-level folks who are not developers per se. The slippery slope is that, suddenly more of the functionality of what we consider an application would be coming through these mashups and services, and perhaps increasingly from outside the organization.
Therefore, the people who are providing the current set of internal services and/or traditional application functionality need to be thinking, "Shouldn’t I be out there on the wire with a trusted set?" We're already seeing Microsoft move in this direction with its Windows Live. We're seeing Google now putting packaging around business-level functionality for services. Salesforce.com is building an ecology, not only of its own services, but creating the opportunity for many others to get involved -- you could call them SaaS ISVs, I suppose.
And I don’t think it’s beyond the realm of guesswork that Oracle and SAP might need to come up with similar levels of business application services that create what would be used as mashups that can be trusted to be used in conjunction with their more on-premises, traditional business applications. Does anyone else see any likelihood in this sort of a progression? I’ll throw it out.
Linthicum: There's a huge likelihood of that coming up. People are moving to use interface-based applications through software-as-a-service. All you have to do is look at the sales of Salesforce.com to see how that thing is exploding. And, they are migrating over to leveraging services to basically mix-and-match things at a more granular level, instead of taking the whole application interface and leveraging it within your enterprise. This is what I call "outside-in" services. I wrote about that three years ago.
People are going to focus on that going forward, because it just makes sense from an economic standpoint that we leverage the best-of-breed services, which typically aren’t going to be built within our firewall. We don’t want to pay for those services to be built, but they're going to be built by the larger guys like Salesforce.com, Google, and Microsoft. It's going to be a slow evolution over time, but I think we are going to hit that inflection point, where suddenly people see the value. It’s very much like we saw the value in the web in the early '90s -- that it really makes sense not only to distribute content that way, but distribute functional application behavior that way.
Gardner: Thanks, Dave. Any contrarians out there? Does anyone think that this back-to-the-future, in terms of the major players stepping up and providing best-of-breed services, is not likely?
Kobielus: Well, I think it's likely. But the fact is that, given the accessibility of this technology, it will also encourage independent startups to provide unique new services that may fall between the cracks. It’s the classic long tail here.
Gardner: I’ll be contrarian in this, because I don’t think that these sets of players, with the possible exception of Google and Salesforce.com, are going to be interested in having this occur sooner. They would rather have it come later, because their on-premises, licensed software businesses are far more profitable, and it gives them a more entrenched position with the client and the account than these mashups. Those can be switched in or out quite easily, and are either free or monetized through advertising or in a subscription fee format that is still not nearly as profitable for them in the long run as an on-premises, licensed affair.
Does this notion of the business model, rather than the technology model or the branding model, change anyone’s opinion on the speed in which this happens? Do we need to have a small group of interlopers that comes in and actually forces the hands of the larger players into this mode?
Garone: Dana, I’ll take that. There clearly has to be a business reason for these major players to do it, and the two that I see are, one, that the functionality that they're making lots of money off of is suddenly available as a mashup at little or no cost, in which case they have got to deal with that. The other is to be able to add interesting functionality to their existing products in order to be more competitive with the other enterprise app players out there. Other than that, you're right. There has to be a stimulus from the business standpoint to get them to actually jump into this.
Gardner: Any other thoughts on the pressure in the marketplace and in terms of business and cost?
Linthicum: If they don’t do it, somebody else is going to come up and do it for them. Look at the pressures that Salesforce.com has put on the CRM players in the marketplace. It’s a similar type of market transition. Salesforce.com was never an internal enterprise player, and yet look at their revenues in contrast to the other CRM guys that are out there. The same thing is going to occur in this space. They are either going to step up and provide the new model, or they're just going to get stomped as people run over them to get to the players that will do it.
Gardner: Yeah, Dave, I agree, especially with Google. They’ve got a market cap of $144 billion, and a portion of that market cap depends on how well Google can sell business services to businesses. That’s going to put pressure on the traditional players, right?
Linthicum: Yeah. Google is moving aggressively in that space, and I think they're going to not only provide their own services, but they're going to broker services that they validate and basically recast.
Gardner: And that’s governance isn’t it?
Linthicum: It is going to be governance. You are going to see some aggregators out there. Right now, you’re seeing guys like StrikeIron, which is a small company, but they aggregate services. They are basically a brokerage house for services they control, validate, and make sure they are not malicious. Then, you rent the services from them, and they in turn pay the service provider for providing the service. I think Google is going to go for the same model.
Gardner: It’s about trust ultimately, right?
Linthicum: It’s about trust ultimately. If I were a consultant with an organization and my career was dependent on this thing being a success, I'd be more likely to trust StrikeIron and Google than some kind of a one-off player who has a single service which is maintained in someone’s garage.
Gardner: So that notion of a cottage industry for some little developer out there creating their own widget probably still isn’t going to happen, huh?
Linthicum: It will. What’s going to happen is that they are going to do so through brokerage -- guys like Google. I don’t think Google is going to take a whole lot of money. They're going to take the normal pennies per transaction, and you will see millionaires made in a few months -- people who are able to stand up killer services that Google and guys like StrikeIron are able to broker out to those who are setting up SOAs. Then, suddenly, they are going to find themselves a hit, very much like we’re seeing the Web 2.0 hits today.
Gardner: We have Google AdWords and AdSense. So, soon we should have "ServicesSense"?
Linthicum: Right, and everybody in that space, whether they say it or not, is building that in the back room right now. They know that’s coming.
Baer: I was just going to add that StrikeIron really has an interesting business model. I have spoken with Bob Brauer, the CEO of StrikeIron, several times. Their message is that there is going to be this marketplace out there. They are looking at SOA and services -- perhaps Web 2.0 and mashups may come into play as well -- but the notion is that, rather than having corporations worry about building their own internal functionality, they can go out to some kind of marketplace and get the best deal for the functions and types of services they need. Your typical corporation may be run on a combination of internally built services and externally brokered services.
Linthicum: When I was CTO at Grand Central, we had a few companies that were run entirely on external services -- these new startups. They did all their accounting, their sales management, and everything else through external services. That’s probably too much for the larger Global 2000 to bite off right now, but there is going to be a functional changeover. As time goes on, they are going to use more external services than ever before.
Gardner: "Free" is a compelling rationale. All you have to do is look at a little text ad associated with the service and that page for the service and the provisioning and governance of the service becomes fairly compelling, right?
Linthicum: Absolutely.
Gardner: Well, thanks very much. That was an interesting discourse on this whole notion of mashups, SOA, and how it might evolve in the marketplace. For the last 10 minutes today, let’s discuss the deal announced this week whereby Oracle is going to acquire Hyperion for $3.3 billion, bringing the possibility of more analytics and business dashboard functionality into the growing Oracle stable. I believe this must be their tenth or twelfth acquisition since 2002.
Jim Kobielus, you’re data-centric in your studies and research. Does the fit between Hyperion and Oracle make sense to you?
Kobielus: It makes sense knowing Oracle. First of all, because [Oracle Chairman and CEO] Larry Ellison has been very willing in the past to grab huge amounts of market share by buying direct competitors like PeopleSoft, Siebel, and so forth, and managing multiple competing brands under the same umbrella -- and he is doing it here. A lot of the announcement from Oracle regarding this acquisition glossed over the fact that there are huge overlaps between Oracle’s existing product lines and Hyperion’s in pretty much every category, including the core area that Hyperion is best known for, which is financial analytics or Corporate Performance Management (CPM). Oracle itself provides CPM products for CFOs that do planning, budgeting, consolidation, the whole thing.
Hyperion is a big business intelligence (BI) vendor as well, and Oracle has just released an upgrade to its BI suite. You can go down the line. They compete in master data management (MDM) and data integration, and so forth. The thing that Oracle is buying here first and foremost is market share to keep on catapulting itself up into one of the unchallenged best-of-breed players in business intelligence, CPM and so forth. Oracle bought the number one player in that particular strategic niche, financial CPM, which is really the core of CPM -- the CFOs managing the money and the profitability.
It’s a great move for Oracle, and it definitely was an inevitable move. There will be continuing consolidation between the best-of-breed, pure-play data management players, such as Hyperion and a few others in this space, which are Business Objects and Cognos. They will increasingly be acquired by the leading SOA vendors. Look at the SOA vendors right now that don’t have strong BI or strong CPM, and look at the pure-plays that have those tools. The SOA vendors that definitely need to make some strategic fill-in acquisitions are IBM, Microsoft to a lesser degree, BEA definitely, and a few others, possibly webMethods. And, look at the leading candidates. In terms of CPM and BI and a comprehensive offering, they are down to three: Business Objects, Cognos, and SAS.
Now, SAS's Jim Goodnight has been doing it for over 30 years. It’s a great company, growing fast, with very loyal customers. The company is very private, stubbornly private, and I think they want to stay that way. So, I don’t think they are on the blocks in terms of being an acquisition candidate. But Business Objects and Cognos definitely are in play. So, it’s just a matter of time before both of those vendors are scooped up by some of the leading SOA vendors.
Gardner: So, Oracle has created a little bit of an auction atmosphere? Joe McKendrick, what's your take on this? You’re also a data personage.
McKendrick: Either Neil Macehiter or Neil Ward-Dutton, one of the Neils, mentioned on a couple of occasions that Oracle really isn’t playing up its database strengths. Lately, a lot of the activity, a lot of its announcements, and a lot of its acquisitions have been focused on the fusion, the middleware. And this [Hyperion buy] is definitely a play to its strength in the database market. Jim made some great observations, and there are a lot of overlaps. My sense is that Oracle is buying a huge, prominent customer base as part of the acquisition.
Gardner: Even though there is overlap in customer base and in some functionality, isn’t there the ability to integrate on an analytics basis by extracting value from data, rather than providing the data services themselves and/or a business application set? Doesn’t that make for an integrated approach that they could bring these two perhaps overlapping product categories together easier in this category, than they would either in database or business applications?
Garone: Yeah, Dana, I think that’s correct; and I also agree that this is less about database and more about middleware and fusion and building up that software stack. Oracle has clearly got an eye on doing that. This kind of an acquisition in the short term is always a double-edged sword, for Oracle especially. If any of you have been to some of their events as an analyst, you've seen what they’ve gone through in convincing the analyst community that they're going to be able to both support all the customer bases of the products they acquire and integrate things well into their stack ...
Gardner: And they did seem to do a pretty good job at that between J.D. Edwards, PeopleSoft and Siebel, right? There wasn’t the big brouhaha in the installed base that some people were expecting.
Garone: Right. And that turned out to be true in those cases. It remains to be seen, of course, what will happen here, but it’s always a short-term hurdle that Oracle has to get over, both in terms of perception as well as the actual integration process and business model process. Again, this is really very promising, if Oracle pulls it off. But to me it’s really about their bigger picture of taking what they call Fusion middleware out beyond just middleware to the applications themselves, and essentially creating an entire integrated stack of software.
Gardner: How about you, Dave Linthicum? Do you believe that these services and analytics and creating business insight into operations are an essential part of SOA, as Jim Kobielus believes?
Linthicum: Absolutely. In fact, if you look at my stack, which is actually on Wikipedia right now, one of the things I have on top is Business Activity Monitoring (BAM) and analysis, because once you have those points of service -- both the behavioral visibility and also information visibility into all these different points, and you create these abstraction layers on top of it -- you have a great opportunity to actually monitor your business in real-time. And you have the ability not only to monitor it in real-time, but you can actually go back historically to see how what you are doing now relates to what you did in the past.
A lot of businesses can benefit from that. It's key technology. Oracle did the right thing strategically, and I think this stuff is going to be a necessity going forward for SOA, and it’s a necessity for business going forward as well. It’s one of the things where, if you look at the business, it’s just so huge, but you just don’t hear about it anymore.
Gardner: So, we're saying that the feedback loop becomes more essential for SOA, and that these BI tools are essential ingredients in creating a near real-time feedback loop, as well as a historical perspective feedback opportunity to then fine-tune your SOA, perhaps through a policy-driven governance capability?
Linthicum: Right. Fine-tune your SOA by fine-tuning your processes. I can imagine the potential here. I can see not only the health of my business, but also how my business produced things in the past, or how things were done in the past and how that relates to what I'm doing right now. I even have a rules engine, which is part of my SOA, to make adjustments automatically to things that I know will have a positive effect on my business processes. You can get to this automatic state, which is hugely valuable for these large, process-intensive companies.
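The loop Dave describes -- a live metric, a historical baseline, and a rule that triggers an automatic adjustment -- can be sketched in a few lines. This is an illustrative toy, not any particular BAM product: the latency figures, the threshold rule, and the "scale_up" action are all invented for the example.

```python
from statistics import mean

def check_rule(recent_latencies, historical_baseline, tolerance=1.5):
    """Compare a live service metric against its historical baseline.

    Returns the adjustment a rules engine might trigger when the
    real-time average drifts past the tolerated multiple of history.
    """
    current = mean(recent_latencies)
    if current > historical_baseline * tolerance:
        return "scale_up"  # hypothetical automatic adjustment
    return "ok"

# Within tolerance: average 120 against a baseline of 100.
print(check_rule([110, 120, 130], historical_baseline=100))  # → ok
# Drifted: average 210 against the same baseline.
print(check_rule([200, 220, 210], historical_baseline=100))  # → scale_up
```

The point of the sketch is the comparison against history: the rule fires not on an absolute number but on drift from what the business did in the past, which is the "go back historically" capability Dave mentions.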
Gardner: The last word goes to Tony Baer. Do you see analytics as being as important as some of our other guests do?
Baer: Well, put it this way. Analytics is the necessary icing on the cake. All the other pieces tell you what you are doing, and the classic question of analytics tells you why. A lot of folks look at this as an extension to the database business. I see this as an extension of the applications business.
SAP, for example, has had BI for a number of years. Oracle has had some limited analytics starting back with the acquisition almost a decade ago of, I think it was, IRI Express as an OLAP database. Now, that was merely an extension to the database business, but if you look at how this is really going to end up playing out, it’s not that customers are looking for another database to just slice-and-dice their data. They are looking for a way to look at their business processes, which are represented through their application stacks and think, "How are we doing?" So, it’s a logical add-on to that.
In terms of concerns about customers getting dissatisfied when Oracle comes in, the fact is that in ERP, just as in database, it’s a foundation buy. The fact is that regardless of what your personal feelings are about Larry Ellison, that technology is entrenched in the organization. The pain of migrating from it is greater than just sticking with it. Oracle has also improved its track record in terms of trying to be a little more customer friendly. It still has plenty of work to do. So, in the long run, I don’t see a lot of migration here, and I see this as being a very logical add-on in the apps business.
Gardner: Yeah, I agree. Strategically, this has a lot to do with the business applications. Do you think that this puts significant pressure on SAP?
Baer: It puts some pressure on SAP. I wouldn’t be surprised to see them make a play for one of the other two big ones. I also expect IBM to play in there, because even though IBM says it’s not in the apps business, the fact is that they do have products like master data management.
Gardner: And a lot of BI, too.
Baer: Exactly. And actually what's really ironic about all this is that years ago, IBM and Hyperion had a very close relationship, bordering almost on acquisition. I'm surprised that IBM never went the last mile and acquired them. It would make sense for them to make a move with one of the other players today.
Gardner: Interesting. Well, thanks very much. I want to go through our group for today, and we appreciate all your input. There’s Steve Garone, Joe McKendrick, Jim Kobielus, Tony Baer, and Dave Linthicum. We appreciate your joining us. I hope you come back. This is Dana Gardner, your producer, host and moderator here at BriefingsDirect SOA Insights Edition. Please come back and join us again next week.
If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts or to become a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.
Listen to the podcast here.
Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 13. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Vol. 13, a weekly discussion and dissection of Services Oriented Architecture (SOA) related news and events with a panel of industry analysts and guests. I am your host and moderator Dana Gardner, principal analyst at Interarbor Solutions, ZDNet blogger and Redmond Developer News magazine columnist.
Our panel this week -- and that is the week of Feb. 26 2007 -- consists of Steve Garone, a former IDC group vice president, founder of the AlignIT Group, and an independent industry analyst. Welcome back to the show, Steve.
Steve Garone: Thanks, Dana. Great to be here.
Gardner: Also, joining us once again, Joe McKendrick, a research consultant, columnist at Database Trends and a blogger at ZDNet and ebizQ. Welcome back, Joe.
Joe McKendrick: Good morning, Dana.
Gardner: Also joining us, Tony Baer, principal at OnStrategies, and blogger at Sandhill.com and ebizQ. Thanks for joining, Tony
Tony Baer: Hi, Dana.
Gardner: And also once again, joining us is Jim Kobielus. He is a principal analyst at Current Analysis.
Jim Kobielus: Hi, Dana. Hi, everybody.
Gardner: Joining us for the first time, and we welcome him, Dave Linthicum. He is CEO at the Linthicum Group, an SOA advisory consulting firm. Dave also writes the Real World SOA blog for InfoWorld and is the host of the SOA Report podcast, now in its third year. He is also a software as a service (SaaS) blogger for Intelligent Enterprise, and has a column on SOA topics for Web Services Journal. Welcome, Dave.
Dave Linthicum: It is great to be here.
Gardner: We are going to have a couple of meaty, beefy topics today on the SOA and, interestingly enough, Enterprise 2.0 arena. We are going to be discussing and defining the concept around "mashup governance." We are also going to discuss some merger and acquisition news this week, with a deal announced between Hyperion and Oracle, whereby Oracle will acquire Hyperion for $3.3 billion.
First off, let's go to this subject of "mashup governance." Dave, I believe you defined this to a certain extent in a recent blog, and I wanted to give you the opportunity to help us understand what you mean by "mashup governance" -- and why it’s important in an Enterprise 2.0 environment, and perhaps what the larger implications may be for SOA.
Linthicum: Sure. Thank you very much. That was a feature article, by the way, that InfoWorld sponsored, and it’s still up on their website. It basically talked about how mashups and SOA are coming together, since they are mashing up. People are becoming very active in creating these ad-hoc applications within the enterprise, using their core systems as well as things like Google Maps and the Google APIs, services stood up by Yahoo! and Salesforce.com, and all these other things that are mashable. There's a vacuum and a need to create a governance infrastructure to not only monitor and track these mashups, but also learn to use them as a legitimate resource within the enterprise.
Right now, there doesn’t seem to be a lot of thinking or products in that space. The mashup seems to be very much like a Wild West, almost like rapid application development (RAD) was 15 years ago. As people are mashing these things up, the SOA guys, the enterprise architecture guys within these organizations are coming behind them and trying to figure out how to control it.
Gardner: An element of control to an otherwise ad hoc and loosey-goosey approach to creating Web services-based UIs and portal interfaces?
Linthicum: That’s absolutely right. Ultimately these things can become legitimate and very valuable applications within the enterprise. I have a client, for example, that has done a really good job in mashing up their existing sales tracking system, inventory control system, and also delivery system with the Google Maps API. Of course everybody and their brother uses that as a mashup example, but it's extremely valuable.
We are able to not only provide maps to do the best routing for delivery, but also Google Maps right now has traffic reports. So, they can give these to the truck drivers and delivery agents at the beginning of the day, and productivity has gone up 25 percent. Over a year, that is going to save them more than $1.5 million. And, that’s just a simple mashup that was done in a week by a junior developer there. Now, they are trying to legitimize that and put it back into their SOA project, along with other external APIs. They are in there trying to figure it out.
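The shape of a mashup like the one Dave describes -- internal delivery data combined with an external maps or traffic API -- can be sketched in a few lines. This is a minimal illustration, not the client's actual system: the depot, the stop names, and the canned travel times are hypothetical, standing in for what a real call to a directions/traffic API such as Google Maps would return.

```python
# Internal data + external API mashup sketch: order today's delivery
# stops by travel time from the depot. A real mashup would call a
# maps/traffic API inside fetch_travel_minutes; canned values keep
# this example self-contained.

DEPOT = "warehouse"  # hypothetical internal depot identifier

def fetch_travel_minutes(origin, destination):
    # Stand-in for an external directions/traffic API call.
    canned = {
        ("warehouse", "stop_a"): 12,
        ("warehouse", "stop_b"): 7,
        ("warehouse", "stop_c"): 25,
    }
    return canned[(origin, destination)]

def plan_route(stops):
    """Order delivery stops nearest-first using external travel times."""
    return sorted(stops, key=lambda s: fetch_travel_minutes(DEPOT, s))

print(plan_route(["stop_a", "stop_b", "stop_c"]))
# → ['stop_b', 'stop_a', 'stop_c']
```

The value is exactly what the example shows: a few lines of glue between data the enterprise already owns and a service it doesn't, which is why a junior developer can produce something useful in a week.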
Gardner: So perhaps through this notion of combining what is available on an internal basis -- either as a Web service or moving toward SOA -- the enterprises can also start tapping into what is available on the Web, perhaps even through a software-as-a-service relationship or license, and put together the best of internal data content process as well as some of these assets coming off of the Web, whether it is a map, an API, or even some communications, groupware, or messaging types of functions.
Linthicum: That's the best way I've ever heard it put. Absolutely. That’s the way it’s coming forward, as we are building these SOAs within these enterprises today. We have the added value of being able to see these remote services, deal with these remote APIs, and bring that value into the organization -- and that’s typically free stuff. So, we are using applications that we are gaining access to, either through a subscription basis in the case of Salesforce.com -- and they are, by the way, hugely into the mashups that are coming down the pipe -- free services that we are getting from Google, or even services that cost very little.
Putting those together with the existing enterprise systems breathes new life into them, and we can basically do a lot of things faster and get applications up and running much faster than we could in the past. Ultimately, there is a tremendous amount of value for people who are using the applications within these environments. Typically, it’s the mid-market or the mid-sized companies that are doing this.
Gardner: Or even department levels in larger companies that don’t need to go through IT to do this, right?
Linthicum: That’s right. Absolutely. That’s how Salesforce.com got started. In other words, people were buying Salesforce.com with their credit cards and expensing it, working around IT. We are seeing the same movement here. It's happening at the grassroots level within the department, and it's moving up strategically within the IT hierarchy.
Gardner: Okay, so it sounds straightforward: a good productivity boost, moving toward the paradigm of mashable services. Why do we need governance?
Linthicum: Well, you really need a rudimentary notion of governance when you deal with any kind of application or service that works within the organization. Governance is a loaded word. If you go to the Enterprise Architecture Conference -- and I am speaking at it at the end of this month in New Orleans -- they consider governance a management practice. It’s running around knocking people on their heads if they are not using the correct operating systems, databases, those sorts of things. In the SOA world, as Joe McKendrick can tell you, it's about a technical infrastructure to monitor and control the use of services. Not only is it about control, but it is about productivity. I can find services. I can leverage services, and they are managed and controlled on my behalf. So, I know I am not using something that’s going to hurt me.
The same thing needs to occur within the mashup environment. When mashing up, there are lots of services that we don’t control or that exist outside on the Internet. It's extremely important that we monitor these services in a governance environment, that we catalogue them, understand when they change, and have security systems around them, so they don’t end up hurting productivity or our existing IT infrastructure. We don’t want to take one step forward and two steps back.
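One concrete form the "catalogue them, understand when they change" requirement can take is a small registry that records a fingerprint of each external service's contract (say, its WSDL or schema text), so a change at the provider is flagged before it silently breaks consumers. This is a minimal sketch under stated assumptions, not any vendor's governance product: the service name and contract strings are hypothetical.

```python
import hashlib

class ServiceCatalog:
    """Catalogue external services and detect contract changes."""

    def __init__(self):
        self._fingerprints = {}

    @staticmethod
    def _fingerprint(contract_text):
        # Hash the contract text so any change, however small, is visible.
        return hashlib.sha256(contract_text.encode("utf-8")).hexdigest()

    def register(self, name, contract_text):
        self._fingerprints[name] = self._fingerprint(contract_text)

    def has_changed(self, name, current_contract_text):
        # True if the provider's contract differs from what we catalogued.
        return self._fingerprints[name] != self._fingerprint(current_contract_text)

catalog = ServiceCatalog()
catalog.register("maps-api", "<wsdl version='1'/>")   # hypothetical contract
print(catalog.has_changed("maps-api", "<wsdl version='1'/>"))  # → False
print(catalog.has_changed("maps-api", "<wsdl version='2'/>"))  # → True
```

A real registry would also carry the security and monitoring metadata Dave mentions, but the core governance move is the same: record what you depend on, and notice when the outside world changes it.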
Gardner: I read your blog in response to this, Jim Kobielus, and you seem to think that bringing too much governance to this might short-circuit its value -- that it’s the loosey-goosey, ad-hoc nature that brings innovation and productivity. Do you think that what we think of as traditional SOA governance is too rigid and strict and requires some interaction with IT? Or are we talking about some other kind of governance, when it comes to mashups?
Kobielus: Well, Dave made that same point in his article, which is that the whole notion of mashups is half-way to anarchy, as it were, creative anarchy. In other words, empowering end-users, subject-matter experts, or those who simply have a great idea. They typically slap together something from found resources, both internal and external, and provision it out so that others can use it -- the creative synthesis.
This implies that governance in the command-and-control sense of the term might strangle the loosey-goosey goose that laid the golden egg. So, there is that danger of over-structuring the design-time side of mashups to the point where it becomes yet another professional discipline that needs to be rigidly controlled. You want to encourage creativity, but you don’t want the mashers to color too far outside the lines.
Dave hit the important points here. When you look at mashup governance, you consider both the design-time governance and the run-time governance. Both are very important. In other words, if these mashups are business assets, then yes, there needs to be a degree of control, oversight, or monitoring. At the design-time level, how do you empower the end-users, the creative people, and those who are motivated to build these mashups without alienating them by saying, "Well, you've got to go to a three-week course, you've got to use these tools, and you've got to read this book and follow these exact procedures in order to mash up something that you want to do?" That would clearly stifle creativity.
I did a special section on SOA for Network World back in late 2005. I talked to lots of companies about best practices and use cases for design-time SOA governance, and the ones that I found most interesting were companies like Standard Life Assurance of Scotland. What they do is provide typical command-and-control governance at design time, but they also provide and disseminate through the development teams a standard SOA development framework, a set of tools and templates, that their developers are instructed to use. It's simply the broad framework within which they will then develop SOA applications.
What I am getting at here is that when you are dealing with the end users who build the mashups, you need to tell them, “Okay, we want you very much to be creative in putting things together, but here is a tool, an environment, or enabling technology that you can use to quickly get up to speed and begin mashing up various resources. We, the organization that employs you, want you, and strongly urge you, to use these particular tools if you wish your mashups to be used far and wide within the organization.
"If you wish to freelance it internally, go ahead, but that doesn’t mean we are necessarily going to publish out those mashups so that anybody can see them. It means we are not necessarily going to support those mashups over time. So, you may build something really cool and stick it out there, but nobody will use it and ultimately it won’t be supported. Ultimately, it will be a failure, unless you use this general framework that we are providing."
Gardner: I think we need to re-examine some of these definitions. I'm not sure what we are talking about with mashup governance is either "run time" or "design time." It strikes me as "aggregation time." Perhaps we don’t even need to use existing governance and/or even federate to existing governance. Perhaps it's something in the spirit of Web 2.0 and Enterprise 2.0, as simple as a wiki that everyone can see and contribute to, saying, “Here is how we are going to do our mashups for this particular process."
Let’s say, it is a transportation process, "Here are the outside services that we think are worthwhile. Here are the APIs, and here is a quick tutorial on how to bring them into this UI." Wouldn’t that be sufficient? Let us take that over to Steve Garone.
Garone: I am going to push back on that a little bit. What we are wrestling with here is achieving a balance between encouraging creativity and creating new and interesting functionality that can benefit business, and keeping things under control. The best way to look at that balance is to understand what the true risks are.
The way I see it, there are several major areas. The first has to do with what I call external liability, meaning that if you, for example, publish a mashup to a customer base that has a piece of functionality you got off the web, and for some reason that has wrong information and does the customers some harm, who is responsible for that? How are you going to control whether that happens or not? The second has to do with what I call internal risk, which is about making available to the outside world information that is sensitive to your organization. In that case, a little more than what you described is going to be necessary, and organizations can also leverage some of the governance infrastructure that people are building generally and relative to SOA.
Gardner: So, you are thinking that these mashups would be available not only to an internal constituency in the organization but across its users, its visitors, and the public?
Garone: Absolutely. Well, I think they can be, and I think there will be organizations and groups within organizations who will want to do that, driven primarily by the business opportunities that it can afford.
Gardner: But, if this is the general public accessing some of these mashups, wouldn’t the risk that they would take accessing the individual services on the web on their own be sufficient? Why would you need to be concerned about liability or other risk issues when these are already publicly facing APIs and services and so forth?
Garone: Conceptually, you wouldn’t, but we all know that in this world anybody can sue for anything, and the reality is that if I go to a company’s website and use a function that incorporates something that they grabbed off the web, and it does me harm, the first place I am going to look is the site that I went to in the first place.
Gardner: Well, you might have stumbled upon the category here that will warm the cockles of many lawyers’ hearts -- mashup risk and assessment.
Garone: Exactly. And, it's one of the problems that governance in general attempts to solve. So, it is relevant here. My bottom-line point is that achieving balance is going to involve some careful consideration of what the true risks are. Maybe resolving that involves a combination of the kinds of solution that you just talked about in some cases. In other cases, they are going to have to leverage the governance infrastructure that exists in other areas within a company.
Gardner: Your point is well taken. This is business, it is serious, and it needs to be considered and vetted seriously -- if it is going to be something that you are using for your internal employees’ use, as well as if it becomes public-facing. How would you come down on this, Joe McKendrick? Do you see the balance between something as unstructured as a blog or wiki being sufficient, or do we need to bake this into IT, get policies and governance, and take six years to get a best practices manifesto on it?
Garone: I did not recommend that, Dana.
Gardner: I know. I'm going from one extreme to the other.
McKendrick: If we do it in two years, that would be fine. But what I’d love to know is, what exactly is the difference between a mashup and a composite application that we have been addressing these past few years within the SOA sphere? The composite application is a service-level application or component that draws in data from various sources, usually internal to the organization, and presents that through a dashboard, a portal, or some type of an environment. It could be drawn from eight mainframes running across the organization.
Obviously, the governance that we have been working so hard on in recent years to achieve in SOA is being applied very thoroughly to the idea of composite applications. Now, what is the difference between that and a mashup? Other than the fact that mashups may be introducing external sources of data, I really don’t see a difference. Therefore, it may be inconsistent to "let a thousand flowers bloom" on the mashup side and have these strict controls on the composite application as we have defined in recent years.
Linthicum: The reality is that there is no difference. You are correct, Joe, and I point that out in the article as well. There are really two kinds of mashups out there: the visual mashups, which are what we are seeing today, where people are taking basically all of these interface APIs and using the notions of AJAX and other rich, dynamic clients, and then binding them together to form something that is new.
The emerging mashups are non-visual. It's basically analogous to, though not exactly the same as, traditional composite applications that are -- if you can call them traditional -- in the SOA realm today. They have to be controlled, managed, governed, and developed in much the same way.
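To make Dave's distinction concrete, here is a minimal sketch of the non-visual kind of mashup he describes: two independent services composed into a new one, which is essentially what a composite application does. The service names and data are invented for illustration; a real mashup would call external HTTP APIs instead of local stand-ins.

```python
# A minimal sketch of a non-visual mashup: two independent services
# are composed into a new, combined service. The services here are
# stand-ins for external APIs, with invented data.

def shipment_status_service(order_id):
    # Stand-in for an internal logistics service.
    return {"order_id": order_id, "status": "in transit", "city": "Memphis"}

def weather_service(city):
    # Stand-in for an external, public weather API.
    return {"city": city, "forecast": "storms", "delay_risk": "high"}

def delivery_outlook_mashup(order_id):
    """Composite 'mashup' service: joins shipment data with weather data."""
    shipment = shipment_status_service(order_id)
    weather = weather_service(shipment["city"])
    return {
        "order_id": shipment["order_id"],
        "status": shipment["status"],
        "weather": weather["forecast"],
        "delay_risk": weather["delay_risk"],
    }

print(delivery_outlook_mashup("A-1001"))
```

The composition step is the same whether the output feeds a browser widget (visual mashup) or another service (non-visual), which is why the governance questions overlap so heavily with composite applications.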
Kobielus: There is a difference here. I agree with what Dave just said that mashups are not qualitatively different from composite apps, but there is a sort of difference in emphasis, in the sense that a mashup is regarded as being more of a user-centric paradigm. The end-user is empowered to mash these things up from found resources.
It relates to this notion that I am developing for a piece on user-centric identity as a theme in the identity management space. The whole Web 2.0 paradigm is user-centric -- users reaching out to each other, building communities, sharing files, and so on. Mashing up stuff and then posting it all to their personal sites is very much a user-centric paradigm.
There's another observation I want to make. I agree that the intellectual property lawyers are starting to salivate over mashups that invade or encroach on their clients’ rights. Actionable mashups are good from a litigator’s point of view. In terms of governance then, organizations need to define different mashup realms that they will allow. There might be intra-mashes within their Intranet -- "Hey, employee, you can mash up all manner of internal resources that we own to your heart’s delight. We will allow intra-mashes, even extra-mashes within the extranet, with our trusted partners. You can mash up some of their resources as well, whatever they choose to expose within the extranet. And then, in terms of inter-mash or Internet-wide mashing, we’ll allow some of it. You can mash Google. You can mash the other stuff of the folks who are more than happy to let you mash. But, as an organization, your employer, we will monitor and block and keep you from mashing up stuff that conceivably we might be sued for."
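Jim's realm-based policy can be expressed very simply in code. The sketch below classifies a source a user wants to mash into his three realms and decides whether to allow it. The domain lists are invented examples, not a real policy.

```python
# Sketch of realm-based mashup governance: classify each source by
# realm (intra-mash, extra-mash, inter-mash), then allow or block.
# The domain lists below are invented for illustration.

INTRANET_DOMAINS = {"hr.internal.example.com", "finance.internal.example.com"}
EXTRANET_DOMAINS = {"partner-a.example.net"}
APPROVED_INTERNET = {"maps.googleapis.com"}  # sources happy to be mashed

def classify_realm(domain):
    if domain in INTRANET_DOMAINS:
        return "intra-mash"
    if domain in EXTRANET_DOMAINS:
        return "extra-mash"
    return "inter-mash"

def is_mash_allowed(domain):
    realm = classify_realm(domain)
    if realm in ("intra-mash", "extra-mash"):
        return True                      # trusted realms: mash freely
    return domain in APPROVED_INTERNET   # Internet-wide: allow-list only

print(is_mash_allowed("maps.googleapis.com"))        # approved Internet source
print(is_mash_allowed("random-widgets.example.org")) # unknown source: blocked
```

The point of the sketch is that "monitor and block" for inter-mashes reduces to an allow-list check, while the trusted realms inherit permission wholesale.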
Gardner: So you could take six years and require a manifesto. Thank you, Jim Kobielus. Tony Baer, let's take it to you. Do you see this as a problem in terms of the governance, or should we keep it loosey-goosey? Should we not get into the structure, and do you think that -- to Jim’s point -- a mashup is conceptually different from a composite application because of the user-centric, user-driven, keep-IT-out-of-it aspect?
Baer: We've got a couple of questions there. I’ll deal first with the technical one, which is that composite apps and mashups are basically trying to do the same thing, but they're doing it in different ways. Composite apps, at least as I've understood the definition, came out of an SOA environment. That implies some structure there, whereas mashups essentially emerged with Web 2.0 and AJAX-style programming, which lets anybody do anything anywhere with this very loosely structured scripting language. There are practically no standards in terms of any type of vocabulary.
So, there is a bit of a "Wild West" atmosphere there. As somebody else said, you really need to take a two-tiered approach. On one hand, you don’t want to stifle the grassroots innovation, a kind of skunk-works approach. Having a walled garden there, where you're not going to be doing any damage to the outside but you are going to promote collaboration internally, probably makes some sense. On the other hand, even if the information did not originate from your site, if you're retransmitting it there is going to be some implication that you are endorsing it, at least by virtue of it coming under your logo or your website.
Gardner: Yeah, the perception of the user is going to be on you, regardless of the origins of the service.
Baer: Exactly. So, you need a tiered approach. I was taking a note here earlier. You really need to exert control on the sources of information. Therefore, for the types of information that are exposed internally -- for example something from an internal financial statement -- you need to start applying some of the rules that you've already developed around internal databases. Different classes of users have a right to know and to see it and, in some cases, some read-write privileges.
You need to apply similar types of principles at the source of information. Therefore, if I have access to this, it means implicitly that I can then mash it up, but you have to really govern it at the original point of access to that information, at least with regard to internal information. External information probably needs to go through the same type of clearance that you would exert for anything that goes out on the corporate website, the external website.
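Tony's point, that mashability should be governed at the original point of access, amounts to reusing the read/write rules already defined on each data source: if a class of users can read a source, mashing it is implied; otherwise the mashup is refused. A small sketch, with roles and sources invented for illustration:

```python
# Sketch: govern mashups at the source of information. A user may
# mash a source only if their user class already has at least read
# access to it. Roles, sources, and privileges are invented examples.

SOURCE_ACL = {
    "internal_financials": {"finance_analyst": "read-write", "cfo": "read-write"},
    "product_catalog":     {"finance_analyst": "read", "marketer": "read-write"},
}

def can_mash(user_role, source):
    """Mashability is implied by read access at the source."""
    privileges = SOURCE_ACL.get(source, {})
    return privileges.get(user_role) in ("read", "read-write")

assert can_mash("finance_analyst", "internal_financials")
assert not can_mash("marketer", "internal_financials")  # no read access
```

The design choice here is that no separate "mashup permission" exists at all; the existing database-style access rules are the single point of control, which is exactly the reuse Tony describes.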
Gardner: So, your existing policies and access privileges, your federated ID management brought up to a policy level, will all play into this, and could help strike the right balance.
Baer: Well, put it this way, it’s a step toward that direction.
Gardner: I want to offer another possibility here. I was thinking about the adage that nobody was ever fired for buying IBM, which was a common saying not that long ago. What if we were to take that same mentality and apply it here -- that if you're going to do mashups, make sure they are Windows Live mashups, or Google mashup services, or maybe Salesforce.com? So, is there an opportunity on the service provider side to come up with a trusted set of brands that the IT people and the loosey-goosey ad-hoc mashup developers could agree to use widely, rallying around a particular set of de facto industry-standard services? That would perhaps be the balance we're looking for.
What do you think about that, Steve?
Garone: That can certainly be a realistic part of how it’s done, and it gets back to something someone mentioned earlier about composite applications. We talked about the similarities and the differences. One of the differences I see is when I think back to when people started building applications from software components. There was a flood of products put on the market to manage that process -- cataloguing trusted components into libraries, with descriptions and APIs that conform to standards -- to try to rein in people’s ability to go all over the place and pick software out of the sky to build into an application that could be used in a business context.
What you're saying sort of conforms to that, in that you come up with a trusted set of applications or a trusted set of vendors or sources from where you can get application functionality, and an attempt to enforce that.
Gardner: It strikes me that this is a slippery slope, if people start using mashups. That includes the more defined and traditional developer using it through governance and vetting it properly with command and control, as well as across a spectrum of project-level, third-party developers, and even into department-level folks who are not developers per se. The slippery slope is that, suddenly more of the functionality of what we consider an application would be coming through these mashups and services, and perhaps increasingly from outside the organization.
Therefore, the people who are providing the current set of internal services and/or traditional application functionality need to be thinking, "Shouldn’t I be out there on the wire with a trusted set?" We're already seeing Microsoft move in this direction with its Windows Live. We're seeing Google now putting packaging around business-level functionality for services. Salesforce.com is building an ecology, not only of its own services, but creating the opportunity for many others to get involved -- you could call them SaaS ISVs, I suppose.
And I don’t think it’s beyond the realm of possibility that Oracle and SAP might need to come up with similar levels of business application services, creating mashups that can be trusted to be used in conjunction with their more traditional, on-premises business applications. Does anyone else see any likelihood in this sort of a progression? I’ll throw it out.
Linthicum: There's a huge likelihood of that coming up. People are moving to use interface-based applications through software-as-a-service. All you have to do is look at the sales of Salesforce.com to see how that thing is exploding. And, they are migrating over to leveraging services to basically mix-and-match things at a more granular level, instead of taking the whole application interface and leveraging that within your enterprise. This is what I call "outside-in" services. I wrote about that three years ago.
People are going to focus on that going forward, because it just makes sense from an economic standpoint that we leverage the best-of-breed services, which typically aren’t going to be built within our firewall. We don’t want to pay for those services to be built, but they're going to be built by the larger guys like Salesforce.com, Google, and Microsoft. It's going to be a slow evolution over time, but I think we are going to hit that inflection point, where suddenly people see the value. It’s very much like we saw the value in the web in the early '90s -- that it really makes sense not only to distribute content that way, but distribute functional application behavior that way.
Gardner: Thanks, Dave. Any contrarians out there? Does anyone think that this back-to-the-future, in terms of the major players stepping up and providing best-of-breed services, is not likely?
Kobielus: Well, I think it's likely. But the fact is that, given the accessibility of this technology, it will also encourage independent startups, providing unique new services that may fall between the cracks. It’s the classic long tail here.
Gardner: I’ll be contrarian in this, because I don’t think that these sets of players, with the possible exception of Google and Salesforce.com, are going to be interested in having this occur sooner. They would rather have it come later, because their on-premises, licensed software businesses are far more profitable, and it gives them a more entrenched position with the client and the account than these mashups. Those can be switched in or out quite easily, and are either free or monetized through advertising or in a subscription fee format that is still not nearly as profitable for them in the long run as an on-premises, licensed affair.
Does this notion of the business model, rather than the technology model or the branding model, change anyone’s opinion on the speed in which this happens? Do we need to have a small group of interlopers that comes in and actually forces the hands of the larger players into this mode?
Garone: Dana, I’ll take that. There clearly has to be a business reason for these major players to do it, and the two that I see are, one, that the functionality that they're making lots of money off of is suddenly available as a mashup at little or no cost, in which case they have got to deal with that. The other is to be able to add interesting functionality to their existing products in order to be more competitive with the other enterprise app players out there. Other than that, you're right. There has to be a stimulus from the business standpoint to get them to actually jump into this.
Gardner: Any other thoughts on the pressure in the marketplace and in terms of business and cost?
Linthicum: If they don’t do it, somebody else is going to come up and do it for them. Look at the pressures that Salesforce.com has put on the CRM players in the marketplace. It’s a similar type of market transition. Salesforce.com was never an internal enterprise player, and yet look at their revenues in contrast to the other CRM guys that are out there. The same thing is going to occur in this space. They are either going to step up and provide the new model, or they're just going to get stomped as people run over them to get to the players that will do it.
Gardner: Yeah, Dave, I agree, especially with Google. They’ve got a market cap of $144 billion, and a portion of that market cap depends on how well Google can sell business services to businesses. That’s going to put pressure on the traditional players, right?
Linthicum: Yeah. Google is moving aggressively in that space, and I think they're going to not only provide their own services, but they're going to broker services that they validate and basically recast.
Gardner: And that’s governance isn’t it?
Linthicum: It is going to be governance. You are going to see some aggregators out there. Right now, you’re seeing guys like StrikeIron, which is a small company, but they aggregate services. They are basically a brokerage house for services that they control and validate, making sure they are not malicious. Then, you rent the services from them, and they in turn pay the service provider for providing the service. I think Google is going to go for the same model.
Gardner: It’s about trust ultimately, right?
Linthicum: It’s about trust ultimately. If I were a consultant with an organization and my career was dependent on this thing being a success, I'd be more likely to trust StrikeIron and Google than some kind of a one-off player who has a single service which is maintained in someone’s garage.
Gardner: So that notion of a cottage industry for some little developer out there creating their own widget probably still isn’t going to happen, huh?
Linthicum: It will. What’s going to happen is that they are going to do so through brokerage -- guys like Google. I don’t think Google is going to take a whole lot of money. They're going to take the normal pennies per transaction, and you will see millionaires that are made in a few months -- people who are able to serve up killer services that Google and guys like StrikeIron are able to broker out to those who are setting up SOAs. Then, suddenly, they are going to find themselves a hit, very much like we’re seeing the Web 2.0 hits today.
Gardner: We have Google AdWords and AdSense. So, soon we should have "ServicesSense"?
Linthicum: Right, and everybody in that space, whether they say it or not, is building that in the back room right now. They know that’s coming.
Baer: I was just going to add that StrikeIron really has an interesting business model. I have spoken with Bob Brauer, the CEO of StrikeIron, several times. Their message is that there is going to be this marketplace out there. They are looking at SOA and services, and perhaps Web 2.0 and mashups may come into play as well, but the notion is that rather than having corporations worry about building their own internal functionality, they can go out to some kind of marketplace and get the best deal for the functions and types of services they need. Your typical corporation may be run on a combination of internally built services and externally brokered services.
Linthicum: When I was CTO at Grand Central, we had a few companies that were run entirely on external services -- these new startups. They did all their accounting, their sales management, and everything else through external services. That’s probably too much for the larger Global 2000 to bite off right now, but there is going to be a functional changeover. As time goes on, they are going to use more external services than ever before.
Gardner: "Free" is a compelling rationale. All you have to do is look at a little text ad associated with the service, and the provisioning and governance of that service becomes fairly compelling, right?
Linthicum: Absolutely.
Gardner: Well, thanks very much. That was an interesting discourse on this whole notion of mashups, SOA, and how it might evolve in the marketplace. For the last 10 minutes today, let’s discuss the deal announced this week whereby Oracle is going to acquire Hyperion for $3.3 billion, bringing the possibility of more analytics and business dashboard functionality into the growing Oracle stable. I believe this must be their tenth or twelfth acquisition since 2002.
Jim Kobielus, you’re data-centric in your studies and research. Does the fit between Hyperion and Oracle make sense to you?
Kobielus: It makes sense knowing Oracle. First of all, because [Oracle Chairman and CEO] Larry Ellison has been very willing in the past to grab huge amounts of market share by buying direct competitors like PeopleSoft, Siebel, and so forth, and managing multiple competing brands under the same umbrella -- and he is doing it here. A lot of the announcement from Oracle regarding this acquisition glossed over the fact that there are huge overlaps between Oracle’s existing product lines and Hyperion’s in pretty much every category, including the core area that Hyperion is best known for, which is financial analytics or Corporate Performance Management (CPM). Oracle itself provides CPM products for CFOs that do planning, budgeting, consolidation, the whole thing.
Hyperion is a big business intelligence (BI) vendor as well, and Oracle has just released an upgrade to its BI suite. You can go down the line. They compete in master data management (MDM) and data integration, and so forth. The thing that Oracle is buying here first and foremost is market share to keep on catapulting itself up into one of the unchallenged best-of-breed players in business intelligence, CPM and so forth. Oracle bought the number one player in that particular strategic niche, financial CPM, which is really the core of CPM -- the CFOs managing the money and the profitability.
It’s a great move for Oracle, and it definitely was an inevitable move. There will be continuing consolidation between the best-of-breed, pure-play data management players, such as Hyperion and a few others in this space, which are Business Objects and Cognos. They will increasingly be acquired by the leading SOA vendors. Look at the SOA vendors right now that don’t have strong BI or strong CPM, and look at the pure-plays that have those tools. The SOA vendors that definitely need to make some strategic fill-in acquisitions are IBM, Microsoft to a lesser degree, BEA definitely, and a few others, possibly webMethods. And, look at the leading candidates. In terms of CPM and BI and a comprehensive offering, they are down to three: Business Objects, Cognos, and SAS.
Now, SAS's Jim Goodnight has been doing it for over 30 years. It’s a great company, growing fast, with very loyal customers. The company is very private, stubbornly private, and I think they want to stay that way. So, I don’t think they are on the block as an acquisition candidate. But Business Objects and Cognos definitely are in play. So, it’s just a matter of time before both of those vendors are scooped up by some of the leading SOA vendors.
Gardner: So, Oracle has created a little bit of an auction atmosphere? Joe McKendrick, what's your take on this? You’re also a data personage.
McKendrick: Either Neil Macehiter or Neil Ward-Dutton, one of the Neils, mentioned on a couple of occasions that Oracle really isn’t playing up its database strengths. Lately, a lot of the activity, a lot of its announcements, and a lot of its acquisitions have been focused on the fusion, the middleware. And this [Hyperion buy] is definitely a play to its strength in the database market. Jim made some great observations, and there are a lot of overlaps. My sense is that Oracle is buying a huge, prominent customer base as part of the acquisition.
Gardner: Even though there is overlap in customer base and in some functionality, isn’t there the ability to integrate on an analytics basis by extracting value from data, rather than providing the data services themselves and/or a business application set? Doesn’t that make for an integrated approach, so that they could bring these two perhaps overlapping product categories together more easily than they could in either database or business applications?
Garone: Yeah, Dana, I think that’s correct; and I also agree that this is less about database and more about middleware and fusion and building up that software stack. Oracle has clearly got an eye on doing that. This kind of an acquisition in the short term is always a double-edged sword, for Oracle especially. If any of you have been to some of their events as an analyst, you've seen what they’ve gone through in convincing the analyst community that they're going to be able both to support all the customer bases of the products they acquire and to integrate things well into their stack ...
Gardner: And they did seem to do a pretty good job at that between J.D. Edwards, PeopleSoft and Siebel, right? There wasn’t the big brouhaha in the installed base that some people were expecting.
Garone: Right. And that turned out to be true in those cases. It remains to be seen, of course, what will happen here, but it’s always a short-term hurdle that Oracle has to get over, both in terms of perception as well as the actual integration process and business model process. Again, this is really very promising, if Oracle pulls it off. But to me it’s really about their bigger picture of taking what they call Fusion middleware out beyond just middleware to the applications themselves, and essentially creating an entire integrated stack of software.
Gardner: How about you, Dave Linthicum? Do you believe that these services and analytics and creating business insight into operations are an essential part of SOA, as Jim Kobielus believes?
Linthicum: Absolutely. In fact, if you look at my stack, which is actually on Wikipedia right now, one of the things I have on top is Business Activity Monitoring (BAM) and analysis, because once you have those points of service, with both behavioral visibility and information visibility into all these different points, and you create these abstraction layers on top of them, you have a great opportunity to actually monitor your business in real time. And not only can you monitor it in real time, but you can actually go back historically to see how what you are doing now relates to what you did in the past.
A lot of businesses can benefit from that. It's key technology. Oracle did the right thing strategically, and I think this stuff is going to be a necessity going forward for SOA, and it’s a necessity for business going forward as well. It’s one of the things where, if you look at the business, it’s just so huge, but you just don’t hear about it anymore.
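Dave's description of BAM, watching the business in real time and comparing it against history, boils down to comparing a current metric window against a historical baseline. A minimal sketch of that idea (the metric, threshold, and sample numbers are all invented for illustration):

```python
# Sketch of a BAM-style check: compare a real-time metric window
# against its historical baseline and flag a significant deviation.
# Threshold and sample data are invented for illustration.

from statistics import mean, stdev

def deviation_alert(history, current_window, threshold=2.0):
    """Alert when the current average strays more than `threshold`
    standard deviations from the historical average."""
    baseline, spread = mean(history), stdev(history)
    z = abs(mean(current_window) - baseline) / spread
    return z > threshold

# Hypothetical orders-per-hour figures from past operations:
orders_history = [100, 98, 103, 97, 101, 99, 102]

assert not deviation_alert(orders_history, [100, 101])  # normal traffic
assert deviation_alert(orders_history, [60, 55])        # big drop: alert
```

In a real deployment the history would come from the historical data Dave mentions, and the alert could feed a rules engine that adjusts the underlying processes automatically.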
Gardner: So, we're saying that the feedback loop becomes more essential for SOA, and that these BI tools are essential ingredients in creating a near real-time feedback loop, as well as a historical perspective feedback opportunity to then fine-tune your SOA, perhaps through a policy-driven governance capability?
Linthicum: Right. Fine-tune your SOA by fine-tuning your processes. I can imagine the potential here. I can see not only the health of my business, but also how my business produced things in the past, or how things were done in the past and how that relates to what I'm doing right now. I can even have a rules engine as part of my SOA to make adjustments automatically to things that I know will have a positive effect on my business processes. You can get to this automated state, which is hugely valuable for these large, product-intensive companies.
Gardner: The last word goes to Tony Baer. Do you see the analytics as being as important as some of our other guests do?
Baer: Well, put it this way. Analytics is the necessary icing on the cake. All the other pieces tell you what you are doing, and analytics answers the classic question of why. A lot of folks look at this as an extension to the database business. I see this as an extension of the applications business.
SAP, for example, has had BI for a number of years. Oracle has had some limited analytics starting back with the acquisition almost a decade ago of, I think it was, IRI Express as an OLAP database. Now, that was merely an extension to the database business, but if you look at how this is really going to end up playing out, it’s not that customers are looking for another database to just slice-and-dice their data. They are looking for a way to look at their business processes, which are represented through their application stacks and think, "How are we doing?" So, it’s a logical add-on to that.
In terms of concerns about customers getting dissatisfied when Oracle comes in, the fact is that in ERP, just as in database, it’s a foundation buy. Regardless of what your personal feelings are about Larry Ellison, that technology is entrenched in the organization. The pain of migrating from it is greater than just sticking with it. Oracle has also improved its track record in terms of trying to be a little more customer friendly. It still has plenty of work to do. So, in the long run, I don’t see a lot of migration here, and I see this as being a very logical add-on in the apps business.
Gardner: Yeah, I agree. Strategically, this has a lot to do with the business applications. Do you think that this puts significant pressure on SAP?
Baer: It puts some pressure on SAP. I wouldn’t be surprised to see them make a play for one of the other two big ones. I also expect IBM to play in there, because even though IBM says it’s not in the apps business, the fact is that they do have products like master data management.
Gardner: And a lot of BI, too.
Baer: Exactly. And what's really ironic about all this is that, years ago, IBM and Hyperion had a very close relationship, bordering almost on acquisition. I'm surprised that IBM never went the last mile and acquired them. It would make sense for them to make a move with one of the other players today.
Gardner: Interesting. Well, thanks very much. I want to go through our group for today, and we appreciate all your input. There’s Steve Garone, Joe McKendrick, Jim Kobielus, Tony Baer, and Dave Linthicum. We appreciate your joining us. I hope you come back. This is Dana Gardner, your producer, host and moderator here at BriefingsDirect SOA Insights Edition. Please come back and join us again next week.
If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts or becoming a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.
Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 13. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.