
Sunday, March 22, 2009

BriefingsDirect Analysts List Top 5 Ways to Cut Enterprise IT Costs Without Impacting Performance in Economic Downturn

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 38. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure-related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system. We also come to you through the support of TIBCO Software.

Our topic this week of March 9, 2009 centers on the economics of IT. It's clear that the financial crisis has spawned a yawning global recession on a scale and at a velocity unlike anything seen since the 1930s. Yet, our businesses and our economy function much differently than they did in the 1930s. The large and intrinsic role of information technology (IT) is but one of the major differences. In fact, we haven't had a downturn like this since the advent of widespread IT.

So, how does IT adapt and adjust to the downturn? This is all virgin territory. Is IT to play a defensive role in helping to slash costs and reduce its own financial burden on the enterprise, as well as to play a role in propelling productivity forward despite these wrenching contractions?

Or, does IT help most on the offensive, in transforming businesses, or playing a larger role in support of business goals, with the larger IT budget and responsibility to go along with that? Does IT lead the way on how companies remake themselves and reinvent themselves during and after such an economic tumult?

We're asking our panel today to list the top five ways that IT can help reduce costs, while retaining full -- or perhaps even additional -- business functionality. These are the top five best ways that IT can help play economic defense.

After we talk about defense, we're going to talk about offense. How does IT play the agent of change in how businesses operate and how they provide high value with high productivity to their entirely new customer base?

Join me in welcoming our analyst guests this week. Joe McKendrick, independent IT analyst and prolific blogger on service-oriented architecture (SOA), business intelligence (BI), and other major IT topics. Welcome back, Joe.

Joe McKendrick: Thanks, Dana. Glad to be here.

Gardner: We're also joined by Brad Shimmin, principal analyst at Current Analysis.

Brad Shimmin: Hello, Dana.

Gardner: Also, JP Morgenthal, independent analyst and IT consultant. Hi, JP.

JP Morgenthal: Hi. Thanks.

Gardner: We're also joined by Dave Kelly, founder and president of Upside Research, who joins us for the first time. Welcome, Dave.

Dave Kelly: Hey, Dana. Thanks for having me. It's great to be here.

Gardner: Let's go first to Joe McKendrick at the top of the list. Joe, let's hear your five ways that IT can help cut costs in enterprises during our tough times.

Previous downturns

McKendrick: First of all, I just want to comment. You said this is virgin territory for IT in terms of managing through downturns. We've seen some fairly significant downturns in our economy in the past -- the 1981-82 period, the 1990-91 period, and notably 2001-2002. Those were all major turning points for IT, and we can get into that later. I'll give you my five recommendations, and they're all things that have been buzzing around the industry.

First, SOA is a solution, and I think SOA is alive and well and thriving. SOA promotes reuse and developer productivity. SOA also provides a way to avoid major upgrades or the requirement for major initiatives in enterprise systems such as enterprise resource planning (ERP).

Second, virtualize all you can. Virtualization offers a method of consolidation. You can take all those large server rooms -- and some companies have thousands of servers -- and consolidate into more centralized systems. Virtualization paves the path to do that.

Third, cloud computing, of course. Cloud offers a way to tap into new sources of IT processing, applications, or IT data and allows you to pay for those new capabilities incrementally rather than making large capital investments.

The fourth is open source -- look to open-source solutions. There are open-source solutions all the way up the IT stack, from the operating system to middleware to applications. Open source provides a way to, if not replace your more commercial proprietary systems, then at least to implement new initiatives and move to new initiatives under the budget radar, so to speak. You don't need to get budget approval to establish or begin new initiatives.

Lastly, look at the Enterprise 2.0 space. Enterprise 2.0 offers an incredible way to collaborate and to tap into the intellectual capital throughout your organization. It offers a way to bring a lot of thinking and a lot of brainpower together to tackle problems.

Gardner: It sounds like you feel that IT has a lot of the tools necessary and a lot of the process change necessary. It's simply a matter of execution at this point.

McKendrick: Absolutely. All the ingredients are there. I've said before in this podcast that I know of startup companies that have invested less than $100 in IT infrastructure, thanks to initiatives such as cloud computing and open source. Other methodologies weigh in there as well.

Gardner: All right. Let's go to bachelor number two, Brad Shimmin. If you're dating IT efficiency, how are you going to get them off the mark?

Provide a wide pasture

Shimmin: Thanks, Dana. It's funny. Everything I have in my little list here really riffs off of the excellent underlying fundamentals Joe was talking about there. I hope what I'm going to give you guys are some not-too-obvious uses of the stuff that Joe's been talking about.

My first recommendation is to give your users a really wide pasture. There is an old saying that if you want to mend fewer fences, have a bigger field for your cattle to live in. I really believe that's true for IT.

You can see that in some experiments that have been going on with the whole BYOC -- Bring Your Own Computer -- programs that folks like Citrix and Microsoft have been engaging in. They give users a stipend to pick up their own notebook computer, bring that to work, and use a virtualized instance of their work environment on top of that computer.

That means IT no longer has to manage the device itself. They now just manage the virtual image that resides on that machine. So, the trend we've been seeing with mobile devices -- users buying their own and using them inside IT -- we'll see extend to desktops and laptops.

I'd just like to add that IT should forget about transparency and strive for IT participation. The days of the ivory tower with top-down knowledge held within secret golden keys behind locked doors within IT are gone. You have to have some faith in your users to manage their own environments and to take care of their own equipment, something they're more likely to do when it's their own and not the company's.

Gardner: So, a bit more like the bazaar, when it comes to how IT implements and operates?

Shimmin: Absolutely. You can't have that top-down autocracy anymore. It doesn't encourage efficiency.

The second thing I'd suggest is don't build large software anymore. Buy small software. As Joe mentioned, SOA is well entrenched now within both the enterprise and IT. Right now, you can buy either software as a service (SaaS) or on-premise software that is open enough to connect with and work with other software packages. No longer do you need to build an entire monolithic application from the ground up.

A perfect example of that is something like PayPal. It's a service, but there are on-premise renditions of this kind of idea that let you build up what amounts to a monolithic application without having to build the whole thing yourself. Using pre-built, smaller packages that are point solutions, like PayPal, lets you take advantage of their economies of scale and trade on the credibility they've developed, which is especially good for consumer-facing apps.

The third thing I'd suggest -- and this is in addition to that -- build inside but host outside. You shouldn't be afraid to build your own software, but you should be looking to host that software elsewhere.

A game changer

We've all seen both enterprises and enterprise IT vendors -- independent software vendors (ISVs) themselves like IBM, Oracle, and Microsoft, in particular -- leaping toward putting their software platforms on top of third-party cloud providers like Amazon EC2. That is the biggest game changer in everything we've been talking about here to date.

There's a vendor -- I can't say who it is, because they didn't tell me I could talk about it -- that is a cloud and on-premise vendor for collaboration software. They have their own data centers, and they've been moving toward shutting those down and moving into Amazon's EC2 environment. They went from multi-thousand-dollar bills every month to literally a bill like you would get for cellphone service from Verizon or AT&T. It was a staggering savings.

Gardner: A couple of hundred bucks a month?

Shimmin: Exactly. It's all because of the economies of scale in that shared environment.

The fourth thing I would want to say is "kill your email." You remember the "Kill your TV" bumper stickers we saw in the '90s. That should apply to email. It's seen its day and it really needs to go away. For every gigabyte you store, I think it's almost $500 per user per year, which is a lot of money.

If you're able to, cut that back by encouraging people to use alternatives to email, such as social networking tools. We're talking about IM, chat, project group-sharing spaces, using tools like Yammer inside the enterprise, SharePoint obviously, Clearspace -- which has just been renamed SBS, for some strange reason -- and Google Apps. That kind of stuff cuts down on email.

I don't know if you guys saw this, but in January, IBM fixed Lotus Notes so it no longer stores duplicate emails. That cut down the amount of storage their users required by something like 70 percent, which is staggering.

Gardner: So what was that, eliminating the multiple versions of any email, right?

Shimmin: It was the attachments, yes. If there was a duplicate attachment, they used to store one copy for each note, instead of saying, "Hey, it's the same file, let's just store one instance of it in a database." Fixing stuff like that is just great, but it points to how big a problem it is to have everything running around in email.

Gardner: You might as well just be throwing coal up into the sky, right?

Shimmin: Exactly. To add to that, we should really turn off our printers. By employing software like wikis, blogs, and online collaboration tools from companies like Google and Zoho, we can get away from the notion of having to print everything. As we know, a typical organization kills 143 trees a year -- I think that was the number I heard -- which is a staggering amount of waste, and there's a lot of cost to that.

Gardner: Perhaps the new bumper sticker should be "Email killed."

Open, but not severe

Shimmin: Printing and email killed, right. My last suggestion would be, as Joe was saying, to really go open, but we don't have to be severe about it. We don't have to junk Windows to leverage some cost savings. The biggest place you can see savings right now is by getting off of the heavy license burden software. I'm going to pick on Office right now.

Gardner: How many others do you have to pick from?

Shimmin: It's the big, fat cow that needs to be sacrificed. Paying $500-800 a year per user for that stuff is quite a bit, and the hardware cost is staggering as well, especially if you are upgrading everyone to Vista. If you leave everyone on XP and adopt open-source solutions like OpenOffice and StarOffice, that will go a long, long way toward saving money.

Why I'm down on printing is that the time is gone when we needed really professional, beautiful-looking documents that required a tremendous amount of formatting, where everything had to be perfect within Microsoft Word, for example. What counts now is the information. It's the same for the 4,000-odd features in Excel. I'm sure none of us here has ever explored even a tenth of those.

Gardner: Maybe we should combine some of the things you and Joe have said. We should go to users and say, "You can use any word processor you want, but we're not going to give you any money," and see what they come up with.

Shimmin: You're going to find some users who require those 4,000 features and you are going to need to pay for that software, but giving everyone a mallet to crack a walnut is insane.

Gardner: I want to go back quickly to your email thing. Are you saying that we should stop using email for communication, or that we should just bring email out to a cloud provider and do away with the on-premises client-server email -- or both?

Shimmin: Thanks for saying that. Look at software or services like Microsoft Business Productivity Online Suite (BPOS). You can get Exchange Online now for something like $5 per month per user. That's pretty affordable. So, if you're going to use email, that's the way to go. You're talking about the same, or probably better, uptime than you're getting internally from a company like Microsoft with their 99.9 percent uptime that they're offering. It's not five 9s, but it's probably a lot better than what we have internally.

So, yeah. You should definitely explore that, if you're going to use email. In addition to that, if you can cut down on the importance of email within the organization by adopting software that allows users to move away from it as their central point of communication, that is going to save a lot of money as well.

Gardner: Or, they could just Twitter to each other and then put all the onus on the cost of maintaining all those Twitter servers.

Shimmin: Nobody wants to pay for that, though.

Gardner: Let's go to JP Morgenthal. I'm expecting "shock and awe" from you, JP. What's your top five?

Morgenthal: Shock and awe, with regard to my compadres' answers?

Gardner: Oh, yeah. Usually you have a good contrarian streak.

The devastation of open source

Morgenthal: I was biting my tongue, especially on the open source. I just went through an analysis where the answer was to go with JBoss and Apache on Linux. Even in that, I had given my alternative viewpoint that, from a cost perspective, you can't compare that stack to running WebSphere or WebLogic on Windows. Economically, if you compare the two, it doesn't make sense. I'm still irked by the devastation that open source has created upon the software industry as a whole.

Gardner: Alright. We can't just let that go. What do you mean, quickly?

Morgenthal: Actually, I blogged on this. Here's my analogy. Imagine tomorrow if Habitat for Humanity all of a sudden decided that it's going to build houses for wealthy people and then make money by charging maintenance and upkeep on the house. You have open source. The industry has been sacrificed for the ego and needs of a few against the whole of the industry and what it was creating.

Gardner: Okay. This is worth an entire episode. So, we're going to come back to this issue about open source. Is it good? Is it bad? Does it save money or not? But, for this show, let's stick to the top five ways to save IT, and we'll come back and do a whole show on open source.

Morgenthal: I'd like to, but I've got to give credit. I can't deny the point that as a whole, for businesses, again, those wealthy homeowners who are getting that Habitat for Humanity home, hey, it's a great deal. If somebody wants to dedicate their time to build you a free home, go for it, and then you can hire anybody you like to maintain that home. It's a gift from the gods.

Gardner: What are your top five?

Morgenthal: Vendor management is first. One thing I've been seeing a lot is how badly companies mismanage their vendor relationships. There is a lot of money in there, especially on the IT side -- telecom, software, and hardware. There's a lot of play, especially in an industry like this.

Get control over your vendor relationships. Stop letting these vendors run around, convincing end-users throughout your business that they should move in a particular direction or use a particular product. Force them to go through a set of gatekeepers and manage the access and the information they're bringing into the business. Make sure that it goes through an enterprise architecture group.

Gardner: It's a buyers market. You can negotiate. In fact, you can call them in and just say, "We want to scrap the old license and start new." Right?

Morgenthal: Well, there are legal boundaries to that, but certainly if they expect to have a long-term relationship with you through this downturn, they've got to play some ball.

With regard to outsourcing noncritical functions, I'll give you a great example where we combined an outsourced noncritical function with vendor management in a telco. Many companies have negotiated and managed their own Internet and telco communications facilities and capability. Today, there are so many more options for that.

It's a very complex area to navigate, and you should either hire a consultant who is an expert in the area to help you negotiate it, or you should look at a scenario where you take as much bandwidth as you use on an average basis and, when you need excess bandwidth, turn to the cloud. Go to the cloud for that excess bandwidth.

Gardner: Okay, number three.

Analyze utilization

Morgenthal: Utilization analysis. Many organizations don't have a good grasp on how much of their CPU, network, and bandwidth is utilized. There's a lot of open space in that utilization and it allows for compression. In compressing that utilization, you get back some overhead associated with that. That's a direct cost savings.

Another area that has been a big one for me is data quality. I've been trying to tell corporations for years that this is coming. When things are good, they've been able to push off the poor data quality issue, because they can rectify the situation by throwing bodies at it. But now they can't afford those bodies anymore. So, now they have bad data and they don't have the bodies to fix up the data on the front end.

They're caught between a rock and a hard place. If I were them, I'd get my house in order: invest the money, set it aside, get the data quality up, and allow myself to operate more effectively without requiring extra labor on the front end to clean up the data on the back end.

Finally, it's a great time to explore desktop alternatives, because Windows on the desktop has been the de-facto standard -- a great way to go when things are good. When you're trying to cut another half million, a million, or two million out of your budget, all those licenses and all that desktop support start to add up. They're small nickels and dimes that add up.

By looking at desktop alternatives, you may be able to find some solutions. A significant part of your workforce doesn't need all that capability and power. You can then look for different solutions, like lightweight Linux or Ubuntu-type environments that provide just Web browsing and email, and maybe OpenOffice for some lightweight word processing. For a portion of your user base, that's all they need.

Gardner: Okay. Was that four or five?

Morgenthal: That's five -- vendor management, outsourcing, utilization analysis, data quality, and desktop alternatives.

Gardner: Excellent. Okay. Now, going to you, Dave Kelly, what's your top five?

Optimize, optimize, optimize

Kelly: Thanks, Dana, and it's great to come at the end. I don't always agree with JP, but I liked a lot of the points that he just made and they complement some of the ones that I am going to make, as well as the comments that Brad and Joe made.

My first point would be, optimize, optimize, optimize. There's no doubt that all the organizations, both on the business side and the IT side, are going to be doing more with less. I think we're going to be doing more with less than we have ever seen before, but that makes it a great opportunity to step back and look at specific systems and business processes.

You can start at the high level and go through business process management (BPM) type optimization and look at the business processes, but you can also just step it down a level. This addresses what some of the other analysts have said here. If you look at things like data-center optimization, there are tremendous opportunities for organizations to go into their existing data centers and IT processes to save money and defer capital investment.

You're talking about things like increasing the utilization of your storage systems. Many organizations run at anywhere from 40 to 50 percent storage utilization. If you can increase that and push off new investments in additional storage, you've got savings right there. The growth rate in storage over the past three to five years has been tremendous. This is a great opportunity for organizations to save money.
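The capital-deferral arithmetic behind that utilization point is easy to make concrete. At 45 percent utilization, every terabyte of actual data sits on more than two terabytes of purchased capacity; raise the effective utilization, and the same data needs less hardware. A back-of-the-envelope sketch (the `capex_for_data` function and the $1,000-per-TB figure are illustrative assumptions, not numbers from the discussion):

```python
def capex_for_data(data_tb, utilization, cost_per_raw_tb):
    """Raw capacity cost needed to hold data_tb at a given utilization rate."""
    raw_tb = data_tb / utilization  # purchased TB required
    return raw_tb * cost_per_raw_tb

data = 45.0    # TB of actual data
cost = 1000.0  # dollars per raw TB -- an illustrative figure

at_45 = capex_for_data(data, 0.45, cost)  # $100,000 of hardware
at_70 = capex_for_data(data, 0.70, cost)  # about $64,286

print(round(at_45 - at_70))  # 35714 dollars of deferred spending
```

Pushing the same 45 TB of data from 45 to 70 percent utilization avoids roughly a third of the raw capacity -- money that stays in the budget until growth actually demands new hardware.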

It also references what Brad said. You've got the same opportunity on the email side. If you look at your infrastructure on the data-center side or the storage side, you've got all this redundant data out there.

There are products from Symantec and other vendors that allow you to de-duplicate email systems and existing data. There are ways to reduce your backup footprint, so that you need fewer backup tapes. Your processes will run quicker, with less maintenance and management. You can do single-instance archiving and data compression.

Gardner: Dave, it sounds like you're looking at some process re-engineering in the way that IT operates.

Kelly: You can certainly do that, but you don't even have to get to that process re-engineering aspect. You can just look at the existing processes and say, "How can I do individual components more efficiently?" I guess it is process re-engineering, but I think a lot of people associate process re-engineering with a large front-to-back analysis of the process. You can just look at specific automated tasks and see how you can do more with less in those tasks.

There are a lot of opportunities there in terms of like data center optimization as well as other processes.

The next point is that, while it's important to increase IT efficiency and reduce cost, don't forget about the people. Think about people power here. The most effective way to have an efficient IT organization is to have effective people in that IT organization.

Empower your people

There's a lot of stress going on in most companies these days. There are a lot of questions about where organizations and businesses are going. As an IT manager, one thing you need to do is make sure that your people are empowered to feel good about where they're at. They need to not hunker down and go into a siege mentality during these difficult times, even if the budgets are getting cut and there's less opportunity for new systems or new technology challenges. They need to redirect that stress to discover how the IT organization can benefit the business and deal with these bad times.

You want to help motivate them through the crisis and work on a roadmap for better days, and map out, "Okay, after we get through this crisis, where are we going to be going from here?" There's an important opportunity in not forgetting about the people and trying to motivate them and provide a positive direction to use their energy and resources in.

Gardner: They don't want to get laid off these days, do they?

Kelly: No, they don't. Robert Half Technology recently surveyed 1,400 CIOs. It's pretty good news. About 80 percent of the CIOs expect to maintain current staffing levels through the first half of this year. That's not a very long lead-time at this point, but it's something. About 8 or 9 percent expected to actually hire. So everyone is cutting budgets, reducing capital expenditures, traveling less, trying to squeeze the money out of the budget, but maybe things will stay status quo for a while.

The third point echoes a little bit of what JP said on the vendor-management side, as well as on using commercial software. Organizations use what they have or what they can get. Maybe it's a good time to step back and reevaluate the vendors and the infrastructure they have. That speaks to JP's vendor-management idea.

So, you may have investments in Oracle, IBM, or other platforms, and there may be opportunities to use free products that are bundled as part of those platforms, but that you may not be using.

For example, Oracle bundles Application Express, which is a rapid application development tool, as part of the database. I know organizations are using that to develop new applications. Instead of hiring consultants or staffing up, they're using existing people to use this free rapid application development tool to develop departmental applications or enterprise applications with this free platform that's provided as part of their infrastructure.

Of course, open source fits in here as well. I have a little question about the ability to absorb open source. Perhaps at the OpenOffice level, I think that's a great idea. At the infrastructure level and at the desktop level that can be a little bit more difficult.

The fourth point, and we've heard this before, is go green. Now is a great time to look at sustainability programs and analyze them in the context of your IT organization. Going green not only helps the environment, but it has a big impact as you look at power usage in your data center, with cooling and air-conditioning costs. You can save money right there in the IT budget, and in other budgets, by going to virtualization and consolidating servers. Cutting any of those costs can also prevent future capital expenditures.

Again, as JP said about utilization, this is a great opportunity to look at how you're utilizing the different resources and how you can potentially cut your server cost.

Go to lunch

Last but not least, go to lunch. It's good to escape stressful environments, and it may be a good opportunity for IT to take the business stakeholders out to lunch, take a step back, and reevaluate priorities. So, clear the decks and realign priorities to the new economic landscape. Given changes in the business and in the way that services and products are selling, this may be a time to reevaluate the priorities of IT projects, look at those projects, and determine which ones are most critical.

You may be able to reprioritize projects, slow some down, delay deployments, or reduce service levels. The end effect here is allowing you to focus on the most business critical operations and applications and services. That gives a business the most opportunity to pull out of this economic dive, as well as a chance to slow down and push off projects that may have had longer-term benefits.

For example, you may be able to reduce service levels or reduce the amount of time the help desk has to respond to a request. Take it from two hours to four hours and give them more time. You can potentially reduce your staffing levels, while still serving the business in a reasonable way. Or, lengthen the time that IT has after a disaster to get systems back up and operating. Of course, you've got to check that with business leaders and see if it's all right with them. So, those are my top five.

Gardner: Excellent, thank you. I agree that we're in a unique opportunity, because, for a number of companies, their load in the IT department is down, perhaps for the first time. We've been on a hockey-stick curve in many regards in the growth of data and the number of users, seats, and applications supported.

Companies aren't merging or acquiring right now. They're in kind of a stasis. So, if your load is down in terms of headcount, data load, and newer applications, now is an excellent time to make substantial strategic shifts in IT practices, as we've been describing, before that demand curve picks up again on the other side, which it's bound to do. We just don't know when.

As the last panelist to go, of course, I am going to have some redundancy on what's been said before, but my first point is, now is the time for harsh triage. It is time to go in and kill the waste by selectively dumping the old that doesn't work. It's easiest to do triage now, when you've got a great economic rationale to do it. People will actually listen to you, and not have too much ability to whine, cry and get their way.

IT really needs to find where it's carrying its weight. It needs to identify the apps that aren't in vigorous use or aren't adding value, and either kill them outright or modernize them. Extract the logic and use it in a process, but not at the cost of supporting the entire stack or a Unix server below it.

IT needs to identify the energy hogs and the maintenance black holes inside their infrastructure and all the inventory that they are supporting. That means ripping out the outdated hardware. Outdated hardware robs from the future in order to pay for a diminishing return in the past. So, it's a double whammy in terms of being nonproductive and expensive.

You don't really need to spend big money to conduct these purges. It's really looking for the low-hanging fruit and the obvious wasteful expenditures and practices. As others have said today, look for the obvious things that you're doing and never really gave much thought to. They're costing you money that you need for the new things in order to grow. It's really applying a harsh cost-benefit analysis to what you're doing.

It would also make sense to reduce the number of development environments. If you're supporting 14 different tools and 5 major frameworks, it's really time to look at something like Eclipse, Microsoft, or OSGi and say, "Hey, we're going to really work toward more standardization around a handful of major development environments. We're going to look for more scripting and doing down and dirty web development when we can." That just makes more sense.

It's going to be harder to justify paying for small tribes of very highly qualified and important, but nonetheless not fully utilized, developers.

Look outside

It's also time to replace costly IT with the outside services and alternatives we've discussed. That would include, as Brad said, your email, your calendar, word processing, and some baseline productivity applications -- consider where you can do them cheaper.

I do like the idea of saying to people, "You still need to do email and you need still to do word processing, but we no longer are going to support it. Go find an alternative and see how that works." It might be an interesting experiment at least for a small department level at first.

That means an emphasis on self-help, and in many aspects of IT it is possible. Empower the users. They want that power. They want to make choices. We don't need to just walk them down a blind path, tell them how to do mundane IT chores, and then pay an awful lot of money to have them doing it that way. Let's open up, as Brad said, the bazaar and stop being so much of a cathedral.

I suppose that means more use of SaaS and on-demand applications. They make particular sense in customer relationship management (CRM), sales force automation, human resources, procurement, and payroll. It's really looking to outsource baseline functionality that's not differentiating your organization. It's the same for everybody. Find the outsourcers that have done it well and efficiently and get it outside of your own company. Kill it, if you are doing it internally.

It's really like acting as a startup. You want to have low capital expenditures. You want to have low recurring costs. You want to be flexible. You want to empower your users. A lot of organizations need to think more like a startup, even if they are an older, established multinational corporation.

My second point is to create a parallel IT function that leverages cloud attributes. This focuses again on what Joe mentioned, on the value of virtualization and focusing on the process and workflows -- not getting caught up in how you do it, but what it ends up doing for you.

The constituent parts aren't as important as the end result. That means looking to standardize hardware, even if it's on-premises, and using grid, cloud, and modernized and consolidated data center utility best practices. Again, it's leveraging a lot of virtualization on standard low-cost hardware, and then focusing the value at a higher abstraction, at the process level.

It's standardizing more use of appliances and looking at open-source software. I also have to be a little bit of a contrarian to JP. I do think there's a role for open source in these operations, but we are going to save that for another day. That's a good topic.

This is another way of saying: do SOA, do it on-premises using cloud and compute-fabric alternatives, and look outside for where other people have created cloud environments that are also very efficient for those baseline functions that don't differentiate you. That creates a parallel function in IT, but one that also looks outside.

I agree wholeheartedly with what's been said earlier about the client. It's time to cheapen, simplify, and mobilize the client tier. That means you can use mobile devices, netbooks, and smart phones to do more activities, to connect to back-end data and application sets and Web applications.

Focus on the server

It's time to stop spending money on the client. Spend it more on the server and get a higher return on that investment. That includes the use of virtual desktop infrastructure (VDI) and desktop-as-a-service (DaaS) types of activities. It means exploring Linux as an operating environment on the desktop, where that makes sense, and look at what the end users are doing with these clients.

If they're at a help desk and they're all using three or four applications in a browser, they don't need to have the equivalent of a supercomputer that's got the latest and greatest of everything. It's time to leverage browser-only workers. Find workers that can exist using only browsers and give them either low-cost hardware that's maybe three or four years old and can support a browser well or deliver that browser as an application through VDI. That's very possible as well.

It means centralizing more IT support, security, and governance at the data center. It even means reducing the number of data centers, because, given the way networks are operating, we can do this across a wide area network (WAN). We can use acceleration, remote branch technologies, and virtual private networks (VPNs). We can deliver these applications to workers across continents and even across the globe, because we're not dealing with B2C, we're dealing with B2E -- that is, delivering to your employees.

You can support the scale with fewer data centers and lower cost clients. It's a way to save a lot of money. Again, you're going to act like a modern startup. You're going to build the company based on what your needs are, not on what IT was 15 years ago.

My fourth point is BI everywhere. Mine the value of the data that you've got already and the data that you are going to create. Put in the means to be able to assess where your IT spend makes sense. This is BI internal to IT, so BI for IT, but also IT enabling BI across more aspects of the business at large.

Know what the world is doing around you and what your supply chain is up to. It's time to join more types of data into your BI activities, not just your internal data. You might be able to actually rent data from a supplier, a partner or a third-party, bring that third-party data in, do a join, do your analysis, and then walk away. Then, maybe do it again in six months.
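To make the "rent, join, analyze, walk away" idea concrete, here is a minimal sketch using Python's built-in sqlite3. All table and column names are hypothetical, and the "rented" third-party data is simulated in-memory:

```python
# Sketch: join internal data with rented third-party data, analyze,
# then discard the rented rows. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")         # internal
conn.execute("CREATE TABLE market (region TEXT, households INTEGER)")  # rented
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0)])
conn.executemany("INSERT INTO market VALUES (?, ?)",
                 [("north", 400), ("south", 500)])

# The join: internal revenue analyzed against rented demographic data.
rows = conn.execute("""
    SELECT s.region, s.revenue / m.households AS revenue_per_household
    FROM sales s JOIN market m ON s.region = m.region
    ORDER BY s.region
""").fetchall()

conn.execute("DROP TABLE market")  # "walk away": the rented data is discarded
print(rows)  # [('north', 0.3), ('south', 0.16)]
```

The point is that the third-party data only needs to exist inside your analysis long enough to do the join; it never becomes part of your permanent estate.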

It's time to think about BI as leveraging IT to gain the analysis and insights, but looking in all directions -- internal, external, and within IT, but also across extended enterprise processes.

It's also good to start considering tapping social networks for their data, user graph data, and metadata, and using that as well for analysis. There are more and more people putting more and more information about themselves, their activities, and their preferences into these social networks.

That's a business asset, as far as I'm concerned. Your business should start leveraging the BI that's available at some of these social networks and join that with how you are looking at data from your internal business activities.

Take IT to the board level

Last, but not least, it's time for IT to be elevated to the board level. It means that the IT executive should be at the highest level of the business in terms of decision and strategy. The best way for IT to help companies is to know what those companies are facing strategically as soon as they're facing it, and to bring IT-based solutions knowledge to the rest of the board. IT can be used much more strategically at that level.

IT should be used for transformation and problem solving at the innovation and business-strategy level, not as an afterthought, not as a means to an end, but actually as part of what ends should be accomplished, and then focusing on the means.

That is, again, acting like a startup. If you talk to any startup company, they see IT as an important aspect of how they are going to create value, go to market cheaply, and behave as an agile entity.

That's the end of my five. Let's take the discussion for our last 10 minutes to how IT can work on the offense. I'll go first on this one. I think it's time to go green field. It's time to look at software as a differentiator.

The reason I bring this up is Marc Andreessen, who is starting a venture capital fund with Ben Horowitz. They were both at Opsware together and then at HP, after they sold. Andreessen told Charlie Rose recently that there is a tragic opportunity in our current economic environment. A number of companies are going to go under or they're going to be severely challenged. Let's take a bank, for example.

A bank is going to perhaps be in a situation where its assets are outstripped by its liabilities and there is no way out. But, using software, startups, and third-party services, as Andreessen said, you can start an Internet bank. It's not that difficult.

You want to be able to collect money, lend it out with low risk at a sufficient return, and, at the end of the day, have a balance sheet that stands on its own two feet. Creating an Internet bank, using software and using services combined from someone like PayPal and others makes a tremendous amount of sense, but that's just an example.

There are many other industries where, if the old way of doing business is defunct, it's time to come in and create an alternative. Internet software-based organizations can go out and find new business where the old companies have gone under. It doesn't necessarily mean it's all about the software; the business value is in how you coordinate buyers, sellers, and efficiencies using software.

Take something like Zipcar. They're not in the automotive business, but they certainly allow people to gain the use of automobiles at a low price point.

I'd like to throw out to the crowd this idea of going software, going green field, creating Internet alternatives to traditional older companies. Who has any thoughts about that?

Morgenthal: On the surface, there are some really good concepts there. What we need are state and federal governance and laws to catch up to these opportunities. A lot of people are unaware of the potential downside risks of letting the data out of your hands into a third party's hands. It's questionable whether it's protected under the Fourth Amendment, once you do that.

There are still some security risks that have yet to be addressed appropriately. So, we see some potential there for the future. I don't know what the future would look like. I just think that there is some definite required maturity that needs to occur.

Gardner: So, it's okay to act like a startup, but you still need to act like a grownup.

Morgenthal: Right.

Gardner: Any other thoughts on this notion of opportunity from tragedy in the business, and that IT is an important aspect of doing that?

Evolving enterprises

McKendrick: I agree with what you're saying entirely. You mentioned on a couple of occasions that large enterprises need to act like small businesses. About 20 years ago, the writer John Naisbitt was dead-on with the prediction that large enterprises are evolving into what he called confederations of entrepreneurs. Large companies need to think more entrepreneurially.

A part of that thinking will be not the splitting up, but the breaking down, of large enterprises into more entrepreneurial units. IT will facilitate that with the Enterprise 2.0 and Web 2.0 paradigm, where end users can shape their own destiny. You can build a business in the cloud. There is a need for architecture, and I preach that a lot, but with these tools available, smaller departments of large corporations can set their own IT direction as well.

Gardner: We're almost out of time. Any other thoughts about how IT is on the offensive, rather than just the defensive in terms of helping companies weather the downturn?

Shimmin: I agree with what you guys have been saying about how companies can behave like startups. I'd like to turn it around a little bit and suggest that a small company can behave like a large company. If you have a data center investment already established, you shouldn't be bulldozing it tomorrow to save money. Perhaps there's money in "them thar hills" that can be had.

Look at the technologies we have today, the cloud-enablement companies that are springing up left and right, and the ability to federate information and use loosely coupled access methods to transact between applications. There's no reason that the idea we saw with SETI@home and the protein-folding projects can't be leveraged within a company's firewalls and data centers and then externalized. Maybe it's storage, maybe it's services, maybe it's an application or service the company has created that can be leveraged to make money. It's like a house putting in a windmill and then selling electricity back to the power grid.

Gardner: Last thoughts?

Kelly: I would add one or two quick points here. Going on the offense, one opportunity is to take advantage of the slowdown and look at those business processes that you haven't gotten to in a long time, because things have been so hectic over the past couple of years. It may be a great time to reengineer those using some of the new technologies that are out there, going to the cloud, doing some of the things we've already talked about.

The other option here is that it may be a good time to accelerate new technology adoption. Move to YouTube for video-based training, or use Amazon's Kindle for distributing repair manuals electronically. Look at what the options are out there that might allow you to remake some of these processes using new technologies and allow you to profit and perhaps even grow the business during these tough economic times.

Gardner: So economic pain becomes the mother of all invention.

Kelly: Exactly.

McKendrick: We've seen it happen before. Back in 1981-1982 was when we saw the PC revolution. The economy was in just as bad shape as, if not worse than, it is now. Unemployment was running close to 10 percent. The PC revolution just took off and boomed during that time. A whole new paradigm evolved.

Gardner: Very good. Well, I would like to thank our panelists this week. We've been joined by Joe McKendrick, independent IT analyst and prolific blogger. Also, Brad Shimmin, principal analyst at Current Analysis; JP Morgenthal, independent analyst and IT consultant; and Dave Kelly, founder and president of Upside Research. Thanks to all. I think we've come up with a lot of very important and quite valuable insights and suggestions.

I'd also like to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition podcast series, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as the support of TIBCO Software.

This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Friday, February 13, 2009

Interview: Guillaume Nodet and Adrian Trenaman on Apache ServiceMix and Role of ESBs in OSS

Transcript of a BriefingsDirect podcast with Guillaume Nodet and Adrian Trenaman of Progress Software on directions and trends in SOA and open source infrastructure.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about open source, service-oriented architecture (SOA) developments, and trends.

We are going to catch up and get a refresher on some important open-source software projects in the Apache Software Foundation. We'll be looking at the Apache ServiceMix enterprise service bus (ESB) and its toolkit, and we are going to talk with some thought leaders and community development leaders to assess the market for these products, particularly in the context of cloud computing, which is certainly getting a lot of attention these days.

We'll also look at the context around such technologies as OSGi and Java Business Integration (JBI). We want to also think about what this means for enterprise-caliber SOA, particularly leveraging open-source projects. [Access more FUSE Community podcasts.]

To help us sort out and better understand the open-source SOA landscape, we’re joined by Guillaume Nodet, software architect at Progress Software and vice president of Apache ServiceMix at Apache. Welcome to the show, Guillaume.

Guillaume Nodet: Thank you.

Gardner: We are also joined by Adrian Trenaman, distinguished consultant at Progress Software. Hey, Adrian.

Adrian Trenaman: Hey, Dana. How is it going?

Gardner: Good. Now, we are starting to see different patterns of adoption and use-case scenarios around SOA and open-source projects. Counterpart offerings for certification and support, such as the FUSE offerings from Progress, are getting more traction in interesting ways. The role of ESBs, I think we can safely say, is expanding.

The role for management and policy and rules and roles is becoming much more essential, not on a case-by-case basis or tactical basis, but more from a holistic management overview of services and other aspects of IT development and deployment.

First I want to go to Guillaume. Give us a quick update on Apache ServiceMix, and how you see it being used in the market now?

Nodet: Apache ServiceMix is one of the top-level projects at the Apache Software Foundation. It was started back in 2005 and graduated to a top-level project a year-and-a-half ago. ServiceMix is an open-source ESB, and it's a well-known ESB for several reasons, which we'll come to later. It's a full-featured ESB that is widely used by a whole range of organizations, from government to banking applications. There's very wide use of ServiceMix.

Gardner: Tell us a little bit about your background, and how you became involved. How long have you been working on ServiceMix, and what led you up to getting involved?

Nodet: Back in 2004, I was working at a small company based in France, and we were looking for an ESB for internal purposes. I began to do some research on the open-source ESBs available at that time. I was involved in the Mule Project and I became a committer in my spare time, and had been one of the main committers for six months.

In the summer of 2005, my company was letting people go for economic reasons, and I decided to take a break and leave the company. So I sent an email to James Strachan, who was just starting ServiceMix, and that's how I became involved. I was hired by LogicBlaze at the time, which was since acquired by IONA and is now part of Progress.

Gardner: Tell us a little bit more about the context of the ServiceMix ESB in some of the other Apache Software Foundation projects, just so our listeners understand that this isn't necessarily standalone. It can be used, of course, standalone, but it fits into a bigger picture, when it comes to SOA infrastructure. Maybe you could just explain that landscape as it stands now.

The bigger picture

Nodet: ServiceMix is an ESB and reuses lots of other Apache projects. The main one ServiceMix reuses is Apache ActiveMQ, a message broker that provides the JMS backbone infrastructure. We also heavily use Apache CXF, a SOAP stack that integrates nicely into ServiceMix. Another project we use is Apache Camel, a sub-project of Apache ActiveMQ; it's a really efficient message router that uses a DSL so you can configure routes very easily. These are the three main projects that we reuse.

Of course, for ServiceMix 4.0, we are also using the Apache Felix OSGi framework, along with lots of other projects throughout ServiceMix. There are really big ties between ServiceMix and the other projects. Another project we can leverage in ServiceMix is Apache ODE, a business process execution language (BPEL) engine.
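Camel's routing DSL that Guillaume mentions is Java-based. Purely to illustrate what a fluent routing DSL buys you, here is a toy sketch in Python. This is not Camel's actual API; the class and method names are invented:

```python
# Toy fluent routing DSL (invented names, NOT Camel's actual API).
# A route consumes messages, filters them, optionally transforms
# them, and delivers the survivors to a sink.
class RouteBuilder:
    def __init__(self):
        self.steps = []
        self.sink = None

    def filter(self, predicate):
        self.steps.append(("filter", predicate))
        return self  # returning self is what makes the DSL "fluent"

    def transform(self, fn):
        self.steps.append(("transform", fn))
        return self

    def to(self, sink):
        self.sink = sink
        return self

    def run(self, messages):
        for msg in messages:
            dropped = False
            for kind, fn in self.steps:
                if kind == "filter" and not fn(msg):
                    dropped = True
                    break
                if kind == "transform":
                    msg = fn(msg)
            if not dropped:
                self.sink.append(msg)

inbox = [{"type": "order", "qty": 3}, {"type": "spam"}]
orders = []
route = RouteBuilder().filter(lambda m: m.get("type") == "order").to(orders)
route.run(inbox)
print(orders)  # [{'type': 'order', 'qty': 3}]
```

The appeal is that the route reads as a one-line statement of intent, while the plumbing stays out of sight.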

Gardner: Now, it's not always easy to determine the number of implementations, particularly in production, for open-source projects and code. It's a bit easier when you have a commercial vendor. You can track their sales or revenues and you have a sense of what the market is doing.

Do you have any insight into what's been going on, in a larger trend around these SOA open-source projects in terms of implementation volumes? Are we still in test, are people in pilot, or are we seeing a bit more. And, what trends are there around actual production implementation? I'll throw that to either one, Adrian or Guillaume.

Trenaman: I'm happy to chip in there. We've seen quite a lot of real-world use. Starting with ServiceMix, obviously: we have been using ServiceMix for some time with our customers, and we have seen it used and deployed in anger, if you will. What's interesting for me is the number of different kinds of users out there and the different markets it gets deployed in. We have had users in airline solutions and in retail, and extensive use in government situations as well.

We recently finished a project in mobile health, where we used ServiceMix to take information from a government health backbone, using HL7 formatted messages, and get that information onto the PDAs of the health-care officials like doctors and nurses. So this is a really, really interesting use case in the healthcare arena, where we’ve got ServiceMix in deployment.

It’s used in a number of cases as well for financial messaging. Recently, I was working with a customer, who hoped to use ServiceMix to route messages between central securities depositories, so they were using SWIFT messages over ServiceMix. We’re getting to see a really nice uptake of new users in new areas, but we also have lots of battle-hardened deployments now in production.

Gardner: One of the nice things about this trend towards adoption is that you often get more contributions back into the project. Maybe it would be good now to understand who is involved with Apache, who is really contributing, and who is filling out the feature sets and defining the requirements around ServiceMix. Guillaume, do you have any thoughts about who is really behind this in terms of the authoring and requirements?

From the community

Nodet: The main thing is that everything comes from the community at large. It’s mainly users asking how they can implement a given use case. Sometimes, we don't have everything set up to fulfill the use case in the easiest way. In such a case, we try to enhance ServiceMix to cover more use cases.

In terms of contributors, we have lots of people working for different companies. Most of them are IT companies who are working and implementing SOA architecture for one of their customers and they are using ServiceMix.

We have a number of individual contractors who do some consulting around ServiceMix and they are contributing back to the software. So, it's really a diverse community. Progress is, obviously, one of the big proponents of Apache ServiceMix. As you have said, we run our business using the FUSE family of projects.

So, it's really a very diverse community, with different people from different origins, from everywhere in the world. We have Italian guys, we have, obviously, US people, and we have a big community.

Gardner: The JBI specification has been quite central to ServiceMix. If you could, give us an update on what JBI, as its own spec, has been up to, and what that means for ServiceMix, and ultimately FUSE. Furthermore, let's get into some of the OSGi developments. It has really become hot pretty quickly in the market. So what's up with JBI and OSGi?

Nodet: The JBI specification has been out since the beginning of 2005. It defines an architecture for building ESBs in Java. The key concept is normalized exchanges. This means that you can deploy components in the JBI container, and all of these components will be able to work together without any problems, because they share a common understanding of how exchanges and messages between components are implemented. This is really a key point.

Anyone can grab a third party component from outside ServiceMix. There are a number of examples of components that exist, and you can grab such a component and deploy it in ServiceMix and it will just work.

That's really one of the main points behind the JBI specification. It's a Java-centric specification. I mean that the implementation has to be done in Java, but ServiceMix allows a lot of different clients from other technologies to jump onto the bus and exchange data with other components.

So one of the things we use for that is the STOMP protocol, which is a text-based messaging protocol. We have implementations in Ruby, Python, JavaScript, and lots of other languages that you can use to talk to the ServiceMix bus.
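Since STOMP is a plain-text protocol, its frames are easy to show. The sketch below builds a frame following STOMP's framing rules (command line, header lines, blank line, body, NUL terminator); the destination name is hypothetical:

```python
# Build a STOMP frame: COMMAND, headers, blank line, body, NUL byte.
def stomp_frame(command, headers, body=""):
    header_block = "".join(f"{k}:{v}\n" for k, v in headers.items())
    return f"{command}\n{header_block}\n{body}\x00"

send = stomp_frame("SEND", {"destination": "/queue/orders"}, "hello")
print(repr(send))  # 'SEND\ndestination:/queue/orders\n\nhello\x00'
```

Because the framing is this simple, any language that can open a socket can talk to the bus, which is why bindings exist across so many scripting languages.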

OSGi is a specification that is really old, about 10 years old at least. It was originally designed for embedded devices. During the past two years, we have seen a lot of traction in the enterprise market to push OSGi. The main thing is that the next major version of ServiceMix, which will be ServiceMix 4.0, is based on OSGi and reuses the OSGi benefits.

The main driver behind that was mainly to get around some weaknesses of the JBI specification mainly related to the JBI packaging and class loader architecture. OSGi is really a nice specification for that and we decided to use it for the next version of ServiceMix.

Gardner: Now, we tend to see a little bit of politics oftentimes in the market around specifications, standards, who supports them, whether there is a competing approach, and where that goes. We’ve seen a bit of that in the Java Community Process over the years. I wonder, Adrian, if you might be able to set the table, if you will, around where these specifications are and what some of the commercial interests are?

For example, I know that IBM is quite strongly behind OSGi, and Oracle has backed it to an extent as well. These guys, obviously, have quite a bit of clout in the market. Set the table on the vendors and the specification situation now.

Sticking with JBI 1.0

Trenaman: JBI is currently at version 1.0, or 1.1 actually. There is a JBI 2.0 expert group, and I believe they are working under JSR 312. So, I think there's work going on to advance that specification.

However, if you look at what the vendors are doing -- be it Sun, Progress, or Red Hat through JBoss -- I think the vendors are all sticking with JBI 1.0 at the moment, making customers successful with that version of the spec and in anticipation of a new version of the spec. But, I believe it’s quite quiet. Guillaume, is that correct, for 2.0?

Nodet: Yes. I am part of the 2.0 expert group for JBI and the activity has been quite low recently. One main driver behind JBI 2.0 is to refocus on what I explained is the key point of the JBI 1.0 specification, which is the concept of normalized exchanges and the normalized message router.

The goal of the JBI 2.0 expert group, I think, is to refocus on that and to make JBI play much more nicely with other specifications that are sometimes seen as competitors to JBI, like SCA, and also to play more nicely with OSGi, because ServiceMix is not the only JBI implementation moving toward OSGi. We want to be sure that everything aligns correctly.

Gardner: Just so listeners can understand, what is it about OSGi that is valuable or beneficial as a container in an architectural approach, when used in conjunction with the SOA architectural component?

Trenaman: OSGi is the state of the art in terms of deployment. It really is what we've all wanted for years. I've lost enough follicles on my head fixing class-path issues and that kind of class-path hell.

OSGi gives us a badly needed packaging system and a component-based modular deployment system for Java. It piles in some really neat features in terms of life cycle -- being able to start and shut down services, define dependencies between services and between deployment bundles, and also then to do versioning as well.

The ability to have multiple versions of the same service in the same JVM with no class-path conflicts is a massive success. What OSGi really does is clean up the air in terms of Java deployment and Java modularity. So, for me, it's an absolute no-brainer, and I have seen customers who have led the charge on this. This modular framework is not necessarily something that the industry is pushing on the consumers. The consumers are actually pulling us along.
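As a concrete illustration of the packaging and versioning described above, an OSGi bundle declares its identity, version, and versioned package dependencies in its manifest. The headers below are standard OSGi metadata; the bundle and package names are hypothetical:

```text
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.payments
Bundle-Version: 1.2.0
Export-Package: com.example.payments.api;version="1.2.0"
Import-Package: org.osgi.framework;version="[1.4,2.0)"
```

The version range on Import-Package is what lets the framework host multiple versions of a package side by side and wire each bundle to the version it asked for, which is exactly the no-conflict behavior described above.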

I have worked with customers who have been using OSGi for the last year-and-a-half or two years, and they are making great strides in terms of making their application architecture clean and modular and very easy and flexible to deploy. So, I’ve seen a lot of goodness come out of OSGi and the enterprise. You mentioned politics earlier on, Dana, and the politics for me are interesting on the number of levels.

Here is my take on it. The first level is on the OSGi core platform, and what you’ve got there is a number of players who are all, in some sense I guess, competing to get the de-facto standard implementation or reference implementation. I think Eclipse Equinox emerges as the winner. They are now strongly backed by IBM.

The key players

And in the Apache Software Foundation you've got Felix. One of the other key players would be Knopflerfish OSGi, which is really Makewave; they deliver Knopflerfish under a BSD-style license. So, we have some healthy competition there, but I guess in terms of feature build-out, Equinox seems to be the winner in that area.

That's one way of looking at it. The other thing is, if you look at your traditional app server vendors and what they are doing, IBM, Oracle, Red Hat, and Sun have all put OSGi, or are about to put OSGi, within their application servers. This is a massive movement.

I think it's interesting that OSGi is no longer a differentiator. It’s actually an important gatekeeper. You have to have it. This is a wave that the industry and that our customers are all riding, and I think they are very welcoming to it.

Politically, all of the app server vendors seem to be massively behind OSGi and supportive of it. The other area that maybe you alluded to is that, in the broader Java community, there's been a debate that's gone on for some time now about JSR 277, which is the Java Community Process attempt at Java modules. The scene there is that JSR 277 overlaps massively with what OSGi intends to achieve, or rather has already achieved.

That starts getting messy all over again, because Java 7.0 is slated to include JSR 277. So the future of Java seems to have hooked into this Java module specification, rather than taking what would be the sensible choice, which would be to follow an OSGi-based model, or at least to passionately embrace OSGi and weave it in a very nice way into JSR 277.

So, there is still some distance to go there on that debate over which one actually gets accepted and gets embraced by the community. I think the happiest conclusion for that is where JSR 277 really does embrace what OSGi has done, and actually, in a sense, builds support into the Java language for OSGi.

Gardner: Clearly, the momentum around OSGi has been substantial. I’ve been amazed at how far this has come so quickly.

Trenaman: Exactly.

Gardner: Now, IONA, now part of Progress Software, is in this not just for "peace on earth and good will toward men." With the latest FUSE version being 4.0, you have certification, support, and enterprise-ready service value around the ServiceMix core. Is there something about OSGi that helps Progress in delivering this to market, given the modularity and the better control and management aspects? I'm thinking that if I'm certifying these and making them enterprise-ready, OSGi actually helps me. Is that correct?

It's a community issue

Trenaman: My perspective on that would be that embracing of OSGi in FUSE is a community issue. It’s the community that's driven that and that's a part of ServiceMix. So, this is something that we in Progress now are quite happy to embrace and then take into FUSE.

For me, what OSGi gives us is clearly a much better plug-in framework, into which we can drop value-added services and which we can extend. I think the OSGi framework is great for that, as well as in terms of management, maybe moving toward grid computing. The capabilities we get from OSGi allow us to be far more dynamic in the way we provision services.

Gardner: Great. Now, you mentioned the big “grid” word. A lot is being talked about these days in cloud computing, and there’s an interesting intersection here between open-source early adopters and the very technology savvy providers or companies and the cloud phenomenon.

We’ve seen some quite successful cloud implementations at such organizations as Google, Yahoo!, and Amazon, and we’re starting to see more with chat in the market from Microsoft and IBM that they are going to get into this as well.

These are the organizations that are looking for control, the ability to extend code and “roll their own.” That's where their value add is. What's the intersection between SOA, open-source infrastructure, and these cloud implementations? Then, we’ll talk about where these clouds might go in terms of enterprises themselves. Who wants to take the high view on the cloud and open-source SOA discussion?

Trenaman: A lot of SOA is down to simply "Good Design 101." The separation of the interface from the implementation is absolutely key, and then location independence, as well. You know, being able to access a service of some kind and actually not really care exactly where that is on the cloud, so that the whole infrastructure behind the service is transparent. You do not get to see it.

SOA brings some very nice concepts in terms of contract-first design and standard-based specification of interfaces, be they using WSDL or just plain old XML and REST -- or even XML and JMS.

The fact that we can now define in a well-understood way what these services are allows us to get data into and out of the cloud in a standardized way. I think that's massively important. It's one of the key things that SOA brings to the cloud.
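As a hedged illustration of the contract-first style just described, a service's interface can be pinned down in WSDL before any implementation exists, and both provider and consumer then code against that contract. All names here are invented for the example:

```xml
<!-- Hypothetical contract-first interface definition (WSDL 1.1) -->
<definitions name="QuoteService"
    targetNamespace="http://example.com/quotes"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://example.com/quotes"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <message name="getQuoteRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="getQuoteResponse">
    <part name="price" type="xsd:double"/>
  </message>
  <!-- The abstract interface: what the service does, not where it runs -->
  <portType name="QuotePortType">
    <operation name="getQuote">
      <input message="tns:getQuoteRequest"/>
      <output message="tns:getQuoteResponse"/>
    </operation>
  </portType>
</definitions>
```

Binding and address details are deliberately left out; location independence means this same portType could later be bound to SOAP over HTTP, to JMS, or to something else entirely.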

What open source brings to the cloud, apart from quality software against which to build massively distributed systems, is maybe a business model or a deployment model that actually suits the cloud.

I think of the traditional software licensing models for closed source where you are charging per CPU. When you look at massive cloud deployments with virtual machines on many different physical hardware boxes, those models just don't seem to work.

Gardner: A great deal of virtualization is taking place in these cloud infrastructures.

A natural approach

Trenaman: I think open source becomes a very natural and desirable approach in terms of the technologies that you are going to use in terms of accessing the cloud and actually implementing services on the cloud. Then, in order to get those services there in the first place, SOA is pivotal. The best practices and designs that we got from the years we have been doing SOA certainly come into play there.

Gardner: Let's move into this notion of a private cloud, which also requires us to understand a hybrid, or managing what takes place within a private, on-premises cloud infrastructure -- and then some of these other available services from other large consumer-facing and business-facing cloud providers.

Vendors and, in many cases, community development organizations are starting to salivate over this opportunity to provide the software, services, and support in helping enterprises create that more efficient, high availability, much more creative utilization range incumbent in a well-designed cloud infrastructure or grid or utility infrastructure.

Trenaman: Sure.

Gardner: It seems unlikely that an organization creating one of these clouds is going to go out and just buy it out of the box. It seems much more likely that, at least for the early adoption stages, this is going to be a great opportunity to be exerting your own special sauce as an internal IT organization, well versed in open-source community development projects and then delivering services back to your employees and your customers and your business partners in such a way that you can really reduce your total cost, gain agility, and gain more control.

Let's go to Guillaume. How do you see ServiceMix, in particular, playing in this movement, now that we are just starting to see the opening innings of private cloud infrastructure?

Nodet: ServiceMix has long been a way that you can distribute your SOA artifacts. ServiceMix is an ESB and by nature, it can be distributed, so it's really easy to start several instances of ServiceMix and make them seamlessly talk together in a high availability way.

The thing that you do not really see yet is all the management and all the monitoring that is needed when you deploy in such an architecture. So, ServiceMix can readily be used to provide the core infrastructure.

ServiceMix itself does not aim at providing all the management tools that you could find from either commercial vendors or even open-source. So, on this particular topic, ServiceMix, backed by Progress, is bringing a lot of value to our customers. Progress now has the ability to provide such software.

Gardner: So, Progress has had quite a long history, several decades, in bringing enterprise development and deployment strategies, platforms, tools, a full solution. This seems to be a pretty good heritage combined with what community development can offer in starting to craft some of these solutions for private clouds and also to manage the boundaries, which I think is essential.

I can see an ESB really taking on a significantly larger role in managing the boundaries between and among different cloud implementations for integration, data portability, and transactional integration. Adrian, anything further to add to that?

Dynamic provisioning

Trenaman: Certainly, you could always see the ESBs being sort of on the periphery of the cloud, getting data in and out. That's a clear use case. There is something a little sweeter, though, about ServiceMix, particularly ServiceMix 4, because it's absolutely geared for dynamic provisioning.

You can imagine having an instance of ServiceMix 4 that you know is maybe just an image that you are running on several virtual machines. The first thing it does is contact a grid controller and says, “Well, okay, what bundles do you want me to deploy?” That means we can actually have the grid controller farming out particular applications to the containers that are available.

If a container goes down, then the grid controller will restart applications or bundles on different computing resources. With OSGi at the core of ServiceMix, at the core of the ESB, that’s a step forward in terms of dynamic provisioning and really an autonomous computing infrastructure.
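The failover behavior described here can be caricatured in a few lines of plain Java. This is a toy model only — the class and method names are invented, and none of this is the actual ServiceMix or OSGi API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy "grid controller": tracks which bundles run on which container,
// and reassigns a failed container's bundles to a surviving one.
public class GridController {
    private final Map<String, List<String>> assignments = new HashMap<>();

    // A fresh container registers and is told what to deploy.
    public void register(String container, List<String> bundles) {
        assignments.put(container, new ArrayList<>(bundles));
    }

    // On failure, orphaned bundles are restarted on a fallback container.
    public void containerDown(String failed, String fallback) {
        List<String> orphaned = assignments.remove(failed);
        if (orphaned != null) {
            assignments.computeIfAbsent(fallback, k -> new ArrayList<>())
                       .addAll(orphaned);
        }
    }

    public List<String> bundlesOn(String container) {
        return assignments.getOrDefault(container, Collections.emptyList());
    }
}
```

In a real deployment the hard parts are exactly what this sketch omits: detecting the failure, moving state, and performing the redeployment through OSGi's bundle lifecycle.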

Nodet: Another thing I want to add about ServiceMix 4, complementing what Adrian just said, is that ServiceMix has split into several sub-projects. One of them is ServiceMix Kernel, an OSGi-enhanced runtime that can be used for provisioning applications, and this container is able to deploy virtually any kind of artifact. It can support Web applications, and it can support JBI artifacts, because the JBI container reuses it, but you can really deploy anything that you want.

So, this piece of software can really be leveraged in a cloud infrastructure by deploying virtually any application that you want. It could be plain Web services without an ESB, if you don’t have such a need. So, it's really pervasive.

Gardner: We were quite early in this whole definition of what private cloud would or wouldn't be. Even the word “cloud,” of course, is quite nebulous nowadays.

I do see a huge opportunity here, given also the economic pressures that many organizations are going to be facing in the coming years. It's really essential to do more with less. As we move toward these cloud implementations, you certainly want to be able to recognize that it isn't defined. It's a work in progress, and having agility, flexibility, visibility into the code, understanding the origin for the code, and the licensing and so forth, I think is extremely important.

Trenaman: It’s massively important for anyone building the cloud, particularly a public cloud. That has got to be watched with total care.

Gardner: We’ve been talking about SOA infrastructure, getting some updates and refreshers on the ServiceMix and Apache Foundation approaches, and talking to some community and thought leaders. We've also learned a little bit more about Progress Software and FUSE 4.0.

I’m very interested and excited about these cloud opportunities for developers to use as they already are. The uptake in Amazon Web Services for development activities and test-and-deploy scenarios and performance testing has been astonishing.

Microsoft is going to be right behind them with an appeal to developers to build on a Microsoft cloud. These are going to be ongoing and interesting, and managing them is going to be critical to their success. A key differentiator from one enterprise to another is how well they can take advantage of these and manage the boundaries.

I want to thank our participants. We have been joined by Guillaume Nodet. He is the software architect at Progress Software and vice president of Apache ServiceMix. Thank you, Guillaume, we really appreciate your input.

Nodet: No problem. I am glad that we have been able to do this.

Gardner: We have also been joined by Adrian Trenaman. He is a distinguished consultant at Progress Software. Great to have you with us, Adrian.

Trenaman: It's a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. I want to thank our sponsor for today's podcast, Progress Software. We’re coming to you through the BriefingsDirect Network. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Transcript of a BriefingsDirect podcast with Guillaume Nodet and Adrian Trenaman of Progress Software on directions and trends in SOA and open source. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, June 05, 2008

Apache CXF: Where it's Been and What the Future Holds for Web Services Frameworks

Transcript of BriefingsDirect podcast on IONA Apache CXF and open-source Web services frameworks.

Listen to the podcast. Sponsor: IONA Technologies

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, a sponsored podcast discussion about Apache CXF, an open-source Web services framework that recently emerged from incubation into a full project. We are going to be discussing where CXF is, what are the next steps, how it is being used, what the market is accepting from open-source Web services and service-oriented architecture (SOA) infrastructure, and then, lastly, a road map of where CXF might be headed next.

Joining us to help us understand more about CXF, is Dan Kulp, a principal engineer who has been deeply involved with CXF for a number of years. He works at IONA Technologies. Welcome back to the show, Dan.

Dan Kulp: Thank you, it's good to be here.

Gardner: We are also joined by Raven Zachary, the open-source research director at The 451 Group. Welcome to the show, Raven.

Raven Zachary: Thank you.

Gardner: And we are joined by Benson Margulies, the CTO of Basis Technology. Welcome, Benson.

Benson Margulies: Thank you, good day.

Gardner: Let's start with you, Benson. Tell us a little bit about Basis Technology. I want to hear more about your company, because I understand you are a CXF user.

Margulies: Basis is about a 50-person company in what we call linguistic technologies. We build software components that do things like make high-quality, full-text search possible in languages such as Arabic and Chinese -- or tag names in text, which is part of information retrieval.

We have customers in the commercial and government spaces and we wound up getting interested in CXF for two different reasons. One is that some of our customers have been asking us over time to provide some of our components for integration into a SOA, rather than through a direct application programming interface (API), or some sort of chewing gum and baling wire approach. So, we were looking for a friendly framework for this purpose, and CXF proved to be such.

The other reason is that, for our own internal purposes, we had developed a code generator that could read a Web-service description (WSDL) file and produce a client for it in JavaScript that could be loaded into a browser and tied back to a Web service. Having built it, we suddenly felt that we would like some help maintaining it. We went looking for an open-source framework to which we could contribute it, and CXF proved to be a friendly place for that too.

Over a period of time, to make a long story short, I wound up as a CXF committer. So, Basis is now both a corporate user of CXF as a delivery vehicle for our product, and also I am a committer focused on this JavaScript stuff.

Gardner: Great. You used the word "friendly" a couple of times. Let's go to Raven Zachary. Raven, why do people who go to open-source code and projects view it as friendly? What's this "friendly" business?

Zachary: Well, there are different motivations for participating in an open-source community. Generally, when you look at why businesses participate, they have a common problem among a peer set. It could be an underlying technology that they don't consider strategic. There are benefits and strength in numbers here, where companies pool together resources to work together on a common problem.

I think that individual developers see it as a chance to do creative problem-solving in off hours and to be involved in a team project. Maybe they want to build up their expertise in another area.

In the case of CXF, it certainly has been driven heavily by IONA and its acquisition of LogicBlaze, but you had other individuals and companies involved -- Red Hat, BEA, folks from Amazon and IBM, and Benson from Basis, who is here talking about his participation. The value here is many different commercial entities coming together to solve a common set of problems.

Gardner: Let's go to Dan Kulp. Dan, tell us a little bit about CXF and its current iteration. You emerged from incubation not that long ago. Why don't you give our listeners, for those who are not familiar with it, a little bit of the lineage, the history of how CXF came together, and a little bit about the current state of affairs in terms of its Apache condition or position?

Kulp: CXF was basically a merger of the Celtix project that we had at ObjectWeb, which was IONA sponsored. We had a lot of IONA engineers producing a framework there. There was also the XFire project at Codehaus. Both of these projects were thinking about doing a 2.0 version, and there was a lot of overlap between the two. So, there was a decision between the two communities to pool their resources and produce a better 2.0 version of both XFire and Celtix.

As part of that whole process of merging the communities, we decided to take it to Apache and work with the Apache communities as a well-respected open-source community.

So that's the long-term history of CXF. We spent about 20 months in the incubator at Apache. The incubator is where all the new projects come in. There are a couple of main points there, and one is the legal vetting of the code. Apache has very strong requirements about making sure that all of the code is properly licensed and compatible with the Apache license, and that the people contributing it have done all of the legal work to make sure the code meets those requirements. That's to protect the users of the Apache projects, which, from a company and user standpoint, is very important.

A lot of other projects don't do that type of legal vetting, so there are always iffy statements around them. That was one important thing. Another very important part of the Apache incubator is building the community. One of the things they like to make sure is that any project that comes out of the incubator has a very diverse community.

There are people representing a wide range of companies with a wide range of requirements, and the idea is to make sure that the community is stable for the long term. If one company should suddenly be acquired by another company, go bankrupt and out of business, or whatever, the community is going to still be there in a healthy state. This is so that you can know that the Apache project is a long-term thing, not a short-term one.

Gardner: Could I pause there, and could you tell us who are the major contributors involved with CXF at this point?

Kulp: IONA is still heavily involved, as is Basis Technology, a couple of IBMers, as was mentioned earlier, and a couple of Red Hat people. There is one person who is now working for eBay who is contributing things, and there are a few people who I don't even know what company they work for. And that's a good thing. I don't really need to know. They have a lot of very good ideas, they are doing a lot of great work, and that's what's important about the community. It's not really that important, as long as the people are there participating.

Gardner: Okay. Things move quickly in this business. I wonder if any of our panelists recognize any shifts in the marketplace that have changed what may have been the optimum requirement set for a fully open-source Web-services framework from, say, two or three years ago, when these projects came together. What has shifted in the market? Does anyone have some thoughts on that?

Margulies: Well, Dan and Glen, who is another one of our contributors, and I were having lunch today, and we were discussing the shift in direction from the old JAX-RPC framework to JAX-WS/JAXB, the current generation of SOA standards. That has very much become the driving factor behind the kits.

CXF gets a lot of attention because it is a full open-source framework that is completely committed to those standards and gives easy-to-use, relatively speaking, support for them. As in many other areas, it focuses on what people in the outside world seem to want to use the kit for, as opposed to some theoretical idea of our own about what they ought to want to use it for.

Gardner: Thank you, Benson. Anyone else?

Kulp: Yes, one of the big things that comes to mind when this question comes up is the whole "code first" mentality. Several years ago, in order to do Web services, you had to know a lot about WSDL and extensible markup language (XML) schema. You had to know a lot of XMLisms. When you started talking about interop with other Web-services stacks, it was really a big deal, because these toolkits exposed all of this raw stuff to you.

Apache CXF takes a fairly different approach, making the code-first aspect a primary thing that you can think about. So, a lot of more junior-level developers can pick up and start working with Web services very quickly and very easily, without having to learn a lot of these more technical details.
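A small, hedged sketch of the code-first idea: you start from a plain Java class, and the toolkit derives the WSDL and schema from it rather than the other way around. The class below is ordinary, runnable Java; the CXF publish step is shown only as a comment, since the exact factory API should be checked against the CXF documentation rather than taken from this sketch:

```java
// Code-first: the service starts life as a plain Java class.
// No hand-written WSDL or XML schema is required up front.
public class GreeterService {

    public String greet(String name) {
        return "Hello, " + name;
    }

    // With a code-first stack such as CXF, publishing is roughly
    // (hedged sketch, not verified against a specific CXF version):
    //
    //   JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
    //   factory.setServiceClass(GreeterService.class);
    //   factory.setAddress("http://localhost:9000/greeter");
    //   factory.create();
    //
    // The stack then generates the WSDL contract from this class.
}
```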

Gardner: Now, SOA is a concept, a methodology, and an approach to computing, but there are a number of different infrastructure components that come together in various flexible ways, depending on the end user's concepts and direction. Tell us a little bit about how CXF fits into this, Dan, within other SOA infrastructure projects, like ServiceMix, Camel, ActiveMQ. Give us the larger SOA view, the role CXF plays in that. Then, I am going to ask you how that relates to IONA and FUSE?

Kulp: Obviously, nowadays, if you are doing any type of SOA work, you really need some sort of Web-services stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what Web services bring is the ability to go outside of your little container and talk to other services that are available, whether within your company or maybe with a business partner.

A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable service stack that they are using to meet their Web-services needs.

Gardner: Alright, so it fits into other Apache projects and code infrastructure bases, but as you say "plug-in-able," this probably makes it quite relevant and interesting for a lot of other users where Web-services stack is required. Can you name a couple of likely scenarios for that?

Kulp: It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. Obviously, with open-source people, anybody can just kind of grab it and start using it without really telling you, "Hey, I'm using this," until suddenly they come to you one day saying, "Hey, isn't this neat?"

One of the examples of that is Groovy Web service. Groovy is another dynamic language built in Java that allows you to do dynamic things. I'm not a big Groovy user, but they actually had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.

They liked what they saw, and they hit a few bugs, which was expected, but they contributed fixes back to the CXF community. I kept getting bug reports from people and was wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that kind of use develop.

Margulies: I should point out that there has been a tendency in some of the older Web-service platforms to make the platform into a rather heavy, monolithic item. There's a presumption that what you do for a living with a Web service is stand up a service on the Web in one place. One of CXF's advantages shows when what you want to do is deliver to some third party a stack that they put up, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is unintrusive in that regard.

Gardner: And, just as a level-set reality check, over to Raven. Tell me a little bit about how this mix-and-match thing is working among and between the third parties, but also among and between commercial and open source, the infrastructure components.

Zachary: The whole Apache model is mix and match, and not only in the licensing scheme. The Apache license is a little easier for commercial vendors to digest, modify, and add to, compared to the GPL, but it's also the inherent nature of the underlying infrastructure technologies.

When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks that are bundled into an application. So, you would certainly see CXF being deployed alongside other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, and certainly the Apache Web Server are the instruments through which these services are exposed.

Gardner: Now, let's juxtapose this to the FUSE set. This is a commercially supported, certified, and tested SOA and Web-services component set. The FUSE services framework is derived from CXF. Dan, tell us a little bit about what is going on with FUSE and how has that now benefited from CXF moving from incubation into full Apache?

Kulp: As you mentioned, the FUSE services framework is basically a re-branded version of Apache CXF. If you go into a lot of these big customers, like banks or other major accounts, when they deploy an application, they want to have some level of support agreement with somebody, so that if a bug is found or a problem crops up, they can get somebody on the phone and get it fixed relatively quickly.

That's what the FUSE product line is basically all about. It's all open-source, and anybody can download and use the stuff, but you may not get the same level of support from the Apache community, as you do with the FUSE product.

The Apache communities are pretty much all volunteer-type people. Pretty much everybody is working on whatever their own agenda is, but they have their own expertise. So, they may not even have time, and they may be out on leave or on vacation or something like that. Getting the commercial-level of support from the Apache community can sometimes be a hard sell for a lot of these corporations, and that's why what FUSE really brings is a support agreement. You know that there is somebody there to call when there is a problem.

It's a two-way relationship. Obviously, if any of those customers come back with bugs and stuff, the IONA people will fix them and get them pushed into both Apache and FUSE. So, the bugs and stuff get fixed, but the other thing that IONA gets from this is that there's a lot of ideas in the Apache communities that we may not have thought of ourselves.

One good example of this is that JavaScript thing that Benson mentioned earlier. That's not something IONA really would have thought of at the beginning, but this is something that we can give back to our customers saying, "Hey, isn't this a neat idea?" So, there are a lot of benefits coming from the other people that aren't IONA in these communities actually providing new features and new ideas for the IONA customers.

Gardner: Okay, you came off incubation in April, is that correct?

Kulp: Yes.

Gardner: Tell us about what's going on now. What's the next step, now that it's out of that. Is this sort of a maintenance period, and when will we start to think about seeing additional requirements and functionality coming in?

Kulp: There are two parts to that question. Right as we graduated, we were ready to push out 2.1. Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one piece. Whole new tooling was another, along with a CORBA binding and some REST-based APIs. So, 2.1 was a major step forward compared to the 2.0 version that was released last August, I believe.

Right now, there are basically two tracks of stuff going on. There are obviously a lot of bug fixes. One of the things about graduating is that there are a lot of people who don't really understand what the incubator is about, and so they weren't looking in the incubator.

The incubator has nothing to do with the quality of the code. It has more to do with the state of the community, but people see the word "incubator" and just say, "No, I'm not going to touch that." Now that we've graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users, and a lot of people are submitting ideas. So, there is a certain track of people just trying to get bug fixes in and get support in place for those new users.

Gardner: I am impressed that you say "bug fixes" and not "refinement." That's very upfront of you.

Kulp: Well, a lot of it is refinement too, and, to be honest, there is a bit of documentation refinement that is going on as well, because with new people using it, there are new expectations. Their old toolkits may have done things one way, and the documentation may not reflect well enough, "Okay, if you did it this way in the old toolkit, this is how you do the same thing in CXF."

Margulies: If I could pipe up with a sociological issue here with open source: it's a lot easier to motivate someone to run in, diagnose a defect or a missing feature in the code, and make the fix than to find the additional motivation to go over to the "doc" side and think through, "How the heck are we going to explain this, and who needs to have it explained to them?" We're really lucky, in fact. We have at least one person in the community who focuses almost entirely on improving the doc, as opposed to the code.

Gardner: Okay. So, we're into this maturity move. We've got a lot more people poking at it and using it. We're going to benefit from that community involvement. We've mentioned a couple of things that struck me a little earlier -- the Groovy experience and JavaScript. There's a perception among many whom I've talked to that Web services are interesting, but there's a certain interest level too in moving into more dynamic languages, the use of RESTful interfaces for getting out to clients, and thinking about Web services in a broader context.

So, first let's go to Benson. Tell us why this JavaScript element was important to you, and where you think the mindset is in the field around Web services and the traditional WS-* specifications and standards?

Margulies: We went here originally, because while we built these components to go into the middle of things, we have to show them off to people, who just want to see the naked functionality. So, we built a certain amount of demo functionality as Web applications, with things from Web pages. And, the whole staff was saying, "Oh gosh, first we have to write a JSP page, and then we have to write some Beans, and then we have to package it all up, and then we have to deploy it."

It got really tiresome. So we went looking for a much thinner technology for taking our core functionality and making it visible. It dawned on us that perhaps you could just call a Web service from a browser.

Historically, there's been a mentality in the broad community that you "couldn't possibly do that." "Those Web-service XML messages, they are so complicated." "Oh, we could never do that." And several of the dynamic-language SOAP or Web-service kits that have shown up from time to time in the community were really weak. They barely worked, because they targeted very old versions of the Web-services universe. As Web-services standards have moved into stronger XML, they got left behind.

So, not knowing any better, we went ahead and built a code generator for JavaScript that could actually talk to a JAX-WS Web service, and I think that's an important direction for things to go. REST is a great thing. It allows very simple clients to get some data into and out of Web services, but people are building really big, complicated applications in dynamic languages these days, things like Ruby. For Web services to succeed in that environment, we need more of what we just did with the JavaScript. We need first-class citizenship for dynamic languages as clients, and even servers, of Web services.

Gardner: Let's take it over to Raven. Tell us, from the analyst perspective, what you see going on mentality wise and mindshare wise with Web-services specs, and do you think that there's a sort of "match made in heaven" here between something like CXF and some of these dynamic languages?

Zachary: Well, look back on the history of CXF: the merger of two initiatives -- Celtix from IONA and XFire from Codehaus -- then the last few years spent in the incubator, and now coming out of the incubator in April. Bringing together those two initiatives is very telling; it produced a stronger project on the basis of two existing open-source efforts.

I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.

I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than just being a specific technology implementation of Web services. That's the approach you're going to see succeed among open-source projects in this space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.

Gardner: Dan, you've probably heard this. Some of the folks who are doing more development with dynamic languages and who are trying to move toward light-weight webby applications have kind of an attitude going on with Web-services specs. Have you noticed that and what do you think is up with that? Has that perhaps prevented some of them from looking at CXF in evaluating it?

Kulp: Yeah, in a way, it has prevented them, but Web services are pretty much everywhere now. So, even though they may not really agree with some of the Web-services ideas, for their own user base to be able to grow, they have to start thinking about how to solve that problem, because the fact is that the services are there.

Now, going forward, REST is obviously a big word. So, whatever toolkit you're looking at, you need to be able to talk REST as well, and CXF is doing a bit there. Going back the other way, there's CORBA stuff that needs to be talked to. With CXF, you don't just get the SOAP part of SOA; you get these additional technologies that can help you solve a wider range of problems. That's very important to certain people, especially if you're trying to grow a user base.

Gardner: Alright. You've obviously benefited -- the community has benefited -- from what Benson and Basis Technology contributed with JavaScript. I assume you'd welcome committers who could extend that across more languages and more technologies?

Kulp: Oh, definitely. One of the nicest things about working in Apache projects is that it's an ongoing effort to try to keep the community growing and getting new ideas. As you get more people in, they have different viewpoints, different experiences, and all that can contribute to producing new ideas and new technologies, and making it easier to solve a different set of problems.

I always encourage people: if they're looking in the CXF code and they hit a bug, it's great if we see them submit a patch for it, because that shows they're actually digging in there. Eventually, they may say, "Okay, I kind of like how you did that, but wouldn't it be neat if you could just do this?" Then maybe they submit some ideas around that and become a committer. It's always a great thing to see that go forward.

Gardner: Let's go around the table one last time and try to predict the future when it comes to open-source Apache projects, this webby application environment, and the larger undertaking of SOA. Dan, any prophecies about what we might expect in the CXF community over, say, the next 12 months?

Kulp: Obviously, there's going to be this ongoing track of refinements and fixes. One of the nice things about the CXF community is that we're very committed to supporting our existing users and making sure that any bugs they encounter get fixed in a relatively timely manner. CXF has a very good history of doing frequent patch releases to get fixes out there. That's an ongoing thing that should remain in place, and it's a benefit to the community and to the users.

Beyond that, there's a whole bunch of other ideas that we're working on and fleshing out. On the code-first front that I mentioned earlier, we have a bunch of other ideas about how to make code-first even better.

With certain toolkits, you have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations in your code, or something like that, to accomplish some of that. We're going to be moving some of those ideas forward.

There's also a whole bunch of Web-services standards, such as WS-I and WS-SecureConversation, that we don't support today but are going to be working on to make sure they are supported. As customers or users start demanding other WS technologies, we'll consider those as well. Obviously, if new people come along, they'll have other great ideas, and we'd welcome those too.

Gardner: Alright. Raven Zachary, what do you see as some of the trends we should expect in open-source infrastructure, particularly around SOA and Web-services interoperability, over, say, the next 12 months?

Zachary: We've had, for the last decade or so, a number of very successful open-source infrastructure initiatives. Certainly, the Apache Web server, Linux as an operating system, and the application middleware stack -- Tomcat, Geronimo, JBoss -- have done very well. Open source has been a great opportunity for these technologies to advance, and we're still going to see commercial innovation in the space. But I think the majority of the software infrastructure will be based on open standards and open source over time, and then you'll see commercialization occur around the services side.

We're just starting to see the emergence of open-source Web services to a large extent and I think you're going to see projects coming out of the Apache Software Foundation leading that charge as other areas of the software infrastructure have been filled out.

When you look at growth opportunities: back in 2001, the JBoss app server had single-digit market share compared to the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, it went from single-digit market share to being the number-one deployed Java app server in the market. So it doesn't take much time for a technology like CXF to capture the market opportunity.

So, watch this space. I think this technology and other technologies like it, have a very bright future.

Gardner: I was impressed. I wrote a blog recently about CXF emerging from incubation, and I got some really high numbers on it, which indicated significant interest.

Lastly, let's go to Benson at Basis Technology, as a user and a committer. How do you expect you'll be using something like CXF in your implementations over the next 12 months?

Margulies: Well, we're looking at a particular problem, which is coming up with a high-performance Web-service interface to some of our functions, where you put a document in and get some results out. That's quite challenging, because documents are sort of heavyweight, large objects, and the toolkits have not been wildly helpful on this.

So, I've scratched out some of the necessary services on CXF, and I expect to be digging deeper. The other thing I'd add, as a committer, is that one of the most important things we're going to see is a user support community.

Long before you get to the point where someone is a possible committer on the project, there's the fact that users help each other in using the product and the package, and that's a critical success factor. That community of people who read the mailing list just pitch in and help the newbies find their way from one end to the other.

Gardner: Well, great. Thank you so much. I think we've caught up with CXF, and we have quite a bit to look forward to over the coming quarters and months. I want to thank our panel. We've been joined by Dan Kulp, principal engineer at IONA Technologies; Raven Zachary, open source research director for The 451 Group; and Benson Margulies, the CTO at Basis Technology. Thanks, everyone.

Kulp: You're very welcome.

Zachary: Thank you.

Margulies: Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast on Apache CXF. Thanks and come back next time.

Listen to the podcast. Sponsor: IONA Technologies.

Transcript of BriefingsDirect podcast on IONA Apache CXF and open-source frameworks. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.