Tuesday, February 17, 2009

Cloud Computing, Enterprise Architecture Align to Make Each More Useful to Other, Say Experts

Transcript of a podcast with industry practitioners and thought leaders at The Open Group's Enterprise Cloud Computing Conference in San Diego.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you're listening to BriefingsDirect. Today, we welcome our listeners to a sponsored podcast discussion coming to you from The Open Group's Enterprise Cloud Computing Conference in San Diego, February, 2009.

Our topic for this podcast, part of a series on events and major topics at the conference, centers on cloud computing and its intersection with enterprise architecture. You might consider this a discussion about real-world cloud computing, because this subject has been often discussed across a wide variety of topics, with many different claims, and perhaps a large degree of hype.

We're going to be talking with a few folks who will bring cloud and its potential into alignment with what real enterprises do and will be expecting to do, in terms of savings and productivity in the coming years.

Here to help us sort through cloud computing and enterprise architecture are Lauren States, vice president in IBM's Software Group; Russ Daniels, vice president and CTO of Cloud Services Strategy at Hewlett-Packard (HP); and David Linthicum, founder of Blue Mountain Labs. Welcome to you all.

There's an early-adopter benefit in some technologies. I expect that that might be the case with cloud computing as well. But, in order for us to assert where cloud computing makes the most sense, I think it's important to establish what problem we're trying to solve.

Why don't we start with you, Dave? What are the IT problems that cloud computing is designed for, or is being hyped to solve?

Dave Linthicum: Thank you very much, Dana. Cloud computing is really about sharing resources. If you get down to the essence of the value of cloud computing, it's about the ability to leverage resources much more effectively than we did in the past. So, number one, it's really designed to simplify the architectures that we are dealing with.

Most enterprises out there have very complex, convoluted, and inefficient architectures. Cloud provides us with the ability to change those architectures as business needs change, and to expand and contract them as those needs require.

Gardner: What problem are we solving from your perspective, Russ?

Russ Daniels: Hi, Dana. For most enterprises today, most of what they are really interested in is exactly what was described. It's a question of, "How can I source infrastructure in a way that's more flexible, that allows me to move more quickly, and that allows me potentially to have more variable costs?"

Maybe I can provision internally for my average demand rather than peak demand, and then be able to take advantage of external capacity to handle those peaks more cost effectively.

We think all that's really quite important. There's something else that's going on that people tend to talk about as cloud, which has different implications and takes advantage of some of that same flexible infrastructure, but allows us to go after different problems.

Most enterprises today are trying to figure out, "How can I improve my efficiency? Rather than having capacity dedicated to each of the application workloads that I need to deliver to the business, can I flexibly bind those workloads to pools of resources, whether they are in my data center or in somebody else's?"
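[Editor's note: As a rough sketch of the average-versus-peak sourcing trade-off Daniels describes, consider the arithmetic below. Every figure is invented for illustration; the point is only the shape of the comparison.]

    # Hypothetical comparison: provision internally for peak demand, versus
    # provision for average demand and rent burst capacity for the peaks.
    PEAK_SERVERS = 100              # capacity needed at peak
    AVERAGE_SERVERS = 40            # capacity needed most of the time
    OWNED_COST_PER_SERVER = 250     # monthly all-in cost of an owned server
    BURST_COST_PER_SERVER = 400     # monthly-equivalent cost of rented capacity
    PEAK_FRACTION = 0.10            # fraction of the month spent at peak

    # Option A: provision internally for peak demand.
    cost_peak_provisioned = PEAK_SERVERS * OWNED_COST_PER_SERVER

    # Option B: provision internally for average demand and rent the
    # difference only while demand peaks.
    burst_servers = PEAK_SERVERS - AVERAGE_SERVERS
    cost_hybrid = (AVERAGE_SERVERS * OWNED_COST_PER_SERVER
                   + burst_servers * BURST_COST_PER_SERVER * PEAK_FRACTION)

    print(f"Provision for peak : ${cost_peak_provisioned:,.0f}/month")
    print(f"Average plus burst : ${cost_hybrid:,.0f}/month")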

Gardner: Okay. Let's rephrase the question slightly for you, Lauren. What business problems are we solving with cloud computing?

Agile response

Lauren States: Thank you very much, Dana. I agree with what both Dave and Russ said. I think the business problem that we're trying to solve is how we can make IT respond to the business in a more agile way. The opportunity that we have here is to think about how to industrialize IT and create an IT services supply chain.

The combination of the technologies available today, the approaches that we're using in the underlying architecture, plus our collective experience gives us a chance to use cloud computing to realize the value of IT to an organization. We can stop having these conversations about what additional cost you are bringing in, why IT is separate, and why it is such a burden, and really integrate IT with the business.

Gardner: As I mentioned, we also want to put this in the context of enterprise architecture. Organizations that see potential in the emerging cloud models -- that are looking at new ways to develop software, to source their services, and to decide where those services are located and deployed in their production facilities -- probably also need some preparation. Jumping in too soon might have a downside as well. Given that we are in a tough economy, economics is very much top of mind.

When it comes to enterprise architecture, what do you need to do or have in place, in order to put yourself at an advantage or be in a good position to take advantage of cloud? Let's start with you, Dave.

Linthicum: Number one, you need to assess your existing architecture. Cloud computing is not going to be a mechanism to fix architecture. It's a solution pattern within an architecture. So, you need to do a self-assessment of what's working and what's not working within your own enterprise, before you start tossing things outside of the firewall onto a platform in the cloud.

Number two, once you do that, you need to have a good data-level understanding, process-level understanding, and service-level understanding of the domain. Then, try to figure out exactly which processes, services, and information are good candidates for cloud computing.

One of the things I've found implementing this with my clients is that not everything is applicable to cloud computing. In fact, 50 percent of the applications that I look at are not good candidates for cloud. You need to consider that in the context of the hype.

Gardner: Lauren, from your perspective, what organizational management, technical underpinnings, and foundations might put you in a better position to leverage cloud?

States: Just building on what Dave said, I have a couple of thoughts. First, I completely agree that you have to have an aspirational view of where you are trying to go. And you have to have a good understanding of your current environment, including simple things like knowing all the things in your environment and their relationships to each other. Lay out the architecture and develop the roadmap and the steps that you need to take to achieve cloud computing.

The other aspect that's really important is the organizational governance and culture part of it, which is true for anything. It's particularly true for us in IT, because sometimes we see the promise of the technology, but we forget about people.

In clients I've been working with, there have been discussions around, "How does this affect operations? Can we change processes? What about the workflows? Will people accept the changes in their jobs? Will the organization be able to absorb the technology?"

Enterprise architecture is robust enough to combine not only the technology but also the business processes, the best practices, and the methodologies required to make this journey and take advantage of what the technology has to offer.

The right environment

Gardner: Let's flip the question over a little bit at this point and look at what would not be a good environment in which to embark on cloud. Is there something you should not do, or something that, if it's lacking, should make you leery of using clouds? Russ?

Daniels: It's very easy to start with technology and then try to view the technology itself as a solution. It's probably not the best place to start. It's a whole lot more useful if you start with the business concern. What are you trying to accomplish for the business? Then, select from the various models the best way to meet those kinds of needs.

When you think about the concept of, "I want to be able to get the economies of the cloud -- there is this new model that allows me to deliver compute capacity at much lower cost," we think that it's important to understand where those economics really come from and what underlies them. It's not simply that you can pay for infrastructure on demand, but it has a lot to do with the way the software workload itself is designed.

There's huge economic value to be had if the software can take advantage of horizontal scaling -- if you can add compute capacity easily in a commodity environment to meet demand, and then remove that capacity and use it for another purpose when the demand subsides.

This is a really important problem. We know how to do that well for certain workloads. Search is a great example. It scales horizontally very effectively. The reason is that search is pretty tolerant of stale data. If some of the information on some of the nodes is slightly out of date, it doesn't really matter. You'll still get the right answer.

If you look at other types of workloads, high degrees of transactionality are critical. When you take an item out of inventory, you really only get to do that once. When you try to scale those things horizontally, you have real issues with the possibility of a node failure or of a lock not being released. That then creates some nasty back-end operational process that has to be implemented correctly by your IT organization for everything to work.

It's the balance between what problems we are trying to solve and how well these particular architectural patterns match up to them. Every IT organization has to keep that in mind.
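[Editor's note: A minimal sketch of the inventory point Daniels makes. The item and request identifiers are hypothetical; a real system would rely on a database with atomic, conditional writes, which is exactly the part that is hard to scale horizontally.]

    # One unit left in stock; reserve it exactly once even if a caller retries
    # after a node failure.
    inventory = {"sku-123": 1}
    processed_requests = set()    # idempotency record, so retries are safe

    def reserve(sku: str, request_id: str) -> bool:
        if request_id in processed_requests:
            return True                  # already handled; the retry is a no-op
        if inventory.get(sku, 0) <= 0:
            return False                 # sold out
        # In this single-process sketch the check and decrement happen together;
        # spread across many nodes, they must be made atomic -- the hard part.
        inventory[sku] -= 1
        processed_requests.add(request_id)
        return True

    print(reserve("sku-123", "req-1"))   # True  -- unit reserved
    print(reserve("sku-123", "req-1"))   # True  -- retry, no double decrement
    print(reserve("sku-123", "req-2"))   # False -- inventory is empty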

Gardner: While there's been quite a bit of hype around cloud, there is also a fair amount of naysaying about it here at the TOGAF 9 launch and the practitioners conference for The Open Group.

I've spoken to several people who really don't have a lot of favorable impressions of cloud. They seem to think that this is a way of dodging the IT department and perhaps bringing more complication and a lack of governance, which could then spin out of control and make things even worse.

So, what are some best practices that we could establish at this early juncture of how to approach cloud and bring it into some alignment not only with the business, but with the existing IT services and infrastructure? I guess this is our architecture question. Dave?

Set the policy

Linthicum: The first thing you need to do is create, publish, and widely distribute a policy on cloud computing. Someone needs to figure out what it is, the value it's going to have for the particular enterprise, and the vision, strategy, or approach they need to leverage to get there.

The next thing you do is publish policies around cloud computing. Lots of my clients are building what I call rogue clouds. In other words, without any kind of sponsorship from the IT department, they're going out there to Google App Engine. They're building these huge Python applications and deploying them as a mechanism to solve some kind of a tactical business need that they have.

Well, they didn't factor in maintenance, and right now, they're going back to the IT group asking for forgiveness and trying to incorporate that application into the infrastructure. Of course, they don't do Python in IT. They have security issues around all kinds of things, and the application ends up going away. All that effort was for naught.

You need to work with your corporate infrastructure and you need to work under the domain of corporate governance. You need to understand the common policy and the common strategy that the corporation has and adhere to it. That's how you move to cloud computing.

Gardner: How do we know if companies are doing this right? Are there yet any established milestones or best practices? Clearly, with other technology adoptions, we've seen certain signals that say, "Aha, we're doing something wrong. We need to reevaluate."

Any ideas, Lauren, about how companies would know whether they are doing cloud properly? What should they be getting in return?

States: That's a great question, Dana. Let me take it from a couple of perspectives.

First, we've looked at our own IT transformation within IBM to try to discover which activities let us take out cost and reduce complexity. We feel that looking at the financial aspects helps drive an organization to a common goal.

In our company, we took $4 billion out of our IT infrastructure over the past five years, and that's part of our strategy for our common centralized functions. There's nothing like achieving a specific target to make an organization focus.

Our initial feeling is that you really have to get your arms around virtualization, so you can take out the capital expense and then have the real hard discussions around standardization.

You can reduce the complexity of the application portfolios, reduce the administration and support costs, and take a very serious look at your service management capability, so that you can get at the operations and implement the policies that you described, Dave, and continue to make progress.

I don't think that there's any completely done use-case out there that we can all look to and say, "Oh, that's what it looks like." It's starting to get clearer as we get more experienced. But, as I said, you need a specific target.

Our target was cost. Other organizations have other targets, like shared services or creation of new business models. You can get the whole organization clear and managed to that, and, as in our case, have some of these items be part of the executive compensation structure. Then, you have a better chance of achieving what the business is looking to do.

Gardner: I'm going to take the same question to you, Russ. What should companies be looking for if they do cloud properly? What are the returns?

Key questions

Daniels: This really starts with a couple of key questions. First, why do you have an IT function in your enterprise? Our answer is that you need to have someone responsible for sourcing and delivering services in a form that is consistent with the business's needs.

The cloud just represents one more sourcing opportunity. It’s one more way to get services, and you have to think of it in the context of the requirements that the business has for those services. What value do they represent, and then where is the cloud an appropriate way to realize those benefits? Where is it the best answer?

It starts with that. To be able to answer that question is a significant issue for enterprise architecture. It means you have to have a pretty good model of what the enterprise is about -- how does it work, what are the key processes, what are the key concerns? That picture, that design of the enterprise itself, helps you make better choices about the appropriate way to source and deliver services.

There's a particular class of services -- needs of the business -- where, when you try to address them with the traditional application-centric models, many of the projects are too expensive to start, or they tend to be so complex that they fail. Those are the ones where it's particularly worthwhile to consider, "Could I do these more effectively, with a higher value to the business and with better results, if I were to shift to a cloud-based approach rather than a traditional IT delivery model?"

It's really a question of whether there are things that the business needs that, every time we try to do them in the traditional way, fail, underdeliver, are too slow, or don't satisfy the real business needs. Those are the ones where it's worthwhile taking a look and saying, "What if we were to use cloud to do them?"

Gardner: Back to you, Dave. We've heard quite a bit about private clouds or on-premises clouds. On one hand, what's interesting about clouds is that you have a one-size-fits-all model: a common set of services or a common infrastructure. But a lot of companies are interested in customization and differentiation, and they also need to integrate with what's been running under the hood inside their organizations anyway.

Tell us how, in your practice, you see the role of a private cloud emerging, and particularly how that offsets the notion that it's all just one big common-denominator cloud.

Linthicum: The value of private clouds is that you can take what's best about cloud computing and implement it behind your firewall. So, you get around the control and security issues that people deal with -- and also the not-invented-here attitude out there.

The difficulty that people are running into right now is trying to figure out how to leverage cloud-computing environments when their existing architectures are so tightly coupled. They're coming to the conclusion that it's very difficult to do that. They can't use Amazon, Google, or other cloud-based services, because the information is so bound to the behaviors inside those systems and the systems are so tightly coupled. It's very difficult to decouple pieces of them and put them in the cloud. So, private clouds are an option for that.

You provide that on infrastructure that's shareable. You can expand it as you need it, and, as Russ mentioned earlier, give as many cycles as you need to particular applications that need them and take away the cycles from the applications that don't. Therefore, you end up with an architecture that's much more effective and efficient.

It also syncs up very well with the notion of service-oriented architecture (SOA) and is additive to an enterprise architecture and not necessarily negatively disruptive.

Gardner: Do you use your traditional enterprise architecture principles and skills when you construct your cloud on-premises or does it require something different?

It's enterprise architecture

Linthicum: You do. At the end of the day, it's enterprise architecture. So you're doing enterprise architecture and you're doing the sub-pattern of SOA. You're using cloud computing, specifically private clouds, as an end-state solution. So, it's nothing more than an instance of a solution in that sense.

That doesn't diminish its value, but you get to it through requirements, planning, governance -- all the things that are around enterprise architecture -- and then you get to the end state. Cloud computing is in the arsenal of technology you have to solve your problem, and that's how you leverage it.

Gardner: Lauren, in your presentation earlier today, you described some economic benefits that IBM is enjoying, or beginning to enjoy, as a result of some cloud activities. Tell us about the return-on-investment (ROI) equation. How substantial is it, and is it so enticing, particularly in today's tough economy where every dollar counts, that companies should be moving toward this cloud model quickly?

States: The ROI analysis we've done so far for one of our internal clouds -- our technology adoption program, which provides compute resources and services to our technical community so that they can innovate -- has shown unbelievable returns: an 83 percent reduction in cost and a payback of less than 90 days.

We're now calibrating this with other clients who are typically starting with their application test and development workloads, which are good environments because there is a lot of efficiency to be had there. They can experiment with elasticity of capacity, and it's not production, so it doesn't carry the same risk.

Gardner: Let's just unpack those numbers a little bit. Are you talking about an on-premises cloud or grid that IBM has put together? Or is this leveraging outside third parties -- a hybrid? How were you able to achieve those very impressive results?

States: This is an on-premises cloud. It’s at our data center in Southbury, Conn. There are three major levers for cost. First was virtualization. They virtualized the infrastructure. So, they cut down their hardware, software, and facilities cost.

They were able to put in significant automation, particularly around self-service requests for resources. We took out quite a bit of labor through automation, and that was what gave us the substantial savings -- particularly the labor cost, going from roughly 14 or 15 administrators down to two or three. That's where we saved the cost.
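[Editor's note: A back-of-the-envelope payback calculation in the spirit of the figures States cites -- an 83 percent cost reduction, a sub-90-day payback, and roughly 14 or 15 administrators reduced to two or three. The dollar amounts below are hypothetical.]

    upfront_investment = 500_000    # one-time cost to build the internal cloud
    monthly_cost_before = 700_000   # labor + hardware + facilities beforehand
    reduction = 0.83                # 83 percent reduction in running cost

    monthly_savings = monthly_cost_before * reduction
    payback_days = upfront_investment / monthly_savings * 30

    print(f"Monthly savings : ${monthly_savings:,.0f}")
    print(f"Payback period  : {payback_days:.0f} days")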

Gardner: Russ Daniels, we have heard quite a bit from HP about transformation in IT, modernization, and consolidation. Do you see cloud as yet another facet of the larger topic, which is really IT transformation, and how big a piece of IT transformation will cloud end up being?

Daniels: It's very easy to get so excited about technologies that you forget about the fundamental challenges that every business faces around change management, this concept of transformation.

Change management

Ultimately, if you want an organization to do something different than what it does, you have to take on the real work involved in that change management, getting people comfortable with doing things differently, moving out of their current comfort zones or current competencies, and learning new skills and new ways to do things. So, yeah, we think that that's a major component.

We think about these kinds of applications as taking advantage of what is sometimes called a private cloud -- what we tend to think of as an internal infrastructure utility. What we've discovered is that change-management concerns -- getting people comfortable that their workloads will be adequately secure and that their needs will be met when services are delivered in shared form -- have been a real challenge.

A lot of times the adoption of these technologies is slowed by the business' concern that they are going to end up at the end of the queue, rather than getting their fair share.

As you think about all of these opportunities, you have to source and deliver these services. It's critical that you build the right economic models and understand the trade-offs effectively.

If you have an internal shared capacity, you still, as a business, are taking on all of the fixed costs associated with the operation. It's different than if some third party is handling those fixed costs and you're only paying variable costs.

It's also true, though, that many times the least expensive way to do it is to do it yourself, internally. If you use a car only 20 days a year, renting can be a real cost saver. If you use a car every day, it's typically better to just go ahead and buy the car and take on the maintenance responsibilities, the insurance cost, and so on yourself, because when you use the car a lot over the course of the year, the costs amortize much more effectively.
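[Editor's note: The rent-versus-buy breakeven behind Daniels' car analogy, mapped to a unit of compute capacity. All prices below are invented for illustration.]

    owned_cost_per_year = 3_000   # purchase amortization + power + maintenance
    rental_cost_per_day = 25      # on-demand price for equivalent capacity

    breakeven_days = owned_cost_per_year / rental_cost_per_day
    print(f"Renting is cheaper below {breakeven_days:.0f} days of use per year")

    for days_used in (20, 120, 365):
        rent = days_used * rental_cost_per_day
        better = "rent" if rent < owned_cost_per_year else "own"
        print(f"{days_used:3d} days/year -> rent ${rent:,}, own ${owned_cost_per_year:,}: {better}")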

Gardner: How does a cloud approach help organizations change more rapidly? There's some concern there that going to a cloud model, in this case a third-party cloud, might end up being another form of lock-in, and that you might lose agility. Public or private, what is it about a cloud model that makes your company more agile?

Daniels: Our view is about where the real benefits, the really significant cost savings, can be gained. If you simply apply virtualization and automation technologies, you can get a significant reduction in cost. Again, self-service delivery can have a huge internal impact. But a much larger saving can be had if you can restructure the software itself so that it can be delivered and amortized across a much larger user base.

There is a class of workloads where you can see orders-of-magnitude decreases in cost, but it requires competencies, and it first requires ownership of the intellectual property. If you depend upon some third party for the capability, then you can't get those benefits until that third party goes through the work to realize them for you.

Very simply, the cloud represents new design opportunities, and the reason that enterprise architecture is so fundamental to the success of enterprises is the role that design plays in the success of the enterprise.

The cloud adds a new expressiveness, but imagining that the technology just makes it all better is silly. You really have to think about what problems you're trying to solve and where a design approach that exploits the cloud generates real benefit.

Gardner: The same question to you, Dave Linthicum. Public-private-hybrid: What is it about a cloud model that makes a company more responsive from a business outcomes perspective?

Key to agility

Linthicum: I don't think a cloud model inherently makes them more responsive. It's the fact that they're leveraging all kinds of technology, inclusive of cloud computing, as a mechanism to provide more agility to the enterprise.

In other words, if they're able to use external clouds to get applications up and running very quickly, with the security and performance requirements still in line, design that into their enterprise architecture, and leverage private clouds to get virtualization and make resources shareable among the various entities within the organization, then they're able to share the cost. Then, they're going to be able to do IT better. That's what it's all about.

What we're looking to do is not necessarily reinvent or displace IT or throw out the old legacy stuff and put in this new cloud stuff. We're looking to provide a layer of good architecture and good technology on the existing things, as well as get back into the architecture and fix things that need to be fixed and provide good IT to address the business.

Gardner: There's an interesting confluence now with the harsh economic environment. We're looking at this cloud phenomenon largely as a cost benefit. Yes, do IT better, but there is also a significant cost benefit -- better utilization, perhaps more flexibility in services, and even in how an IT organization runs itself.

Coming down to the end now. Do you agree, Lauren, that what's going to drive cloud into organizations and its use through a variety of models over the next couple of years is largely a function of cost?

States: Yes, cost will be a huge driver in this. Cost is a conversation that is very active in the C suite. The conversations on cloud have re-established some of the conversations with lines of business, because they are curious about how they can take out cost and achieve the agility that they're looking for.

But I'd also be mindful that there is an opportunity for us to drive innovation and economic growth with new business models, new businesses, new service deliveries, and new workloads. This will be something that large organizations look for, but it will also unlock IT for many smaller organizations that don't have the internal resources to provide these services to their constituents.

Gardner: Okay. Russ Daniels, same question. In an economic maelstrom, what are the economic drivers for cloud, and is that going to be the primary driver?

Daniels: I've not seen any time in the industry where the conversation between business and IT didn't have a significant cost component. Certainly, when the times become more difficult, that intensifies, but there's never a point at which that isn't an interesting question.

A few years ago, when Mark Hurd came in as our CEO, HP started to go through a very significant reduction in the cost of IT. Economic times were fine, but that was a very important focus.

A great opportunity

Cloud is relevant to that, but, as Lauren was saying, there is a great business opportunity as well. Every IT organization that's having those cost conversations would love to be able to have a value conversation, would love to be able to talk about how technology can not just help control cost but can generate new business opportunities, open new markets, help the business gain share, improve the reach and the relationships it has with its customers, and differentiate it from its competitors better.

We think that cloud is really very suitable for many of those kinds of concerns. The ability to understand better what your customers care about and to tailor your offerings accordingly is something the cloud is particularly well suited to, and it allows the business to have a different conversation with IT -- one that the IT organization yearns for.

Gardner: This is probably a question that would be good for an entire hour-long additional podcast, but Dave Linthicum, on this notion of additional business and revenue -- innovative processes that can create new wealth -- what do you see as the top opportunities for using cloud in that regard, in creating new business?

Linthicum: Consulting companies are benefiting from it right now. They're getting a wealth of new business based on a new paradigm coming in. Lots of people are confused about how the paradigm should be used and they are building methodologies and those sorts of things.

The primary cloud component, and the benefit that businesses will get, will be the ability to leverage the network effect of the cloud-computing environment. In other words, they'll benefit if they're willing to engage infrastructure that's outside their firewall, that they don't control or host, and use it as a service -- in essence, rent it -- and then they're able to see the additional value that the web can bring, such as social networking and the ability to get analytical services.

I thought you put it well, saying that ultimately people are going to realize huge cost savings based on the ability to leverage what they have in a much more cost-effective way. That's really where things are going right now.

So, I think the consultants are going to make the additional money and I think the hardware and software vendors are going to make some money, even though cloud computing will displace some hardware and software.

People are retooling right now and actually buying stuff, especially cloud providers that are building infrastructure. Then, it will come down to the core benefits that are being built around the private clouds and the public clouds that are being leveraged out there.

Gardner: So, it's perhaps a win-win-win just at the time in the economy when we need that. We'll have a win perhaps in being able to further leverage existing resources and assets and architectural methods and processes, further reduce the overall operating costs as a result of cloud, and at the same time, conjure up new business opportunities and models and ways of driving income across ecologies of players in ways we hadn't before.

That's a fairly auspicious position for cloud computing, and that's perhaps why we are hearing so much about it nowadays.

I want to thank our panelists. We have been joined by Lauren States, vice president in the IBM Software Group; Russ Daniels, vice president and CTO cloud services strategy for Hewlett-Packard; and Dave Linthicum, founder of Blue Mountain Labs.

Our conversation comes to you today through the support of The Open Group from the 21st Enterprise Architecture Practitioners Conference and Enterprise Cloud Computing Conference in San Diego in February, 2009.

I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Transcript of a podcast with industry practitioners and thought leaders at The Open Group's Enterprise Cloud Computing Conference in San Diego, February, 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on security trends and needs

Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Saturday, February 14, 2009

Effective Enterprise Security Begins and Ends With Architectural Best Practices Approach

Transcript of a podcast on security as architectural best practices, recorded at the first Security Practitioners Conference at The Open Group's 21st Enterprise Architecture Conference in San Diego, February 2009.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we welcome our listeners to a sponsored podcast discussion coming to you from The Open Group's first Security Practitioners Conference in San Diego, the week of Feb. 2, 2009.

Our topic for this podcast, part of a series of events and coverage at this conference, centers on enterprise security and the intersection with enterprise architecture (EA). The goal is to bring a security understanding across more planning- and architectural-level activities, to make security pervasive -- and certainly not an afterthought.

The issue of security has become more important over time. As enterprises engage in more complex activities, particularly with a boundaryless environment -- which The Open Group upholds and tries to support in terms of management and planning -- security again becomes a paramount issue.

To help us understand more about security in the context of enterprise architecture, we're joined by Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management at Qualys; and Jim Hietala, vice president of security for The Open Group.

Let's start with you, Jim. Security now intersects with more elements of what information technology (IT) does, and there are more people responsible for it. From the perspective of The Open Group, why has it been a transition or a progression in terms of bringing security into architecture? Why wasn't it always part of architecture?

Jim Hietala: That's a good question, but it probably predates my involvement with The Open Group. In TOGAF 9, the latest iteration of TOGAF that we announced this week, there is a whole chapter devoted to security, trying to get to the idea of building it in upfront, as opposed to tacking it on after the fact.

You've seen movement, certainly within The Open Group, in terms of TOGAF, with our enterprise architecture groups trying to make that happen. It's a constant struggle that we've had in security -- the idea that functionality precedes security, and security has to be tacked on after the fact. That's how we end up where we are today, with the kinds of security threats and the environment that we have.

Gardner: Chenxi, we've seen the security officer emerge as a role in the past several years. Shouldn't everyone have, in a sense, the role of security officer as part of their job description?

Chenxi Wang: Everyone in the organization or every organization? My view is slightly different. I think that in the architecture group there should be somebody who is versed in security, and the security side of the house should have an active involvement in architecture design, which is what we are seeing as an emerging trend in a lot of organizations today.

Gardner: We're also facing a substantial economic downturn globally. Often, this accelerates issues around risk, change management, large numbers of people entering and leaving organizations, mergers and acquisitions, and provisioning of people off of applications and systems.

Kristin, perhaps you can give us a sense of why security might be more important in a downturn than when we were in a boom cycle?

New technologies

Kristin Lovejoy: There are a couple of things to think about. First of all, in a down economy, like we have today, a lot of organizations are adopting new technologies, such as Web 2.0, service-oriented architecture (SOA) style applications, and virtualization.

Why are they doing it? They are doing it because of the economy of scale that you can get from those technologies. The problem is that these new technologies don't necessarily have the same security constructs built in.

Take Web 2.0 and SOA-style composite applications, for example. The problem with composite applications is that, as we're building them, we don't know the source of the widget. We don't know whether these applications have been built with good, secure design. In the long term, that becomes problematic for the organizations that use them.

It's the same with virtualization. There hasn't been a lot of thought put into what it means to secure a virtual system. There are not a lot of best practices out there. There are not a lot of industry standards we can adhere to. The IT general control frameworks don't even point to what you need to do from a virtualization perspective.

In a down economy, it's not simply the fact that we have to worry about privileged users and our employees, blah, blah, blah. We also have to worry about these new technologies that we're adopting to become more agile as a business.

Gardner: Nils, how do you view the intersection of what an enterprise architect needs to consider as they are planning and thinking about a more organized approach to IT and bringing security into that process?

Nils Puhlmann: Enterprise architecture is the cornerstone of making security simpler and therefore more effective. The more you can plan, simplify structures, and build in security from the get-go, the more bang you get for the buck.

It's just like building a house. If you don't think about security, you have to add it later, and that will be very expensive. If it's part of the original design, then the things you need to do to secure it at the end will be very minimal. Plus, any changes down the road will also be easier from a security point of view, because you built for it, designed for it, and most important, you're aware of what you have.

Most large enterprises today struggle even to know what architecture they have. In many cases, they don't even know what they have. The trend we see here with architecture and security moving closer together is a trend we have seen in software development as well. It was always an afterthought, and eventually somebody made a calculation and said, "This is really expensive, and we need to build it in."

Things like security and the software development lifecycle came up, and we are doing this now for architecture. Hopefully, we'll eventually do this for complex systems. Kristin mentioned Web 2.0. It's the same thing there. We have wonderful applications, and companies are moving towards Facebook en masse, but it's a small company. The question is: was security built in, has anyone vetted that, or are we just repeating the same mistake we've made so many times before?

A matter of process

Gardner: We see with security that it's not so much an issue of technology but really about process, follow through, policy determination and enforcement, and the means to do that.

Chenxi, when it comes to bringing security into a governed, policy-driven process, it starts to sound like SOA. You'd have a repository, you'd have governance, and there would be ways in which services would be used or managed, with policies applied to them. Is there actually an intersection between some of the concepts of architecture, SOA, and this larger strategic approach to security?

Wang: There is definitely some intersection. If you look at classic SOA architecture, there is a certain interface, and you can specify what the API is like. If you think about a virtual approach to security, it's also a set of policies you need to specify upfront, hopefully, and then a set of procedures by which you adhere to these policies.

It's very much like understanding the API and the parameters that go into using those APIs. I hadn't actually thought about this nicely laid-out analogy, Dana, but I think it's quite a good one.
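[Editor's note: A small sketch of the analogy Wang draws -- security expressed as a policy declared up front at a service interface and checked on every call, much as an API contract declares its parameters. The service name and policy fields are hypothetical.]

    policy = {
        "service": "customer-records",
        "allowed_roles": {"support", "billing"},
        "require_tls": True,
        "max_records_per_call": 100,
    }

    def authorize(request: dict) -> bool:
        """Admit a call only if it satisfies the declared policy."""
        return (
            request.get("role") in policy["allowed_roles"]
            and (not policy["require_tls"] or request.get("tls", False))
            and request.get("record_count", 0) <= policy["max_records_per_call"]
        )

    print(authorize({"role": "support", "tls": True, "record_count": 20}))    # True
    print(authorize({"role": "marketing", "tls": True, "record_count": 20}))  # False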

Gardner: I think we're talking about lifecycles and managing lifecycles and services. I keep seeing more solutions, shared services, and then actual business and IT services, all being managed in a similar way nowadays with repository and architecture.

Jim, this is your first security conference at The Open Group. It's also coinciding with a cloud computing conference. Is there an element now, with the "boundarylessness" of organizations and what your architectures have tried to provide in terms of managing those permeable boundaries and this added layer, or a model for the cloud? More succinctly, how do the cloud and security come together?

Hietala: That's one of the things we hope to figure out this week. There's a whole set of security issues related to cloud computing -- things like compliance regulation, for example. If you're an organization that is subject to things like the Payment Card Industry Data Security Standard (PCI DSS) or some of the banking regulations in the United States, are there certain applications and certain kinds of data that you will be able to put in a cloud? Maybe. Are there ones that you probably can't put in the cloud today, because you can't get visibility into the control environment that the cloud service provider has? Probably.

There's a whole set of issues related to security compliance and risk management that have to do with cloud services. The session this week with a number of cloud service providers, we think, will bring a lot of those questions to the surface.

Gardner: Clearly, those on the naysaying side of the cloud argument often have a problem with the data leaving their premises. As we've heard from other speakers at the conference, having data or transactions that are separate from your organization, or that happen at someone else's data center, is actually quite common, and accepting that is sort of a cultural shift in thinking.

Nils, what do you think needs to happen from this cultural perspective in order for people to feel secure about using cloud models?

A shift in thinking

Puhlmann: We need to shift the way we think about cloud computing. There is a lot of fear out there. It reminds me of 10 years back, when we talked about remote access into companies, VPN, and things like that. People were very fearful and said, "No way. We won't allow this." Now is the time for us to think about cloud computing. If it's done right and by a provider doing all the right things around security, would it be better or worse than it is today?

I'd argue it would be better, because you deal with somebody whose business relies on doing the right thing. In a lot of corporations today, by contrast, there are a lot of process and system issues: teams are understaffed, there is a lot of transition and a lot of change, and, simply, things are not in order or not the way they should or could be.

Then, we have the data issue. Let's face it, we already outsource so much work to other places. Whether my data is in a certain place where I have audited and vetted that provider, or somebody in a remote country is accessing my data in-house as a DBA, is there really a difference when it comes to risk? In my mind, not really, because if you do both well, then it's a good thing.

There's too much fear going into this, and hopefully the security community will have learned from the past and will do a good job in addressing what we don't have today, like best practices, and how vendors and customers strive for that.

Gardner: Kristin, I read a quote recently where someone said that the person or persons who manage the firewall are the most important people in the IT organization. Given what we are dealing with in terms of security, and also trying to avail ourselves of some of these hybrid models, do you agree with that, and if so, why?

Lovejoy: That's a leading question. Is the firewall administrator important? Obviously, yes. The most important? In a world with no boundaries, it becomes very hard to suggest that that is accurate.

What we're seeing from a macro perspective is that the IT function within large enterprises is changing. It's undergoing this radical transformation, where the CSO/CISO is becoming a consultant to the business. The CSO/CISO is recognizing, from an operational risk perspective, what could potentially happen to the business, then designing the policies, the processes, and the architectural principles that need to be baked in, pushing them into the operational organization.

From an IT perspective, it's the individuals who are managing the software development and release process, the people who are managing the change and configuration management process. Those are the guys who really hold the keys to the kingdom now, so to speak.

Particularly when you are talking about enterprise cloud, they become even more important, because you have to recognize -- and Nils mentioned this or implied it -- that cloud provides a vision of simplicity. If you think about cloud and the way it's architected, a cloud could be much simpler than the traditional enterprise. If you think about who's managing that change and managing those systems, it becomes those folks who are key.

Gardner: Why is the cloud simpler? Is it because you're dealing now at a services and API level and you're not concerned necessarily with the rest of the equation?

Lovejoy: That's correct.

Gardner: Is that good for security or bad?

Aligning security and operations

Lovejoy: We've been dancing around the subject, but my hope is that security and operations become much more aligned. It's hard to distinguish today between operations and security. So many of the functions overlap. I'll ask again: change and configuration management, software development and release -- why is that not security? From my perspective, I'd like to see those two functions melding.

Gardner: So, security concerns and approaches and best practices really need to be pervasive throughout IT?

Lovejoy: Exactly. They need to come from the top, they need to move to the bottom, and they need to be risk based.

Gardner: Now, when it comes to the economics behind making security more pervasive, the return on investment (ROI) for security is one of the easier stories. Not being secure is very expensive. Being publicly not secure is even more expensive. Let's go back to Chenxi. On the economics of security, isn't this something that people should get easy funding for in an IT organization?

Wang: The economics of security. This issue has been in research for a long time. Ross Anderson, who is a professor at the University of Cambridge, has run an economics-of-security workshop since 1996, or something like that. There is some very interesting research coming out of that workshop, and people have done case studies. But I'm not sure how much of that has been adopted in practice.

I've yet to find an organization that takes a very extensive economics-based approach to security, but what Kristin said earlier and what you just said is happening. We're seeing the IT security team in many organizations now have a somewhat diminished role, in the sense that some of the traditional security tasks are now moving into IT operations or moving into risk and compliance.

We're even seeing that security teams sometimes have dotted reporting responsibility to the legal team. Some of the functions are moving out of the security team, but at the same time, IT security now has an expanded impact on the entire organization, which is the positive direction.

Gardner: If there is a relationship between doing your architecture well and making systemic security thought, vision, and implementation part and parcel of how you do IT, then it seems to me that the ROI for security becomes a very strong rationale for good architecture. Would you agree with that, Jim?

Hietala: I would. Organizations want, at all costs, to avoid plowing ahead with architectures without considering security upfront and then dealing with the consequences of that. You could probably point to some of the recent breaches and draw the conclusion that maybe that's what happened. So, I would agree with that statement.

Gardner: We did have quite a few high profile breaches, and of course, we're seeing a lot more activity in the financial sector. Actually, we could fairly call it a restructuring of the entire financial sector. Do you expect to see more of these high-profile breaches and issues in 2009?

Same song - second verse

Hietala: I'll be interested to hear everyone else's opinion on this as well, but my perspective would be yes. It's been interesting to me that 2009 has started out with what I would call "same song, second verse." We've had a massive worm that propagated through a number of means, one of which is removable storage media. That takes me back to 1986 or 1988, when viruses propagated through floppy disks.

We've had the Heartland breach, in which as many as 100 million credit cards may have been exposed. Those kinds of things, unfortunately, are going to be with us for some time.

Gardner: Let's get the perspective of others. Kristin, is this going to be a very bad year for security?

Lovejoy: The more states that pass privacy disclosure requirements that mandate that you actually disclose a breach, the more we're going to hear. Does this mean that there haven't always been breaches? There have always been breaches, but we just haven't been talking about them. They're becoming much more public today.

Do I see a trend where terminated or worried employees are perpetrating harm on the business? The answer is yes. That is becoming much more of an issue.

The second issue that we're seeing -- and this is one of those quasi-security, quasi-operational issues -- is that, because of the resource restrictions within organizations today, people are starved for resources, particularly around the change and configuration management process.

We're beginning to see critical outages, particularly in infrastructure systems like those associated with nuclear power and heavy industry, where folks are making changes outside the change process simply because they are so overloaded. They're not necessarily following policy. They're not necessarily following process.

So, we are seeing outages associated with individuals who are simply doing a job that they are ill-equipped to do, or who are overwhelmed and not able to do it effectively.

Gardner: Or perhaps cutting corners as a result of a number of other diminished resources.

Lovejoy: That's exactly right.

Gardner: Nils, do you have any recommendations for how to come into 2009 and not fall into some of these pitfalls, if you are an enterprise and you are looking at your security risk portfolio?

Security part of quality

Puhlmann: Security to me is always a part of quality. When the quality falls down in IT operations, you normally see security issues popping up. We have to realize that the malicious potential and the effort put in by some of the groups behind these recent breaches are going up. It has to do with resources becoming cheaper, with the knowledge being freely available in the market. This is now on a large scale.

In order to keep up with this, we need at least minimum best practices. Somebody mentioned earlier the worm outbreak, which was really enabled by a vulnerability that was quite old. That just points out that a lot of companies are not doing what they could do easily.

I'm not talking about the tip of the iceberg. I'm talking about the middle. As Kristin said, we've got to pay attention to these things and we need to make sure that people are trained and the resources are there at least to keep the minimum security within the company.

Gardner: As we pointed out a little earlier, security isn't necessarily an upfront capital cost. You don't download and install security. It's centered on process, organization, and management. It sounds like you simply need a level of discipline, which isn't necessarily expensive, but requires intent.

Puhlmann: Yes, and that is actually similar to architecture. Architecture is also discipline. You need to sit down early and plan, and it's the same for security. A lot of things, a lot of low-hanging fruit, you can do without expensive technology. It's policies, process, just assigning responsibility, and also changing security so it's a service to the business.

The business has no interest in a breach or in anything that would negatively affect its outcomes -- business continuity, for example.

We talked earlier about how IT security might change. My feeling is that security will more and more become a partner of the business and help the business achieve its goals. At some point, nobody will talk about ROI anymore, because it's just something that will be planned in.

Gardner: Jim, what about this issue of intent? Is this something that we can bring into the architectural framework, elevate the need, and focus on intent for security?

Hietala: I believe so. Most system architects are going to be looking at trying to do the right things with respect to security and to ensure that it's thought about upfront, not later on in the cycle.

Gardner: Chenxi, in the market, among suppliers that are focused on security, how are they adapting to 2009, which many of us expect to be a difficult year? We mentioned that it's about intent, but there are also products and technologies. Is anything top of mind from your perspective?

Slight increase in spending

Wang: We haven't seen a severe cut in IT security budgets yet among the organizations we surveyed, perhaps because some of those budgets were set before the economic downturn happened.

For some of them, we actually saw a slight increase, because, just as Lehman Brothers is now Barclays, you have to merge the two IT systems. Now, you have to spend money on merging the two systems, as well as on security. So, there is actually some increase in budget due to the economic situation.

A lot of vendors are taking advantage of that, and we are seeing an increased marketing effort around helping to meet security regulations and compliance. Most of us anticipate an increase in regulatory pressure coming down the pipeline, maybe in 2009, maybe in 2010. My belief is that we'll see a little bit more security spending there, because of the increased regulatory pressure.

Gardner: Kristin, we've discussed process and architecture, but are there any particular technologies that you think will be prominent in the coming year or two?

Lovejoy: Interestingly enough, identity and access management (IAM) is likely to be one of the more significant acquisitions that most businesses make.

This goes back to the business-value point about security that we have been making. Think about what's happening in the world, with all of these folks wanting to access the network via smart devices. How are they going to do that? They're going to do it using some sort of authentication mechanism that allows them to connect back securely.

Most organizations want to be able to access the new customer, the new consumer, via smart devices. They want to be able to allow their employees access to the network via smart devices or via any kind of other mobile device, which allows them to do things like telecommute.

IAM, as an example, is a technology that enables the business to offer a service to the employee or to that new consumer. What we're seeing is that organizations are purchasing IAM, not necessarily for security, but for the delivery of a secure service. That's one area where we are seeing uplift.

Gardner: Let's just unpack that a little bit. How is this different from directory provisioning or some of the traditional approaches? These folks wouldn't be in the directories at that point?

Identity managements

Lovejoy: What we're seeing is much more of a focus on federated identity management and single sign-on. In fact, we're beginning to see this trend in our customer base, and a lot of organizations have been talking about this issue of mobile endpoint management. It's very hard in the new world to secure these mobile devices. What organizations are saying to us is, "Why can't we just use single sign-on and federated identity management?"

Single sign-on, in particular, has the capacity, if you think about it in the right way, to uncouple the device from the individual who is using it: define the policy, apply the policy to the role, and then, based on the role, secure or isolate the endpoint. It's a very interesting way in which organizations are beginning to think about using this technology as an alternative to traditional secure mobile endpoint management.
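
[A minimal sketch, in Java, of the pattern Lovejoy describes -- resolving a role from a single sign-on token and applying policy to the role rather than to the device. The class names, roles, and token format here are hypothetical illustrations, not any vendor's IAM API.]

    import java.util.Map;
    import java.util.Set;

    public class RoleBasedAccessSketch {

        // Hypothetical role-to-permission policy, defined independently of any device.
        private static final Map<String, Set<String>> POLICY = Map.of(
                "employee", Set.of("email", "intranet"),
                "contractor", Set.of("email"));

        // The device only presents the SSO token; the decision is made on the role.
        public static boolean isAllowed(String ssoToken, String requestedService) {
            String role = resolveRoleFromToken(ssoToken);
            return POLICY.getOrDefault(role, Set.of()).contains(requestedService);
        }

        private static String resolveRoleFromToken(String ssoToken) {
            // Placeholder: a real deployment would validate the token with a
            // federated identity provider and read role claims from it.
            return ssoToken.startsWith("emp-") ? "employee" : "contractor";
        }

        public static void main(String[] args) {
            System.out.println(isAllowed("emp-12345", "intranet"));  // true
            System.out.println(isAllowed("ctr-98765", "intranet"));  // false
        }
    }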

Gardner: It also sounds as though, while pertinent to mobile, these technologies would have a role in cloud or hybrid, boundaryless types of activities.

Lovejoy: That's absolutely correct.

Gardner: Does anyone have anything to offer on this question of IAM in the cloud?

Puhlmann: Kristin is right. We've tried IAM for many years, and there have been many expensive failed projects in large corporations. Perhaps we need the cloud to give us this little push to really solve it once and for all in a very federated model. I'd very much like to see that. Based on past experience, though, I'm a little cautious about how quickly it will happen.

I think what we will see is a simplification of security, because it has gotten to a point where it's just too complex to handle with too many moving parts, and that makes it hard to work with and also expensive.

Also, we'll see a more realistic approach to security. What really matters? Do we really need to secure everything, or do we need to focus on certain types of data, and where is that data really? Do we have to close off every little door, or can we leave some doors open and move closer to where our assets are? How much do they really mean to us?

Gardner: Great. We've been discussing security and some of the pressures of the modern age, this particular economic downturn period, but also in the context of process and architecture.

I want to thank our panelists. We were joined by Chenxi Wang, principal analyst for security and risk management at Forrester Research; Kristin Lovejoy, director of corporate security strategy at IBM; Nils Puhlmann, chief security officer and vice president of risk management of Qualys, and Jim Hietala, vice president of security for The Open Group.

Thanks to you all. Our conversation comes to you through the support of The Open Group, from the first Security Practitioners Conference here in San Diego in February, 2009.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: The Open Group.

Transcript of a podcast on security as architectural best practices, recorded at the first Security Practitioners Conference at The Open Group's 21st Enterprise Architecture Conference in San Diego, February 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

View more podcasts and resources from The Open Group's recent conferences and TOGAF 9 launch:

The Open Group's CEO Allen Brown interview

Live panel discussion on enterprise architecture trends

Deep dive into TOGAF 9 use benefits

Reporting on the TOGAF 9 launch

Panel discussion on cloud computing and enterprise architecture


Access the conference proceedings

General TOGAF 9 information

Introduction to TOGAF 9 whitepaper

Whitepaper on migrating from TOGAF 8.1.1 to version 9

TOGAF 9 certification information


TOGAF 9 Commercial Licensing program information

Friday, February 13, 2009

Interview: Guillaume Nodet and Adrian Trenaman on Apache ServiceMix and Role of ESBs in OSS

Transcript of a BriefingsDirect podcast with Guillaume Nodet and Adrian Trenaman of Progress Software on directions and trends in SOA and open source infrastructure.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about open source, service-oriented architecture (SOA) developments, and trends.

We are going to catch up and get a refresher on some important open-source software projects in the Apache Software Foundation. We'll be looking at the Apache ServiceMix enterprise service bus (ESB) and its associated toolkit, and we are going to talk with some thought leaders and community development leaders to assess the market for these products, particularly in the context of cloud computing, which is certainly getting a lot of attention these days.

We'll also look at the context around such technologies as OSGi and Java Business Integration (JBI). We want to also think about what this means for enterprise-caliber SOA, particularly leveraging open-source projects. [Access more FUSE Community podcasts.]

To help us sort out and better understand the open-source SOA landscape, we’re joined by Guillaume Nodet, software architect at Progress Software and vice president of Apache ServiceMix at Apache. Welcome to the show, Guillaume.

Guillaume Nodet: Thank you.

Gardner: We are also joined by Adrian Trenaman, distinguished consultant at Progress Software. Hey, Adrian.

Adrian Trenaman: Hey, Dana. How is it going?

Gardner: Good. Now, we are starting to see different patterns of adoption and use-case scenarios around SOA and open-source projects. Counterpart offerings for certification and support, such as the FUSE offerings from Progress, are getting more traction in interesting ways. The role of ESBs, I think we can safely say, is expanding.

The role for management and policy and rules and roles is becoming much more essential, not on a case-by-case basis or tactical basis, but more from a holistic management overview of services and other aspects of IT development and deployment.

First, I want to go to Guillaume. Give us a quick update on Apache ServiceMix, and how do you see it being used in the market now?

Nodet: Apache ServiceMix is one of the top-level projects at the Apache Software Foundation. It was started back in 2005 and graduated to a top-level project a year and a half ago. ServiceMix is an open-source ESB, and it's a well-known ESB for several reasons, which we'll come to later. It's a full-featured ESB that is widely used across a whole range of organizations, from government to banking applications.

Gardner: Tell us a little bit about your background, and how you became involved. How long have you been working on ServiceMix, and what led you up to getting involved?

Nodet: Back in 2004, I was working at a small company based in France, and we were looking for an ESB for internal purposes. I began to do some research on the open-source ESBs available at that time. I got involved in the Mule project, became a committer in my spare time, and was one of the main committers for six months.

In the summer of 2005, my company was laying people off for economic reasons, and I decided to take a break and leave the company. So I sent an email to James Strachan, who was just starting ServiceMix, and that's how I became involved. I was hired by LogicBlaze at the time, which was later acquired by IONA and is now part of Progress.

Gardner: Tell us a little bit more about the context of the ServiceMix ESB in some of the other Apache Software Foundation projects, just so our listeners understand that this isn't necessarily standalone. It can be used, of course, standalone, but it fits into a bigger picture, when it comes to SOA infrastructure. Maybe you could just explain that landscape as it stands now.

The bigger picture

Nodet: ServiceMix is an ESB, and it reuses lots of other Apache projects. The main one is Apache ActiveMQ, a message broker that provides the JMS backbone infrastructure. We also heavily use Apache CXF, a SOAP stack that integrates nicely into ServiceMix. Another project we use is Apache Camel, a sub-project of Apache ActiveMQ. It's a very efficient message router that uses a DSL, so you can configure routes very easily. Those are the three main projects we use.
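
[To make the Camel DSL concrete, here is a minimal sketch of a route in Camel's Java DSL, runnable with camel-core alone; the directory names and XPath filter are illustrative assumptions. Inside ServiceMix you would more typically route between ActiveMQ (JMS) and CXF (SOAP) endpoints.]

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class OrderRouteSketch {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Pick up XML files, keep only high-priority orders, log them,
                    // and drop them into an output directory.
                    from("file:orders/in")
                        .filter(xpath("/order[@priority = 'high']"))
                        .to("log:orders?level=INFO")
                        .to("file:orders/high");
                }
            });
            context.start();
            Thread.sleep(10000); // let the route run briefly for this sketch
            context.stop();
        }
    }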

Of course, for ServiceMix 4.0 we are also using the Apache Felix OSGi framework, and there are lots of other projects that we use throughout ServiceMix. There are really big ties between ServiceMix and these other projects. Another project that we can leverage in ServiceMix is Apache ODE, a business process execution language (BPEL) engine.

Gardner: Now, it's not always easy to determine the number of implementations, particularly in production, for open-source projects and code. It's a bit easier when you have a commercial vendor. You can track their sales or revenues and you have a sense of what the market is doing.

Do you have any insight into what's been going on, as a larger trend around these SOA open-source projects, in terms of implementation volumes? Are we still in test, are people in pilot, or are we seeing a bit more? And what trends are there around actual production implementations? I'll throw that to either one of you, Adrian or Guillaume.

Trenaman: I'm happy to chip in there. We've seen quite a lot of work in terms of real-world use. Starting with ServiceMix, obviously: we have been using ServiceMix for some time with our customers, and we have seen it used and deployed in anger, if you will. What's interesting for me is the number of different kinds of users out there and the different markets it gets deployed in. We have had users in airline solutions, in retail, and extensive use in government situations as well.

We recently finished a project in mobile health, where we used ServiceMix to take information from a government health backbone, using HL7 formatted messages, and get that information onto the PDAs of the health-care officials like doctors and nurses. So this is a really, really interesting use case in the healthcare arena, where we’ve got ServiceMix in deployment.

It’s used in a number of cases as well for financial messaging. Recently, I was working with a customer, who hoped to use ServiceMix to route messages between central securities depositories, so they were using SWIFT messages over ServiceMix. We’re getting to see a really nice uptake of new users in new areas, but we also have lots of battle-hardened deployments now in production.

Gardner: One of the nice things about this trend towards adoption is that you often get more contributions back into the project. Maybe it would be good now to understand who is involved with Apache, who is really contributing, and who is filling out the feature sets and defining the requirements around ServiceMix. Guillaume, do you have any thoughts about who is really behind this in terms of the authoring and requirements?

From the community

Nodet: The main thing is that everything comes from the community at large. It’s mainly users asking how they can implement a given use case. Sometimes, we don't have everything set up to fulfill the use case in the easiest way. In such a case, we try to enhance ServiceMix to cover more use cases.

In terms of contributors, we have lots of people working for different companies. Most of them are IT companies that are implementing SOA architectures for their customers and using ServiceMix to do it.

We have a number of individual contractors who do some consulting around ServiceMix and they are contributing back to the software. So, it's really a diverse community. Progress is, obviously, one of the big proponents of Apache ServiceMix. As you have said, we run our business using the FUSE family of projects.

So, it's really a very diverse community, with people from different origins, from everywhere in the world. We have Italian contributors, we have, obviously, US people, and it's a big community.

Gardner: The JBI specification has been quite central to ServiceMix. If you could, give us an update on what JBI, as its own spec, has been up to, and what that means for ServiceMix, and ultimately FUSE. Furthermore, let's get into some of the OSGi developments. It has really become hot pretty quickly in the market. So what's up with JBI and OSGi?

Nodet: The JBI specification has been out since the beginning of 2005. It defines an architecture for building ESBs in Java. The key concept is normalized exchanges. This means that you can deploy components in the JBI container, and all of these components will be able to work together without any problems, because they share a common model of message exchanges and the messages passed between components are normalized. This is really a key point.
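
[As a concrete illustration of normalized exchanges, here is a minimal sketch using the javax.jbi API that ServiceMix implements. The service QName and payload are hypothetical; a real component receives its ComponentContext from the container when it is initialized.]

    import java.io.StringReader;

    import javax.jbi.component.ComponentContext;
    import javax.jbi.messaging.DeliveryChannel;
    import javax.jbi.messaging.InOut;
    import javax.jbi.messaging.MessageExchangeFactory;
    import javax.jbi.messaging.NormalizedMessage;
    import javax.xml.namespace.QName;
    import javax.xml.transform.stream.StreamSource;

    public class NormalizedExchangeSketch {

        public void invoke(ComponentContext context) throws Exception {
            DeliveryChannel channel = context.getDeliveryChannel();
            MessageExchangeFactory factory = channel.createExchangeFactory();

            // Create a request/response exchange addressed to another component's service.
            InOut exchange = factory.createInOutExchange();
            exchange.setService(new QName("urn:example", "orderService")); // hypothetical

            NormalizedMessage in = exchange.createMessage();
            in.setContent(new StreamSource(new StringReader("<order id='42'/>")));
            exchange.setInMessage(in);

            // Because the message is normalized, the target component can consume it
            // regardless of which binding or transport it uses.
            channel.sendSync(exchange);
            NormalizedMessage out = exchange.getOutMessage();
            System.out.println("Received a normalized response: " + (out != null));
        }
    }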

Anyone can grab a third party component from outside ServiceMix. There are a number of examples of components that exist, and you can grab such a component and deploy it in ServiceMix and it will just work.

That's really one of the main points behind the JBI specification. It's a Java-centric specification, by which I mean that the implementation has to be done in Java, but ServiceMix allows a lot of different clients from other technologies to jump onto the bus and exchange data with other components.

One of the things that we use for that is the STOMP protocol, which is a text-based messaging protocol. We have lots of different client implementations in Ruby, Python, JavaScript, and lots of other languages that you can use to talk to the ServiceMix bus.
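
[To show how simple the text-based framing is, here is a minimal Java sketch that sends one message over raw STOMP. The host, port, credentials, and destination are assumptions; ActiveMQ's STOMP connector conventionally listens on port 61613 when it is enabled.]

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class StompSendSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 61613)) {
                OutputStream out = socket.getOutputStream();

                // A STOMP frame is a command, headers, a blank line, a body, and a
                // terminating NUL byte -- which is why any language that can open a
                // socket can talk to the bus.
                final char NUL = '\0';
                String connect = "CONNECT\nlogin:guest\npasscode:guest\n\n" + NUL;
                String send = "SEND\ndestination:/queue/orders\n\n<order id='42'/>" + NUL;

                out.write(connect.getBytes(StandardCharsets.UTF_8));
                out.write(send.getBytes(StandardCharsets.UTF_8));
                out.flush();
            }
        }
    }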

OSGi is quite an old specification, about 10 years old at least. It was originally designed for embedded devices. During the past two years, we have seen a lot of traction in the enterprise market to push OSGi. The main thing is that the next major version of ServiceMix, which will be ServiceMix 4.0, is based on OSGi and reuses the OSGi benefits.

The main driver behind that was to get around some weaknesses of the JBI specification, mostly related to the JBI packaging and class-loader architecture. OSGi is really a nice specification for that, and we decided to use it for the next version of ServiceMix.

Gardner: Now, we tend to see a little bit of politics oftentimes in the market around specifications, standards, who supports them, whether there is a competing approach, and where that goes. We’ve seen a bit of that in the Java Community Process over the years. I wonder, Adrian, if you might be able to set the table, if you will, around where these specifications are and what some of the commercial interests are?

For example, I know that IBM is quite strongly behind OSGi, and Oracle has backed it to an extent as well. These guys, obviously, have quite a bit of clout in the market. Set the table on the vendors and the specification situation now.

Sticking with JBI 1.0

Trenaman: JBI is currently at version 1.0, or 1.11 actually. There is a JBI 2.0 expert group, and I believe they are working under JSR 312. So, I think there’s work going on to advance that specification.

However, if you look at what the vendors are doing -- be it Sun, Progress, or Red Hat through JBoss -- I think the vendors are all sticking with JBI 1.0 at the moment, making customers successful with that version of the spec in anticipation of a new version. But I believe it's quite quiet. Guillaume, is that correct, for 2.0?

Nodet: Yes. I am part of the 2.0 expert group for JBI, and the activity has been quite low recently. One main driver behind JBI 2.0 is to refocus on what I explained as the key point of the JBI 1.0 specification, which is the concept of normalized exchanges and the normalized message router.

The goal of the JBI 2.0 expert group, I think, is to refocus on that and make JBI play much more nicely with other specifications that are sometimes seen as competitors to JBI, like SCA, and also play more nicely with OSGi, because ServiceMix is not the only JBI implementation moving in the OSGi direction. We also want to be sure that everything aligns correctly.

Gardner: Just so listeners can understand, what is it about OSGi that is valuable or beneficial as a container in an architectural approach, when used in conjunction with SOA architectural components?

Trenaman: OSGi is the state of the art in terms of deployment. It really is what we've all wanted for years. I've lost enough follicles on my head fixing class-path issues and that kind of class-path hell.

OSGi gives us a badly needed packaging system and a component-based modular deployment system for Java. It piles in some really neat features in terms of life cycle -- being able to start and shut down services, define dependencies between services and between deployment bundles, and also then to do versioning as well.

The ability to have multiple versions of the same service in the same JVM with no class-path conflicts is a massive success. What OSGi really does is clean up the air in terms of Java deployment and Java modularity. So, for me, it's an absolute no-brainer, and I have seen customers who have led the charge on this. This modular framework is not necessarily something that the industry is pushing on the consumers. The consumers are actually pulling us along.
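
[A minimal sketch of that life cycle in code: an OSGi bundle activator that registers a service on start and unregisters it on stop. The GreetingService interface is a hypothetical example; in a real bundle the package and its version would be declared in the manifest (for example, Export-Package: com.example.greeting;version="1.0.0"), which is what lets multiple versions coexist.]

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    public class GreetingActivator implements BundleActivator {

        // Hypothetical service contract published by this bundle.
        public interface GreetingService {
            String greet(String name);
        }

        private ServiceRegistration<?> registration;

        @Override
        public void start(BundleContext context) {
            // Publish the service; consumers look it up by interface name, so the
            // implementation can be swapped or versioned without touching them.
            registration = context.registerService(
                    GreetingService.class.getName(),
                    (GreetingService) name -> "Hello, " + name,
                    null);
        }

        @Override
        public void stop(BundleContext context) {
            // Withdraw the service when the bundle is stopped.
            registration.unregister();
        }
    }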

I have worked with customers who have been using OSGi for the last year and a half or two years, and they are making great strides in terms of making their application architecture clean, modular, and very easy and flexible to deploy. So, I've seen a lot of goodness come out of OSGi in the enterprise. You mentioned politics earlier on, Dana, and the politics for me are interesting on a number of levels.

Here is my take on it. The first level is the OSGi core platform, and what you've got there is a number of players who are all, in some sense I guess, competing to be the de facto standard implementation or reference implementation. I think Eclipse Equinox emerges as the winner there. It is now strongly backed by IBM.

The key players

And in the Apache Software Foundation you've got Felix. One of the other key players would be Knopflerfish OSGi, which is really Makewave, and they deliver Knopflerfish under a BSD-style license. So, we have some healthy competition there, but I guess in terms of feature build-out, Equinox seems to be the winner in that area.

That's one way of looking at it. The other thing is, if you look at your traditional app server vendors and what they are doing, IBM, Oracle, Red Hat, and Sun have all put OSGi, or are about to put OSGi, within their application servers. This is a massive movement.

I think it's interesting that OSGi is no longer a differentiator. It’s actually an important gatekeeper. You have to have it. This is a wave that the industry and that our customers are all riding, and I think they are very welcoming to it.

Politically, all of the app server vendors seem to be massively behind OSGi and supportive of it. The other area that maybe you alluded to is that in the broader Java community, there’s been a debate that's gone for some time now about JSR 277, which is the Java Community Process attempt at Java modules. The scene there is that JSR 277 overlaps massively with what OSGi intends to achieve, or rather has already achieved.

That starts getting messy all over again, because Java 7.0 will include JSR 277. So the future of Java seems to have hooked into this Java module specification, rather than taking what would be the sensible choice, which would be to follow an OSGi-based model, or at least to passionately embrace OSGi and weave it in a very nice way into JSR 277.

So, there is still some distance to go there on that debate over which one actually gets accepted and gets embraced by the community. I think the happiest conclusion for that is where JSR 277 really does embrace what OSGi has done, and actually, in a sense, builds support into the Java language for OSGi.

Gardner: Clearly, the momentum around OSGi has been substantial. I’ve been amazed at how far this has come so quickly.

Trenaman: Exactly.

Gardner: Now, IONA, now part of Progress Software, is in this not just for "peace on earth and good will toward men." With the latest FUSE version being 4.0, you have certification, support, and enterprise-ready service value around the ServiceMix core. Is there something about OSGi that helps Progress in delivering this to market, given the modularity and the better control and management aspects? I'm thinking that, if I'm in certification and enterprise-ready mode for these, OSGi actually helps me. Is that correct?

It's a community issue

Trenaman: My perspective on that would be that the embrace of OSGi in FUSE is a community issue. It's the community that has driven that, and it's a part of ServiceMix. So, this is something that we in Progress now are quite happy to embrace and then take into FUSE.

For me, what OSGi gives us is clearly a much better plug-in framework, into which we can drop value-added services and which we can extend. I think the OSGi framework is great for that, as well as in terms of management and maybe moving toward grid computing. The capabilities we get from OSGi allow us to be far more dynamic in the way we provision services.

Gardner: Great. Now, you mentioned the big “grid” word. A lot is being talked about these days in cloud computing, and there’s an interesting intersection here between open-source early adopters and the very technology savvy providers or companies and the cloud phenomenon.

We've seen some quite successful cloud implementations at such organizations as Google, Yahoo!, and Amazon, and we're starting to see more, with chatter in the market from Microsoft and IBM that they are going to get into this as well.

These are the organizations that are looking for control, the ability to extend code and “roll their own.” That's where their value add is. What's the intersection between SOA, open-source infrastructure, and these cloud implementations? Then, we’ll talk about where these clouds might go in terms of enterprises themselves. Who wants to take the high view on the cloud and open-source SOA discussion?

Trenaman: A lot of SOA is down to simply "Good Design 101." The separation of the interface from the implementation is absolutely key, and then location independence, as well. You know, being able to access a service of some kind and actually not really care exactly where that is on the cloud, so that the whole infrastructure behind the service is transparent. You do not get to see it.

SOA brings some very nice concepts in terms of contract-first design and standard-based specification of interfaces, be they using WSDL or just plain old XML and REST -- or even XML and JMS.

The fact that we can now define, in a well-understood way, what these services are allows us to get data into and out of the cloud in a standardized way. I think that's massively important. That's one of the things that SOA brings to the cloud that becomes very important.
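
[A minimal sketch of that idea in Java, using the standard JAX-WS API: the service's contract is published as WSDL at a well-known URL, and consumers bind to that contract rather than to the implementation or its location. The service name, URL, and return value are hypothetical; in a strict contract-first workflow you would start from the WSDL and generate the Java types with a tool such as wsimport.]

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class QuoteService {

        // The public operations define the contract exposed in the generated WSDL.
        @WebMethod
        public double quote(String symbol) {
            return 42.0; // placeholder implementation
        }

        public static void main(String[] args) {
            // Publish the endpoint; callers only see the URL and the WSDL contract.
            Endpoint.publish("http://localhost:8080/quote", new QuoteService());
            System.out.println("WSDL available at http://localhost:8080/quote?wsdl");
        }
    }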

What open source brings to the cloud, apart from quality software against which to build massively distributed systems, is maybe a business model or a deployment model that actually suits the cloud.

I think of the traditional software licensing models for closed source where you are charging per CPU. When you look at massive cloud deployments with virtual machines on many different physical hardware boxes, those models just don't seem to work.

Gardner: A great deal of virtualization is taking place in these cloud infrastructures.

A natural approach

Trenaman: I think open source becomes a very natural and desirable approach for the technologies you're going to use to access the cloud and to implement services on the cloud. Then, to get those services there in the first place, SOA is pivotal. The best practices and designs we've gained from the years we've been doing SOA certainly come into play there.

Gardner: Let's move into this notion of a private cloud, which also requires us to understand a hybrid, or managing what takes place within a private, on-premises cloud infrastructure -- and then some of these other available services from other large consumer-facing and business-facing cloud providers.

Vendors and, in many cases, community development organizations are starting to salivate over this opportunity to provide the software, services, and support that help enterprises create the more efficient, highly available, and far better-utilized environment inherent in a well-designed cloud, grid, or utility infrastructure.

Trenaman: Sure.

Gardner: It seems unlikely that an organization creating one of these clouds is going to go out and just buy it out of the box. It seems much more likely that, at least for the early adoption stages, this is going to be a great opportunity to be exerting your own special sauce as an internal IT organization, well versed in open-source community development projects and then delivering services back to your employees and your customers and your business partners in such a way that you can really reduce your total cost, gain agility, and gain more control.

Let's go to Guillaume. How do you see ServiceMix, in particular, playing in this movement, now that we are just starting to see the opening innings of private cloud infrastructure?

Nodet: ServiceMix has long been a way that you can distribute your SOA artifacts. ServiceMix is an ESB and by nature, it can be distributed, so it's really easy to start several instances of ServiceMix and make them seamlessly talk together in a high availability way.

What you don't really see yet is all of the management and monitoring that is needed when you deploy such an architecture. So ServiceMix can readily be used to fulfill the core infrastructure.

ServiceMix itself does not aim at providing all the management tools that you could find from commercial vendors or even other open-source projects. So, on this particular topic, ServiceMix, backed by Progress, is bringing a lot of value to our customers, because Progress has the ability to provide such software.

Gardner: So, Progress has had quite a long history, several decades, in bringing enterprise development and deployment strategies, platforms, tools, a full solution. This seems to be a pretty good heritage combined with what community development can offer in starting to craft some of these solutions for private clouds and also to manage the boundaries, which I think is essential.

I can see an ESB really taking on a significantly larger role in managing the boundaries between and among different cloud implementations for integration, data portability, and transactional integration. Adrian, anything further to add to that?

Dynamic provisioning

Trenaman: Certainly, you could always see the ESBs being sort of on the periphery of the cloud, getting data in and out. That's a clear use case. There is something a little sweeter, though, about ServiceMix, particularly ServiceMix 4, because it's absolutely geared for dynamic provisioning.

You can imagine having an instance of ServiceMix 4 that you know is maybe just an image that you are running on several virtual machines. The first thing it does is contact a grid controller and says, “Well, okay, what bundles do you want me to deploy?” That means we can actually have the grid controller farming out particular applications to the containers that are available.

If a container goes down, then the grid controller will restart applications or bundles on different computing resources. With OSGi at the core of ServiceMix, at the core of the ESB, that's a step forward now in terms of dynamic provisioning and really toward an autonomous computing infrastructure.
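
[A minimal sketch of that provisioning loop, using the standard OSGi API: the container asks a controller which bundles it should run, then installs and starts them. The controller URL and its one-bundle-URL-per-line response format are hypothetical illustrations, not ServiceMix's or FUSE's actual provisioning protocol.]

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleContext;

    public class GridProvisioningSketch {

        public void provision(BundleContext context) throws Exception {
            // Ask the (hypothetical) grid controller which bundles this node should run.
            URL controller = new URL("http://grid-controller.example:8080/bundles");
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(controller.openStream(), StandardCharsets.UTF_8))) {
                String bundleUrl;
                while ((bundleUrl = reader.readLine()) != null) {
                    // Install and start each bundle assigned to this container.
                    Bundle bundle = context.installBundle(bundleUrl.trim());
                    bundle.start();
                }
            }
        }
    }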

Nodet: Another thing I want to add about ServiceMix 4, complementing what Adrian just said, is that ServiceMix has been split into several sub-projects. One of them is ServiceMix Kernel, an OSGi-enhanced runtime that can be used for provisioning applications, and this container is able to deploy virtually any kind of artifact. It can support web applications, and it can support JBI artifacts, because the JBI container reuses it, but you can really deploy anything that you want.

So, this piece of software can really be leveraged in a cloud infrastructure to deploy virtually any application that you want. It could be plain web services, without using an ESB, if you don't have such a need. So it's really versatile.

Gardner: We were quite early in this whole definition of what private cloud would or wouldn't be. Even the word “cloud,” of course, is quite nebulous nowadays.

I do see a huge opportunity here, given also the economic pressures that many organizations are going to be facing in the coming years. It's really essential to do more with less. As we move toward these cloud implementations, you certainly want to be able to recognize that it isn't defined. It's a work in progress, and having agility, flexibility, visibility into the code, understanding the origin for the code, and the licensing and so forth, I think is extremely important.

Trenaman: It’s massively important for anyone building the cloud, particularly a public cloud. That has got to be watched with total care.

Gardner: We've been talking about SOA infrastructure, getting some updates and refreshers on the ServiceMix and Apache Software Foundation approaches, and talking to some community and thought leaders. We've learned a little bit more also about Progress Software and FUSE 4.0.

I’m very interested and excited about these cloud opportunities for developers to use as they already are. The uptake in Amazon Web Services for development activities and test-and-deploy scenarios and performance testing has been astonishing.

Microsoft is going to be right behind them with an appeal to developers to build on a Microsoft cloud. These are going to be ongoing and interesting developments, and managing them is going to be critical to their success. A key differentiator from one enterprise to another is how well they can take advantage of these and manage the boundaries.

I want to thank our participants. We have been joined by Guillaume Nodet. He is the software architect at Progress Software and vice president of Apache ServiceMix. Thank you, Guillaume, we really appreciate your input.

Nodet: No problem. I am glad that we have been able to do this.

Gardner: We have also been joined by Adrian Trenaman. He is a distinguished consultant at Progress Software. Great to have you with us, Adrian.

Trenaman: It's a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. I want to thank our sponsor for today's podcast, Progress Software. We’re coming to you through the BriefingsDirect Network. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Sponsor: Progress Software.

Transcript of a BriefingsDirect podcast with Guillaume Nodet and Adrian Trenaman of Progress Software on directions and trends in SOA and open source. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.