
Tuesday, November 13, 2007

BriefingsDirect SOA Insights Analysts Examine Microsoft SOA and Evaluate Green IT

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition podcast, recorded October 26, 2007.

Listen to the podcast here.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 27, a weekly discussion and dissection of Service-Oriented Architecture (SOA)-related news and events with a panel of industry analysts, experts, and guests.

I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. We’re joined today by a handful of prominent IT analysts who cover SOA and related areas of technology, business, and productivity.

Topics we're going to discuss this week include the SOA & Business Process Conference held by Microsoft in Redmond, Wash., at which Microsoft announced several product roadmaps and some strategy direction around SOA.

We're also going to discuss issues around "Green SOA." How will SOA affect companies as they attempt to decrease their energy footprint and become kinder and gentler to the environment and planet earth, and what might SOA bring to the table in terms of long-term return on investment (ROI) when energy-related issues are factored in?

To help us sort through these issues, we’re joined this week by Jim Kobielus. He is a principal analyst at Current Analysis. Welcome back, Jim.

Jim Kobielus: Hi, Dana. Hello, everybody.

Gardner: We're also joined by Neil Macehiter, principal analyst at Macehiter Ward-Dutton in the UK. Thanks for coming along, Neil.

Neil Macehiter: Hi, Dana. Hi, everyone.

Gardner: Joe McKendrick, an independent analyst and blogger. Welcome back to the show, Joe.

Joe McKendrick: Thanks, Dana, glad to be here.


On Microsoft-Oriented Architecture and the SOA Confab ...

Gardner: Let’s dive into our number one topic today. I call it Microsoft Oriented Architecture -- MOA, if you will -- because what we've been hearing so far from Microsoft about SOA relates primarily to their tools and infrastructure. We did hear this week some interesting discussion about modeling, which seems to be a major topic among the discussions held at this conference on Tuesday, Oct. 30.

It's going to be several years out before these products arrive -- we probably won't even see betas until well into 2008 on a number of these products. Part of the logic seems to be that you can write anywhere, have flexibility in your tooling, and then coalesce around a variety of models or modeling approaches to execute through an über or federated modeling approach that Microsoft seems to be developing. That would then execute or deploy services on Microsoft foundational infrastructure.

I'm going to assume that there is also going to be loosely coupled interoperability with services from a variety of different origins and underlying infrastructure environments, but Microsoft seems to be looking strategically at this modeling layer, as to where it wants to bring value even if it’s late to the game.

Let’s start with Jim Kobielus. Tell us a little bit about whether you view Microsoft's moves as expanding on your understanding of their take on SOA, and what do you make of this emphasis on modeling?

Kobielus: First, the SOA universe is heading toward a model-driven paradigm for distributed service development and orchestration, and that's been clear for several years now. What Microsoft discussed this week at its SOA and BPM conference was nothing radically new for the industry or for Microsoft.

Over time, with Visual Studio and the .NET environment, they've been increasingly moving toward a more purely visual paradigm. "Visual" is in the very name of their development tool. Looking at the news this week from Microsoft on the so-called Oslo initiative, they are going to be enhancing a variety of products -- Visual Studio, BizTalk Server, BizTalk Services, and Microsoft System Center -- and bringing together the various metadata repositories underlying those products to enable a more model-driven approach to distributed development.

Gardner: They get into some BizTalk too, right?

Kobielus: Yes, BizTalk Server for premises-based deployment and BizTalk Services as software as a service (SaaS), the channel through which it can deliver BizTalk functionality going forward. I had to pinch myself and ask myself what year this is. Oh, it's 2007, and Microsoft is finally getting modeling religion. I still remember that in 2003-2004 there was a big upswell of industry interest in model-driven architecture (MDA).

Gardner: We've had some standards developed in the industry since then too, right?

Kobielus: I was thinking, okay, that’s great, Microsoft, I have no problem with your model-driven approach. You're two, three, or four years behind the curve in terms of getting religion. That’s okay. It’s still taking a while for the industry to completely mobilize around this.

In other words, rather than developing applications, they develop business models and technology models to varying degrees of depth and then use those models to automatically generate the appropriate code and build the appropriate sources. That's a given. One thing that confuses me, puzzles me, or maybe just dismays me about Microsoft's announcement is that there isn't any footprint here for the actual standards that have been developed, like OMG's Unified Modeling Language (UML), for example.

Microsoft, for some reason I still haven’t been able to divine, is also steering clear of UML in terms of their repositories. I'm not getting any sense that there is a UDDI story here or any other standards angle to these converged repositories that they will be rolling out within their various tools. So, it really is a Microsoft Oriented Architecture. They're building proprietary interfaces. I thought they were pretty much behind open standards. Now, unless it’s actually 2003, I have to go and check my calendar.

Gardner: They did mention that they're going to be working on a repository technology for Oslo metadata, which will apparently be built into its infrastructure services and tools. There was no mention of standards, and part of the conceptual framework around SOA is that there has to be a fairly significant amount of standardization in order to make this inclusion of services within a large business process level of activity possible.

Some of the infrastructure, be it repository, ESB, management, or governance, needs to be quite open. So, you're saying you're not sure that you're seeing that level of openness. It reminds us of the CORBA versus COM and DCOM situation. OMG was involved with that and supported the development of CORBA.

Let’s go to Neil Macehiter. Do you see this as MOA or do you think that they are going to have to be open, if it’s going to be SOA values?

Macehiter: I don’t see this as exclusively Microsoft-oriented, by any stretch. I’d also question Jim’s comment on there being nothing radically new here. There are a couple of elements to the strategy that Microsoft’s outlined that differentiate it from the model-driven approaches of the past.

The first is that they are actually encompassing management in this modeling framework, and they're planning to support some standards around things like the Service Modeling Language (SML), which will allow the transition from development through to operations. So, this is actually about the model-driven life cycle.

The second element where I see some difference is that Microsoft is trying to extend this common model across software that resides on premises and software that resides in the cloud as services. So, it has a common framework for delivering, as Microsoft refers to it, software plus services. In terms of the standards support with respect to UML, Microsoft has always been lukewarm about UML.

A few years ago, they were talking about using domain-specific languages (DSLs), which underpin elements of Visual Studio that currently exist, as a way of supporting different modeling paradigms. What we will see is the resurgence of DSLs as a means of enabling different modeling approaches to be applied here. The comment regarding UDDI covers only one element of the repository, because where Microsoft is really trying to drive this is around a repository for models -- for an SML model or for the models developed in Visual Studio -- which is certainly broader.

Gardner: There really aren't any standards for unifying models or for a repository of various models.

Macehiter: No, so this smacks of being a very ambitious strategy from Microsoft, which is trying to pull together threads from different elements of the overall IT environment. You've got elements of infrastructure as a service, with things like BizTalk Services, which has been the domain of the large Web platforms. You've got this notion of composite applications and BPM, which is something people like IBM, BEA, Software AG, etc. have been promoting.

Microsoft has got a broad vision. We also mustn't forget that what underpins this is the vision to have an execution framework for models. The models will actually be executed within the .NET Framework in a future iteration. That will be based on Windows Communication Foundation, which itself sits on top of the WS-* standards, and also on Windows Workflow Foundation.

So, that ambitious vision is still some way off, as you mentioned -- beta in 2008, production in 2009. Microsoft is going to have to bring its ISVs and systems integrator (SI) community along to really turn this from being an architecture that's oriented towards Microsoft to something broader.

Gardner: Now, Neil, if Microsoft is, in a sense, leapfrogging the market, trying to project what things are going to be several years out, recognizing that there is going to be a variety of modeling approaches, and that modeling is going to be essential for making SOA inclusive, then they are also going to be federating, but doing that vis-à-vis their frameworks and foundations.

If there is anything in the past that has spurred on industry standards, it's been when Microsoft puts a stake in the ground and says, “We want to be the 'blank,'” which, in this case, would be the place where you would federate models.

Kobielus: I’m glad you mentioned the word "federation" in this context, because I wanted to make a point. I agree with Neil. I’m not totally down on what Microsoft is doing. Clearly, they had to go beyond UML in terms of a modeling language, as you said, because UML doesn’t have the constructs to do deployment and management of distributed services and so forth. I understand that. What disturbs me right now about what Microsoft is doing is that if you look at the last few years, Microsoft has gotten a lot better when they are ahead of standards.

When they're innovating in advance of any standards, they have done a better job of catalyzing a community of partners to build public specs. For example, when Microsoft went ahead of SAML and the Liberty Alliance Federated Identity Standards a few years back, they wanted to do things that weren't being addressed by those groups.

Microsoft put together an alliance around a spec called WS-Federation, which has had sort of hit-and-miss adoption in the market, but there has been a variety of other WS-* standards or specifications that Microsoft has also helped to catalyze the industry around in advance of any formal, de jure standard. I'd like to see it do the same thing now in the realm of modeling.

Macehiter: My guess is that’s exactly what they’re doing by putting a stake in the ground this early. "This is coming from us. There are going to be a lot of developers out there using our tools that are going to be populating our repositories. If you're sensible, you're going to federate with us and, therefore, let’s get the dialogue going." I think that’s partly why the stake is out there as early as it is.

Gardner: Let’s go to Joe McKendrick, Joe, we've seen instances in the past where – whether they're trailing or leading a particular trend or technology -- Microsoft has such clout and influence in the market that they either can establish de-facto standards or they will spur others to get chummy with one another to diminish the Microsoft threat. Do you expect that Microsoft's saying they're going to get in the modeling federation and repository business will prompt more cooperation and perhaps a faster federated standard approach in the rest of the market?

McKendrick: Definitely more and more competitive responses. Perhaps you’ll see IBM, BEA, Oracle, or whatever other entity propose their own approaches. It's great that Microsoft is talking SOA now. It's only been about a year that they have really been active.

Gardner: They didn’t even want to use the acronym. Did they?

McKendrick: I think what's behind this is that Microsoft has always followed the mass market. Microsoft's sweet spot is the small- and medium-business sector. They have a presence in the Fortune 500, but where they've been strong is the small to medium businesses, and these are the companies that don't have the resources to form committees and spend months anguishing over an enterprise architectural approach, planning things out. They may be driven by the development department, but these folks have problems that they need to address immediately. They need to focus and put some solutions in place to resolve issues with transactions, and so forth.

Gardner: That’s interesting, because for at least 10 years Microsoft has had, what shall we say, comprehensive data center envy. They've seen themselves on a department level. They've been around the edges. They've had tremendous success with the client and productivity applications with some major components, including directory and just general operating system level to support servers, and, of course, their tools and the frameworks.

However, there are still very few Fortune 500 or Global 2000 companies that are pure Microsoft shops. In many respects, enterprise Java, distributed computing, and open-standards approaches have dominated the core environment in architecture for these larger enterprises. If Microsoft is going to get into SOA, they're in a better position to do what we’ve been calling Guerrilla SOA, which is on a project-by-project basis.

If you had a lot of grassroots, small-developer, department-level server-oriented activities that Microsoft infrastructure would perhaps be better positioned to dominate, then that's going to leave them with these islands of services. A federated modeling level or abstraction layer would be very fortuitous for them. Anyone have any thoughts about the comprehensive enterprise-wide SOA approach that we have heard from other vendors, versus what Microsoft might be doing, which might not be comprehensive, but could be, in a sense, grassroots even within these larger enterprises?

Macehiter: The other vendors in the non-Microsoft world might talk about enterprise-wide SOA initiatives and organizations that are planning to adopt SOA on an enterprise-wide basis, based on their infrastructure. The reality is that the number of organizations that have actually gone that far is still comparatively small, as we continually see with the same case-study customers being reintroduced again and again.

Microsoft will have to adopt an alternative model. For example, I think Microsoft will follow a similar model and exploit the base it has around the developer community within organizations with things like Visual Studio.

SQL Server is pretty well deployed in enterprise elements of the application platform, by virtue of their being bundled into the OS already. So, they're quite well-positioned to address these departmental opportunities, and then scale out.

This is where some of the capabilities that we talked about, particularly in combination with things like BizTalk Services, allow organizations to utilize workflow capabilities and identity management capabilities in the cloud to reduce the management overhead. The other potential route for Microsoft is through the ISV community.

Gardner: I suppose one counterpoint to that is that Microsoft is well positioned, with its tools, frameworks, skill set, and entrenched positions, to be well exploited for creating services, but when it comes to modeling business processes, we're not really talking about a Visual Studio-level user or developer. Even if the tools are visually oriented, the people or teams that are going to be in a position to construct, amend, develop, and refine these business processes are going to be at a much higher level. They're going to be architects and business analysts. They're going to be different types of people.

They are going to be people who have a horizontal view of an entire business process across heterogeneous environments and across organizational boundaries. Microsoft is well positioned within these grassroots elements. I wonder whether, through a modeling federation layer and its benefits, they can get themselves to the place where they provide the tools and repository for these analyst- and architect-level thinkers.

Kobielus: I think they will, but they need to play all this Oslo technology into their Dynamics strategy for the line-of-business applications. The analysts that operate in the Dynamics world are really the business analysts, the business process re-engineering analysts, etc., who could really use this higher-layer entity modeling environment that Microsoft is putting there. In other words, the analysts we are discussing are the analysts who work in the realm of the SAP or Oracle applications, or the Dynamics applications, not the departmental database application developers.

Macehiter: The other community there would be the SIs, who do a lot of this work on behalf of organizations. As part of the Oslo messaging, Microsoft has talked about this sort of capability being much more model-driven, at a higher level of abstraction, as a means of allowing SIs to become more like ISVs in terms of delivering more complete solutions. That's another key community, where Microsoft just doesn't compete, in contrast to IBM, which is competing directly with the likes of Accenture and CapGemini. That's another community that Microsoft will be looking to work very closely with around this.

Gardner: In the past, Microsoft did very well by targeting the hearts and minds of developers. Now, it sounds like they are going to be targeting the hearts and minds of business analysts, architects, and business-process level oriented developers. Therefore, they can position themselves as a neutral third party in the professional services realm. They can try to undermine IBM’s infrastructure and technology approach through this channel benefit of working with the good tooling and ease of deployment at the modeling and business-process construct level with these third-party SIs. Is that it?

McKendrick: As an addendum to what you just said, Microsoft isn't necessarily going to go directly after customers of IBM, BEA, etc. Microsoft is providing this potential to companies that have been under-served, companies that cannot afford extensive SOA consulting or integration work. It's going after the SMB sector, the Great Plains and Dynamics applications that Jim spoke of. Those are SMB applications. The big companies will go to SAP.

Gardner: So, Microsoft could have something that would be a package more amenable to a company, of say, 300-to-2,000 seats, maybe even 300-to-1,000.

McKendrick: Exactly, Microsoft is the disrupter in this case. There are other markets where Microsoft is being disrupted by Web 2.0, but in SOA, Microsoft is playing the role of disrupter and I think that’s what their strategy is.

Kobielus: I want to add one last twist here. I agree with everything Joe said. Also, the Oslo strategy, the modeling tools, will become very important in Microsoft's overall strategy for the master data management (MDM) market it has already announced. A year from now, Microsoft will release its first true MDM product, which incorporates, for example, the hierarchy management and cross-domain catalog management capabilities from its strategic acquisitions.

What Microsoft really needs to be feature-competitive in the MDM market is a model-driven, visual business-process development and stewardship tool. That way, teams of business and technical analysts can work together in a customer data-integration, product information-management, or financial consolidation hub environment to build the complex business logic into complex applications under the heading of MDM. If Microsoft's MDM team knows what they are doing, and I assume they do, then they should definitely align with the Oslo initiative, because it will be critical for Microsoft to compete with IBM and Oracle in this space.

Gardner: As we've discussed on this show, the whole data side of SOA in creating common views, cleaning and translating, schemas and taxonomies, and MDM is extremely important. You can’t do SOA well, if you don’t have a coherent data services strategy. Microsoft is one of the few vendors that can provide that in addition to many of these other things that we're discussing. So, that’s a point well taken. Now, to Joe’s point about the SMB Market, not only would there be a do-it-yourself, on-premises approach to SOA, but there are also SaaS and wire-based approaches.

We've heard a little bit about a forthcoming protocol -- BizTalk Services 1 -- and that probably will relate to Microsoft's Live and other online-based approaches. End users, be they architects and analysts or those who are going to be crafting business processes, if they're using a strictly Web-based approach, don't know or care what's going on beneath the covers in terms of foundations, frameworks, operating systems, and runtime environments. They are simply looking for ease of acquisition and use, productivity, scale, and reliability.

It strikes me that Microsoft is now working towards what might be more of a Salesforce.com, Google, or Amazon-type environment, where increasingly SOA is off the wire entirely. It really is a matter of how you model and tool those services that becomes a king maker in the market. Any thoughts on how Microsoft is positioning these products for that kind of a play?

Macehiter: Definitely. Software plus services, which is the way that Microsoft articulates this partitioning of capability between on-premises software and services delivered in the cloud, is definitely a key aspect of the Oslo strategy, and BizTalk Services is just one element of that.

For example, if an organization needs to support some message flow that crosses between organizations over the firewall, BizTalk Services will provide a capability that allows you to express that declaratively. You can see that evolving, but that's more infrastructure services. Clearly, another approach might be a higher-level service, an application-type service, and the architecture that Microsoft is talking about is attempting to address that as well.

This is definitely a key element of the story, which is about making sure that Microsoft remains relevant in the face of an increasing shift, particularly in the SMB market, toward services delivered in the cloud. It's about combining the client, the server, and services, and providing models -- in terms of the way you think about the applications you need and the way you manage and deploy them -- that can encompass that in a way that doesn't incur significant effort.

Gardner: Perhaps the common denominator between the on-premises approach -- be it departmental level, enterprise-wide, SMB, through the cloud, or though an ecology of providers -- is at this modeling layer. This is the inflection point where, no matter how you do SOA, you’re going to want to be in a position to do this well, with ease, and across a variety of different approaches. Is that fair?

Macehiter: Yes. That's why this is a better attempt by Microsoft to change the game and push the boundaries. It's not just model-driven development revisited in an MDA and .NET world. This is broader than that.

Gardner: This is classic Microsoft strategy, leapfrogging and trying to get to what the inflection point or the lock-in point might be, and then rushing to it and taking advantage of its entrenched positions.

McKendrick: Following the mass market, exactly.

Gardner: Let’s move on to our next subject, now that we’ve put that one to rest. The implications are that Microsoft is not out of the SOA game, that it's interested in playing to win, but, once again, on its own terms based on its classic market and technology strategies.

McKendrick: And reaching out to companies that could not afford SOA or comprehensive SOA, which it's done in the past.


On Green SOA and the IT Energy-Use Factor ...

Gardner: Let’s move on to our new subject, Green SOA. SOA approaches and methodologies bring together abstractions of IT resources, developing higher level productivity through business process, management, organization, and governance. How does that possibly impact Green IT?

It's a very big topic today. In fact, Green IT was named number one among the top-ten strategic technology areas that Gartner Group identified for 2008. How does SOA impact this? Jim Kobielus, you have given this a lot of thought. Give us the lay of the land.

Kobielus: Thank you, Dana. Clearly, in our culture the Green theme keeps growing larger in all of our lives, and I'm not going to belabor all the ramifications of Green. In terms of Green, as it relates to SOA, you mentioned just a moment ago, Dana, the whole notion of SOA is based on abstraction, service contracts, and decoupling of the external calling interfaces from the internal implementations of various services. Green smashes through that entire paradigm, because Green is about as concrete as you get.

SOA focuses on maximizing the sharing, reuse, and interoperability of distributed services or resources, application logic, or data across distributed fabrics. When they're designing SOA applications, developers don't necessarily have the incentive, or even the inclination, to think in terms of the ramifications at the physical layer of the services they're designing and deploying, but Green is all about the physical layer.

In other words, Green is all about how do human beings, as a species, make wise use and stewardship of the earth’s nonrenewable, irreplaceable resources, energy or energy supplies, fossil fuels, and so forth. But also it’s larger than that, obviously. How do we maintain a sustainable culture and existence on this planet in terms of wise use of the other material resources like minerals and the soil etc.?

Gardner: Isn't this all about electricity, when it comes to IT?

Kobielus: Yes, first and foremost, it’s pitched at the energy level. In fact, just this morning in my inbox I got this from IBM: "Join us for the IBM Energy Efficiency Certificate Announcement Teleconference." They're going to talk about energy efficiency in the datacenter and best practices for energy efficiency. That’s obviously very much at the core of the Green theme.

Now, getting to the point of how SOA can contribute to the greening of the world. SOA is the whole notion of consolidation -- consolidation of application logic, consolidation of servers, and consolidation of datacenters. In other words, it essentially reduces the physical footprint of the services and applications that we deploy out to the mesh or the fabric.

Gardner: Aren't those things independent of SOA? I mean, if you're doing datacenter consolidation and modernization, if you are moving from proprietary to standards-based architectures, what that has got to do with SOA?

Kobielus: Well, SOA is predicated on sharing and reuse. Okay, you have a center of competency. You have one hunk of application logic that handles order processing in the organization. You standardize on that, and then everybody calls that, invokes that over the network. Over time, if SOA is successful, other centers of development or other deployed instances of code that do similar things will be decommissioned to enable maximum reuse of the best-of-breed order-processing technology that's out there.

As enterprises realize the ROI, the reuse and sharing should naturally lead to greater consolidation at all levels, including in the datacenter. Basically, reducing the footprint of SOA on the physical environment is what consolidation is all about.

Gardner: So, these trends that are going concurrently -- unification, consolidation, and virtualization -- allow you to better exploit those activities and perhaps double down on them in terms of a fewer instances of an application stack, but more opportunity to reuse the logic and the resources more generally. So a highly efficient approach that ultimately will save trees and put less CO2 in the atmosphere.

Kobielus: I want to go back to Microsoft. Four years ago, in 2003, I went to their analyst summit in Redmond. They presented something they called the service definition modeling language (SDML) as a proprietary spec and a possible future spec for modeling services and applications at the application layer and physical layer. An application gets developed, it gets orchestrated, it gets distributed across different nodes, and the model allows you to define the physical partitioning of that application across various servers. I thought:

That's kind of interesting. They are taking a whack at both trying to model from the application down to the physical layer and thinking through the physical consequences of application development activities.

Gardner: Another trend in the market is the SaaS approach, where we might acquire more types of services, perhaps on a granular level or wholesale level from Google, Salesforce, Amazon, or Microsoft, in which case they are running their datacenters. We have to assume, because they're on a subscription basis for their economics, that they are going to be highly motivated toward high-utilization, high-efficiency, low-footprint, low-energy consumption.

That will ultimately help the planet as well, because we wouldn't have umpteen datacenters in every single company of more than 150 people. We could start centralizing this, almost like a utility would. We would think that these large companies, as they put in these massive datacenters, could have the opportunity for a volume benefit in how they consume and purchase energy.

Gardner: Neil Macehiter, what do you make of this Green-SOA relationship?

Macehiter: We need to step back and look at what we are talking about. You mentioned ROI. If we look at this from a Green ROI perspective, organizations are not going to be looking at SOA as the first step in reducing their Green footprint. It's going to be about server and storage consolidation to reduce the power consumption, provide more efficient cooling, and management approaches to ensure that servers aren’t running when they don’t need to be. That’s going to give them much bigger Green bang for the buck.

Certainly, the ability to reuse and share services is going to have an impact in terms of reducing duplications, but in the broader scheme of things I see that contribution as being comparatively small. The history that we have is largely ignoring the implications of power and heat, until we get to the size of a Google or a Microsoft, where we have to start thinking about putting our datacenters next to large amounts of water, where we can get hydroelectric power.

So, IT has a contribution to make, but there isn't anything explicit in SOA approaches, beyond things like service reuse and sharing that can really contribute. The economies of scale that you get from SaaS in terms of exploiting those services come from more effective use of the datacenter resources. This is those organizations' business, and, given the constraints they operate under, they can’t get datacenters big enough, because then there are no power stations big enough.

Gardner: Your point is well taken. Maybe we're looking at this the wrong way. Maybe we’ve got it backwards. Maybe SOA, in some way, aids and abets Green activities. Maybe it's Green activities, as they consolidate, unify, seek high utilization, and storage, that will aid and abet SOA. As Gartner points out, in their number one strategic technology area for 2008, Green initiatives are going to direct companies in the way that they deploy and use technology towards a situation where they can better avail themselves of SOA principles. Does that sound right, Joe McKendrick?

McKendrick: In an indirect way, it sounds right, but I want to take an even further step back and look at what we have here. Frankly, the Green IT initiative is misguided, and the wrong questions are being asked about Green IT. Let me say that I have been active in environmental causes, and I have done consulting work with a company that has worked with utilities and EPRI, the Electric Power Research Institute, on energy-saving initiatives.

It's great that IT is emphasizing efficient datacenters, but what we need to look at is how much energy IT has saved the world in general? How much power is being saved as a result of IT initiatives? SOA rolls right into this. For example, how many business trips are not being taken now, because of the availability of video conferencing and remote telecommuting, telework and things of that sort? We need studies. I don’t have the data on this and there isn’t any data out there that has really tracked this. In e-commerce, for example, how many stores have not been built because of e-commerce?

Gardner: These are really good points -- without IT, the overall amount of energy consumption in the world would be much greater and productivity much lower. It's very difficult to put all the cookie crumbs together and precisely measure the inputs and outputs, but that's not really the point. We're not talking about what we would have saved if we didn't have IT. What can we do to refine even further that which we have to use to create the IT that we have?

Macehiter: The reality is that we can't offset what we've saved in the past against what we are going to consume in the future. We are at a baseline, and it is not about apportioning blame between industries and saying, "Well, IT doesn't have to do so much, because we've done a lot in the past."

McKendrick: But, we are putting demands on IT, Neil. We're putting a lot of demands on IT for additional IT resources.

Macehiter: If you go into a large investment bank, and look at what proportion of their electricity consumption is consumed by IT, I'd hazard a guess that it's a pretty large chunk, alongside facilities.

McKendrick: And probably lots of demands are put on those datacenters, but how much energy is that saving because of additional services being put out to the world, being put out to society?

Gardner: What's your larger point, Joe, that we don’t need to worry too much about making IT more energy efficient because it's already done such a great job compared to the bricks-and-mortar, industrialized past?

McKendrick: The problem is, Dana, we don’t know. There are no studies. I'd love to see studies commissioned. I'd love to see our government or a private foundation fund some studies to find out how much energy IT has been saving us.

Kobielus: I agree with everything you guys are saying, because the issue is not so much reducing IT’s footprint on the environment. It’s reducing our species' overall footprint on the resources. One thing to consider is whether we have more energy-efficient datacenters. Another thing to consider is that, as more functionality gets pushed out to the periphery in terms of PCs and departmental servers, the vast majority of the IT is completely outside the datacenter.

Gardner: Jim, you are really talking about networked IT, so it's really about the Internet, right? The Internet has allowed for a "clicks in e-commerce" rather than a "bricks in heavy industries" approach. In that case, we're saying it's good that IT and the Internet have given us vast economies of scale, productivity, and efficiency, but that also requires a tremendous amount of electricity. So, isn't this really an argument for safe nuclear power, and for putting small nuclear reactors next to datacenters so we perhaps don't create CO2?

Macehiter: Let's not forget that this isn't just about enterprise use of IT. If I look at my desk, as a consumer of IT, I've got a scanner, hard disk, two machines, screen, two wireless routers, and speakers that are all consuming electricity. Ten years ago, I just wouldn’t have had that. So, we have to look broader than the enterprise. We can get into a whole other rat’s nest, if we start into safe nuclear power or having wind farms near our datacenter.

Gardner: It's going to be NOA, that’s Nuclear-Oriented Architecture…

Kobielus: In the Wall Street Journal this morning, there was an article about Daylight Saving Time. This year, in the US, Daylight Saving Time has been moved up by a week at the beginning in March and moved back by a week into November. So, this coming Sunday, we are going to finally let our clocks fall back to so-called Standard Time.

The article said that nobody has really done a study to show whether we are actually saving any energy from Daylight Saving Time. There have been no reliable studies done. So, when legislators change these weeks, they're just assuming that, by having more hours of daylight in the evening, we are using less illumination, and therefore the net budget or net consumption of energy goes down.

In fact, people have darker mornings, and people tend to have more morning-oriented lives. People in the morning quite often are surfing the Web, and viewing the stuff on their TiVo, etc. So, net net, nobody even knows with Daylight Saving Time whether it's Green friendly, as a concept.

Gardner: Common sense would lead you to believe that you’re just robbing Peter to pay Paul on this one, right? Perhaps there are some lessons to be learned on that same level for IT. We think we're saving footprints in data centers and we are consolidating and unifying, but we are also bringing more people online and they have larger energy-consuming desktop environments or small-office environments that Neil described. If there are 400 million people with small offices and there are a billion people on the Internet, then clearly the growth is far and away outstripping whatever efficiencies we might bring to the table.

McKendrick: The efficiencies gained by IT might be outstripping any concerns about green footprints with datacenters. We need data. We need studies to look at this side of it. The U.S. Congress is talking about studying the energy efficiency of datacenters, and you can imagine some kind of regulations will flow from that.

Kobielus: I'm going to be a cynic and am just going to guess that large, Global 2000 corporations are going to be motivated more by economics than altruism when it comes to the environment. So back to the announcement today, on Nov. 2, about IBM launching an initiative to give corporate customers a way to measure and potentially monetize energy efficient measures in their datacenters.

I think IBM is trying to come up with the currency of sorts, a way to earn energy-efficient certificates that can then apply some kind of an economic incentive and/or metric to this issue. As we discussed earlier, the Green approach to IT might actually augment SOA, because I don’t think SOA leads to Green, but many of the things you do for Green will help people recognize higher value from SOA types of activities.

Gardner: Let's leave it at that. We're out of time. It's been another good discussion. Our two topics today have been the Microsoft SOA conference and the abstract relationship between Green IT and SOA. We have been joined by our great thinkers and fantastic contributors here today, including Jim Kobielus, principal analyst at Current Analysis. Thanks, Jim.

Kobielus: Thank you, Dana. I enjoyed it as always.

Gardner: Neil Macehiter, principal analyst at Macehiter Ward-Dutton. Thanks, Neil.

Macehiter: Thanks, Dana. Thanks, everyone.

Gardner: And, Joe McKendrick, the independent analyst and blogger extraordinaire. Thanks, Joe.

McKendrick: Thanks, Dana. It was great to be here.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to BriefingsDirect SOA Insights Edition, Volume 27. Come back again next time. Thank you.

Listen to the podcast here.

Produced as a courtesy of Interarbor Solutions: analysis, consulting and rich new-media content production.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or in becoming a sponsor of this or other B2B podcasts, please feel free to contact Interarbor Solutions at 603-528-2435.

Transcript of BriefingsDirect SOA Insights Edition podcast, Vol. 27, on Microsoft SOA and Green IT. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.

Saturday, February 17, 2007

Transcript of BriefingsDirect SOA Insights Edition Vol. 9 Podcast on TIBCO's SOA Tools News, ESBs as Platform, webMethods Fabric 7, and HP's BI Move

Edited transcript of weekly BriefingsDirect[TM] SOA Insights Edition, recorded Jan. 19, 2007.

Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Dana Gardner at 603-528-2435.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect SOA Insights Edition, Volume 9. This is a weekly discussion and dissection of Service-Oriented Architecture (SOA)-related news and events with a panel of IT industry analysts. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions, ZDNet blogger, and Redmond Developer magazine columnist.

This week, our panel of independent IT analysts includes show regular Steve Garone. Steve is an independent analyst, a former program vice president at IDC and the founder of the AlignIT Group. Welcome back, Steve.

Steve Garone: Hi, Dana. It's great to be here again.

Gardner: Also joining us is Joe McKendrick, an independent research consultant and columnist at Database Trends, as well as a blogger at ZDNet and ebizQ. Welcome back to the show, Joe.

Joe McKendrick: Hi, Dana.

Gardner: Next Neil Ward-Dutton, research director at Macehiter Ward-Dutton in the U.K., joins us once again. Hello, Neil.

Neil Ward-Dutton: Hi, Dana, good to be here.

Gardner: Jim Kobielus, principal analyst at Current Analysis, is also making a return visit. Thanks for coming along, Jim.

Jim Kobielus: Hi, everybody.

Gardner: Neil, you had mentioned some interest in discussing tools. We’ve discussed tools a little bit on the show, but not to any great depth. There have been some recent announcements that highlight some of the directions that SOA tools are taking, devoted toward integration, for the most part.

However, some of the tools are also looking more at the development stage of how to create services and then join up services, perhaps in some sort of event processing. Why don’t you tell us a little bit about some of the recent announcements that captured your attention vis-a-vis SOA tools?

Ward-Dutton: Thanks, Dana. This was really sparked by a discussion I had back in December -- and I think some of the other guys here had similar discussions -- with TIBCO Software around the announcement they were making for this thing called ActiveMatrix. The reason I thought it was worth discussing was that I was really kind of taken by surprise. It took me a while to really get my head around it, because what TIBCO is doing with ActiveMatrix is shifting beyond its traditional integration focus and providing a real container for the development and deployment of services, which is subtly different and not what TIBCO has historically done.

It’s much more of a development infrastructure focus than an integration infrastructure focus. That took me by surprise and it took me a while to understand what was happening, because I was so used to expecting TIBCO to talk about integration. What I started thinking about was, "What is the value of something like ActiveMatrix?" Because at first glance, ActiveMatrix appears to be something with JBI, a Java Business Integration implementation, basically a kind of standards-based plug-and-play ESB on steroids. It's probably a crass way of putting it, but you kind of get the idea.

Let's look at it from the point of view of a development team. What is required to help those guys get into building high-quality networks of services? There are loads of tools around to help you take existing Java code, or whatever, right-click on it, and create SOAP and WSDL bindings, and so on. But there are other issues of quality, consistency of interface definitions, and use of schemas -- more leading-edge thinking around using policies, for example. This would involve using policies at design time, and then having those enforced in the runtime infrastructure to do things like manage security automatically and help manage performance, availability, and so on.

It seems to me that this is the angle they’re coming from, and I haven’t seen very much of that from a lot of the other players in the area. The people who are making most of the noise around SOA are still approaching it from the point of view: "You’ve got all this stuff already, all these assets, and what you’re really doing is service-enabling and then orchestrating those services." So, I just want to throw that out there. It would be really interesting to hear what everyone else thinks. Is what TIBCO is doing useful? Are they out ahead or are there lots of other people doing similar things?
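As a rough illustration of the kind of service-enabling Neil mentions above -- taking existing Java code and exposing it as a SOAP/WSDL service through annotations rather than hand-written plumbing -- here is a minimal JAX-WS-style sketch. The class, method, and endpoint names are hypothetical, and this shows only the generic annotation-driven pattern, not TIBCO's or any other vendor's actual product API.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical example: an existing order-lookup class exposed as a SOAP service.
// The @WebService annotation lets JAX-WS tooling generate the WSDL and SOAP bindings,
// so the developer does not hand-code the wire-level plumbing.
@WebService
public class OrderLookupService {

    @WebMethod
    public String getOrderStatus(String orderId) {
        // Existing business logic would be called here; hard-coded for illustration.
        return "PROCESSING";
    }

    public static void main(String[] args) {
        // Publish the service at a local endpoint; the generated WSDL is served at ?wsdl.
        Endpoint.publish("http://localhost:8080/orders", new OrderLookupService());
    }
}

The declarative policy side Neil describes -- security, performance, availability -- would then be attached to such a service at design time and enforced by the runtime infrastructure, rather than written into the method bodies.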

Gardner: TIBCO’s heritage has been in middleware messaging, which then led them into integration, Enterprise Application Integration (EAI), and now they’ve moved more toward a service bus-SOA capability. Just to clarify, this tooling, is it taking advantage of the service bus as a place to instantiate services, production, and management? And is it the service bus that’s key to the fact that they’re now approaching tooling?

Ward-Dutton: That's how I see it, except it extends the service bus in two ways. One is into the tooling, if you think about what Microsoft is doing with Windows Communication Foundation. From a developer perspective, they're abstracting a lot of the glop they need to tie code into an ESB, and TIBCO is trying to do something similar to that.

It's much more declarative. It's all about annotations and policies you attach to things, rather than code you have to write. On the other side, what was really surprising to me was that, if I understand it right, [TIBCO] are unlike a lot of the other ESB players. They are trying to natively support .NET, so they actually have a .NET container that you can write .NET components in and hook them into the service bus natively. I haven't really seen that from anywhere else, apart from Microsoft. Of course, they're .NET only. I think there are two ways in which they're moving beyond the basic ESB proposition.

Gardner: So, the question is about ESB as a platform. Is it an integration platform that now has evolved into a development platform for services, a comprehensive place to manage and produce and, in a sense, direct complex service integration capabilities? Steve Garone, is the definition of ESB, now, much larger than it was?

Garone: I think it is. I agree with Neil. When I looked at this announcement, the first thing that popped into my mind was, "This is JBI." When Sun Microsystems talked about JBI back in 2005, this is what they were envisioning, or part of what they were envisioning. Basically, as a platform, it raises the level of abstraction above where current ESB thinking already was. That thinking was confusing users at the time -- and still is -- because they didn't quite understand how, why, or when they should use an ESB.

In my opinion, this raises that level of abstraction to eliminate a lot of the work developers have to do in terms of coding to a specific ESB or to a specific integration standard, and lets them focus on developing the code they need to make their applications work. But, I would pull back a little bit from the notion that this is purely, or at a very high percentage, a developer play. To me, this is a logical extension of what companies like TIBCO have done in the past in terms of integration and messaging. However, it does have advantages for developers who need to develop applications that use those capabilities by abstracting out some of the work that they need to do for that integration.

Gardner: How about you, Joe? Do you see this as a natural evolution of ESB? It makes sense for architects and developers and even business analysts to start devoting logic of process to the ESB and let the plumbing take care of itself, vis-à-vis standards and module connectors.

McKendrick: In terms of ESBs, there's actually quite a raging debate out there about the definition of an ESB, first of all, and what the purpose of an ESB should be. For example, I quote Anne Thomas Manes . . .

Gardner: From Burton Group, right?

McKendrick: Right. She doesn't see ESB as a solution that a company should ultimately depend on or focus on as mediation. She does seem to lean toward the notion of an ESB on the development side, as a platform versus a mediation system. I've also been watching the work of Todd Biske, who is over at MomentumSI. Todd also questions whether ESBs can take on such multiple roles in the enterprise as an application platform versus a mediation platform. He questions whether you can divide it up that way and sell it to two very distinct markets and groups of professionals within the enterprise.

Gardner: How about you, Jim Kobielus? Do you see the role of ESB getting too watered down? Or, do you see this notion of directing logic to the ESB as a way of managing complexity amid many other parts and services, regardless of their origins, as the proper new direction and definition of ESB?

Kobielus: First of all, this term came into use a few years back, popularized by Gartner and, of course, by Progress Software as a grand unification acronym for a lot of legacy and new and emerging integration approaches. I step back and look at ESB as simply referring to a level backplane that virtualizes the various platform dependencies. It provides an extremely flexible integration fabric that can support any number of integration messaging patterns, and so forth.

That said, looking at what TIBCO has actually done with ActiveMatrix Service Grid, it's very much on the virtualization side of what an ESB is all about, in the sense that you can take any integration logic that you want, develop it in any language, for any container, and then run it in this virtualized service grid.

One of the great things about the ActiveMatrix service grid is that TIBCO is saying you don’t necessarily have to write it in a particular language like Java or C++, but rather you can compose it to the JBI and Service Component Architecture (SCA) specifications. Then, through the magic of ActiveMatrix service grid, it can get compiled down to the various implementation languages. It can then get automatically deployed out to be executed in a very flexible end-to-end ESB fabric provided by TIBCO. That’s an exciting vision. I haven’t seen it demonstrated, but from what they’ve explained, it’s something that sounds like it’s exactly what enterprises are looking for.

It's a virtualized development environment. It's a virtualized integration environment. And, really, it's a virtualized policy management environment for end-to-end ESB lifecycle governance. So, yeah, it is very much an approach for overcoming and taming the sheer complexity of an SOA in this level backplane. It sounds like it's the way to go. Essentially, it sounds very similar to what Sonic Software has been doing for some time. But TIBCO is notable, because they're playing according to open standards that they have helped to catalyze -- especially the SCA specifications.
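To make the SCA angle a bit more concrete, here is a minimal sketch of what a component written against the SCA 1.0 Java annotations might look like. The interface, component, and package names are hypothetical, and the annotation package (org.osoa.sca.annotations) is the one defined by the original SCA 1.0 Java spec, so treat this as illustrative of the specification rather than of TIBCO's actual product API. The idea is that the component declares what it offers and what it depends on, and the runtime -- an ActiveMatrix-style grid, Apache Tuscany, or another SCA container -- decides how to wire and host it.

import org.osoa.sca.annotations.Reference;
import org.osoa.sca.annotations.Service;

// Hypothetical service contracts; in SCA the interface defines the service.
interface InvoiceService {
    double total(String orderId);
}

interface TaxService {
    double taxFor(double amount);
}

// The component declares what it offers (@Service) and what it needs (@Reference).
// Whether the reference is satisfied locally, over SOAP, or over JMS is decided in
// the composite descriptor and by the runtime, not in this code.
@Service(InvoiceService.class)
class InvoiceComponent implements InvoiceService {

    @Reference
    protected TaxService taxService;

    public double total(String orderId) {
        double subtotal = 100.0; // stand-in for a real price lookup
        return subtotal + taxService.taxFor(subtotal);
    }
}

Because the binding decisions live outside the component, the same code can, in principle, be redeployed across different containers in the grid -- which is the virtualization point Jim is making.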

Gardner: Now, TIBCO isn’t alone in some releases since the first of the year. We recently had webMethods with its Fabric 7.0. Has anyone on the call taken a briefing with webMethods and can you explain what this is and how it relates to this trend on ESB?

Kobielus: I've taken a briefing on Fabric 7.0, and it's really like TIBCO's ActiveMatrix in many ways. There's a strong development story there and a strong virtualization story. In the case of webMethods Fabric 7.0, you can develop complex, end-to-end integration process logic at a high level of abstraction. In their case, they're implementing the Business Process Modeling Notation (BPMN) specification. Then, within their tooling, you can take that BPMN definition and compile it down to implementation languages like BPEL that can then get executed by the process containers or process logic containers within the Fabric 7.0 environment.

It's a very virtualized ESB/SOA development environment with a strong BPMN angle to it and a very strong metadata infrastructure. webMethods recently acquired Infravio, so webMethods is now very deep both on the UDDI registry side and in providing the plumbing for a federated metadata infrastructure that's necessary for truly platform-agnostic ESB and SOA applications.

Gardner: And, I believe BEA has come out through its Liquid campaign with the components that amount to a lot of this as well. I'm not sure there are standards in interoperability, based on TIBCO's announcement, but clearly I think they have the same vision. In the past several weeks, we’ve discussed how complexity has been thrown at complexity in SOA, and that’s been one of the complaints, one of the negative aspects.

It seems to me that this move might actually help reduce some of that by, as you point out, virtualizing to the level where an analyst, an architect, a business process-focused individual or team can focus in on this level of process to an ESB, not down to application servers or Java and C++, and take advantage of this abstraction.

Before we move on to our next topic, I want to go back to the panel. Steve Garone, do you see this as a possible way of reducing the complexity being thrown at complexity issue?

Garone: Yes, I do. A lot of it's going to depend on how well this particular offering -- if you're talking about TIBCO or webMethods, but I think we were sort of focusing mostly on TIBCO this morning.

Gardner: I think I'd like to extend this to the larger trend. Elements of what IBM is doing relate to this. Many of the players are trying to work toward this notion of abstracting up, perhaps using the ESB as a platform to do so. Let's leave it at a more general level.

Garone: That's fine -- a good point. You're right. IBM is doing some work in this area, and logically so, although they come at it from a different angle, even though they have a lot of integration products. I consider them a platform vendor, which means their viewpoint is a little more about the software stack than a specific integration paradigm.

I think the hurdle that we’ll need to get over here in terms of users taking a serious look at this is the confusion over what an ESB actually is and what it should be used for by customers. The vendors who talk to their customers about this are going to have to get over a perception hurdle that this is somewhat different. It makes things a lot easier and resolves a lot of those confusion points around ESBs. Therefore, it's something they should look at seriously, but in terms of the functionality and the technology behind it, it's the logical way to go.

Gardner: Joe McKendrick, how about you in this notion of simplicity being thrown at complexity? Are we going to retain that? Is this the right direction?

McKendrick: Ah, ha. Well, I actually have fairly close ties with SHARE, the mainframe user group, and put out a weekly newsletter for them. The interesting point about SOA in general is that TIBCO, webMethods and everybody are moving to SOA. They have no choice. They have to begin to subscribe to the standards they agree upon. What else would they do?

When we talk about what was traditionally known as the Enterprise Application Integration (EAI) market, it’s been associated with large-scale, expensive integration projects. What I have seen in the mainframe market is that there is interest in SOA, and there is a lot of experimentation and pilot projects. There are some very clear benefits, but there is also a line of thinking that says, "The application we have running on the mainframe, our CICS application transaction system, works fine. Why do we need to SOA-enable this platform? Why do we need to throw in another layer, an abstraction of service layer, over something that works fine, as-is?"

It may seem archaic or legacy. You may even have green-screen terminals, but it runs. It’s got mainframe power behind it. It’s usually a two-tier type of application. The question organizations have to ask themselves is, Do we really need to add another layer to an operation that runs fine as-is?
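To make that trade-off concrete, here is a minimal, hypothetical sketch of the kind of "extra layer" in question: a thin service facade in front of a legacy transaction that already works as-is. The names, the fixed-width record layout, and the connector stub below are illustrative assumptions only, not any particular product's API; a real CICS integration would go through whatever adapter or gateway the organization actually runs.

```python
# Illustrative sketch only -- hypothetical names throughout.
import json

def legacy_inquiry(account_id: str) -> str:
    """Stand-in for the existing, working mainframe transaction.
    Returns a fixed-width record, as green-screen era programs often do."""
    return f"{account_id:<10}{'ACTIVE':<8}{1234.56:>12.2f}"

def account_service(account_id: str) -> dict:
    """The debated new layer: translate the legacy record into a
    self-describing structure that other services and partners can consume."""
    record = legacy_inquiry(account_id)
    return {
        "accountId": record[0:10].strip(),
        "status": record[10:18].strip(),
        "balance": float(record[18:30]),
    }

if __name__ == "__main__":
    # The payoff only appears when someone outside the original silo needs this data.
    print(json.dumps(account_service("42"), indent=2))
```

The facade adds nothing for the silo that already runs fine on its own; its value only shows up when other processes, partners, or services need to consume that data in a shared form -- which is exactly the cost-benefit question being raised here.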

Gardner: If they only have isolated operations, and they don’t need to move beyond them, I suppose it's pretty clear from a cost-benefit analysis to stay with what works. However, it seems that more companies, particularly as they merge and become engaged in partnerships, or as they ally with other organizations and go global, want to bring more of their assets into a business process-focused benefit. So, that's the larger evolution of where we’re going. It's not islands of individual applications churning away, doing their thing, but associating those islands for a higher productivity benefit.

Kobielus: The point about what organizations have to examine is right on the money, but I think that’s more of a fundamental issue around SOA in general. The question you asked was how something like this affects the ease with which one can do that, and whether it will figure into the cost-benefit analysis an organization does to see if, in fact, that's the right way to go.

Gardner: Neil, this was your topic. How do you see it? Does this larger notion strike you as moving in the direction of starting to solve this issue of complexity being thrown at complexity -- that is to say, that there’s not enough clear advantage and reduced risk for an organization to embrace SOA? Do you think what you’re seeing now from such organizations as TIBCO and webMethods is ameliorating that concern?

Ward-Dutton: Yes and no. And I think most of my answers on these podcasts end up like that, which is quite a shame. The "no" part of my answer is really the cynical part, which is that, at the end of the day, too much simplicity is bad for business. It’s not really in any vendor’s interest to make things too easy. If you make things too easy, no one’s going to buy any more stuff. And the easiest thing to do, of course, for the company is to say, "You know what? Let’s just put everything on one platform. We’ll throw out everything we’ve got, and rebuild everything from the ground up, using one operating system, one hardware manufacturer, one hardware architecture, and so on."

If the skills problem went away overnight, that would be fantastic. Of course, it’s not just about technology. It’s all of our responsibility to keep reminding everyone that, while this stuff can, in theory, make things simpler, you can’t just consider an end-state. You've got to consider the journey as well, and the complexity and the risk associated with the journey. That’s why so many organizations have difficulties, and that's why the whole world isn't painted Microsoft, IBM, Oracle, or webMethods. We’re in a messy, messy world because the journey is itself a risky thing to do.

So, I think that what's happening with IBM around SCA, what TIBCO is doing around ActiveMatrix, and what webMethods is doing give people with the right skills and the right organizational attributes the ability to create this domain, where change can be made pretty rapidly and in a pretty manageable way. That's much more than just being about technology. It’s actually an organizational and cultural process, an IT process, in terms of how we go about doing things. It's those issues, as well as a matter of buying something from TIBCO. Everything’s bound up together.

Gardner: To pick up on your slightly cynical outlook on vendors who don’t want to make it too simple, they do seem to want to make things simpler from the tooling perspective, as long as that creates the need for their runtime, their servers, their infrastructure, and so on.

TIBCO has also recently announced BusinessWorks 5.4, which is a bit more of a turnkey-platform approach -- one that a very simplified approach to tools might then lead an organization to move into. I guess I see your point, but I do think that the tooling and the simplification are a necessary step for people and process to become the focus and the priority, and that the technology needs to help bring that about?

Ward-Dutton: You’re absolutely right, Dana, but I think part of the point you made when you were asking your question a few minutes ago was around whether we see less technical communities getting more heavily involved in development work. This is kind of the mythical end-user programming thing I remember from Oracle 4GL and Ingres 4GL. That was going to be user programming, and, of course, that didn’t happen either. I do see the potential for a domain where it’s easier to change things and it’s more manageable, but I don’t see that suddenly enabling this big shift to business analysts doing the work -- just as we didn’t see it with UML or 4GLs.

Gardner: We’re not yet at the silver-bullet level here.

Kobielus: Neil hit the nail on the head here. Everybody thinks of simplicity in terms of, "Well, rather than write low-level code, people will draw high-level pictures of the actual business process, not that technical plumbing." And, voila! The infrastructure will make it happen, it will be beautiful, and the business analysts will drive it.

Neil alluded to the fact that these high-level business processes, though they can be drawn and developed in BPMN, or using flow charting and all kinds of visual tools, are still ferociously complex. Business process logic is quite complex in its own right, and it doesn’t simply get written by the business analyst. Rather, it gets written by teams of business and IT analysts, working hand in hand, in an iterative, painful process to iron out the kinks and then to govern or control changes, over time, to various iterations of these business processes.

This isn’t getting any simpler. In fact, the whole SOA governance -- the development side of the governance process -- is just an ongoing committee exercise of the IT geeks and the business analyst geeks getting together regularly and fighting it out, defining and redefining these complex flow charts.

Gardner: One of the points here is around how the plumbing relates to the process, and so it’s time and experience that ultimately will determine how well this process is defined. As you say, it’s iterative. It’s incremental. No one’s just going to sit there, write up the requirements, and it’s going to happen. But it’s the ability to take iterations and experience in real time and get the technology to keep up with you as you make those improvements that's part of the “promise” of SOA.

McKendrick: The collaboration is messy. You’re dealing with a situation where you’ve got collaboration among primarily two major groups of people who have not really worked a lot together in the past and don’t work that well together now.
Gardner: Well, that probably could be said about most activities from the last 150,000 years. All right, moving on to our next topic: IBM came out with its financials this week -- we’re talking about the week of January 15, 2007 -- and, once again, it had a strong showing in software. IBM posted 14 percent growth in software revenues, compared to the year-ago period. This would be for the fourth quarter of 2006, and that's compared to total income growth for the company of 11 percent -- with services growing 6 percent and hardware growing only 3 percent.

So, suddenly, software -- which does include a lot at IBM, but certainly gets a large contribution from WebSphere, middleware, and mainframe software -- is the story. Mainframes themselves are still growing, but not greatly -- 5 percent. Wow. The poster child at IBM is software. Who'd have thunk it? Anybody have a reaction to that?

Ward-Dutton: Of course, one of the things that's been driving IBM software growth has been acquisitions. I know I’m a bit behind the curve on this one, but the FileNet acquisition was due to close in the fourth quarter. If that did happen, then that probably had quite a big impact. I don’t know. Does anyone else know?

Gardner: I guess we’d have to do a bit more fine-tuning to see what contribution the new acquisitions made on a revenue basis, but with total income growth at a certain percentage and software outpacing it as a portion of that, I suppose that's the trend. Even so, if they’re buying their way into growth, software is becoming the differentiator and the growth opportunity for IT companies -- not hardware, and not necessarily even professional services.

That does point out that where companies are investing, where enterprises are investing, and where they're willing to pay for high margins rather than fall into a commoditization pattern -- which we might see in hardware -- is in software.

Kobielus: Keep in mind, though, that in the third and fourth quarters of 2006 IBM had some major product enhancements in the software space, and those were driving much of this revenue growth. In July, they released DB2 Version 9, formerly code-named Viper, and clearly they were making a lot of sales of new licenses for DB2 V9. Then, at the beginning of the fourth quarter, they released their new Data Integration Suite. That's not so much new product as enhancements to a variety of point integration tools they've had for a long time, including a lot of the software products they acquired with Ascential.

Gardner: That’s the ETL stuff, right?

Kobielus: Not only that, it's everything, Dana. It’s the ETL, the EII, the metadata, the data quality tools, and the data governance tools. It’s a lot of different things. Of course, they also acquired FileNet during that time. But in the late third quarter IBM also released at least a dozen linked solo-product upgrades, and those were clearly behind much of the revenue growth in the third and fourth quarters for the software group. In other words, the third and fourth quarters of this past year had announcements that IBM had primed the pump for in terms of customers’ expectations. And, clearly, there were a lot of pent-up orders in hand from customers who were screaming for those products.

Gardner: So you're saying this might be a cyclical effect -- that we shouldn't interpret the third- and fourth-quarter software growth as a long-term trend, but rather as a beneficial, though perhaps temporary, "bump in the road" for IBM.

Kobielus: Oh, yeah. Just like Microsoft is finally having a bump, now that it’s got Vista and all those other new products coming downstream. These few quarters are going to be a major bump for Microsoft, just like the last two were a major bump for IBM.

Gardner: Let’s take that emphasis you have pointed out, which I think is correct, on the issue of data -- the lifecycle of data, and how to free it and expose it to wider uses and productivity in the enterprise. IBM has invested quite a bit in that. We also heard an announcement this week from Hewlett-Packard that they are going to be moving more aggressively into business intelligence (BI) and data warehouse activities -- not necessarily trying to sell databases to people, but to show them how to extract, associate, and make more relevant the data they already have -- a metadata-focused set of announcements. Anyone have a reaction to that?

Garone: I don’t know too much about this announcement, but from what I’ve read it seems as if this is largely a services play. HP sees this as a professional services opportunity to work with customers to build these kinds of solutions, and there's certainly demand for that across the board. I’m not so sure this is as much a products play as it is a services play.

Kobielus: HP, in the fourth quarter of 2006, acquired a services company in the data warehousing and BI arena called Knightsbridge, and Knightsbridge has been driving HP's foray into the data warehousing market. But, also HP sees that it’s a major hardware vendor, just as Teradata and IBM are, and wants to get into that space. If you look at the growth in data warehousing and BI, these are practically the Number 1 software niches right now.

For HP it’s not so much a software play. They are partnering with a lot of software vendors to provide the various piece parts, such as overall Master Data Management (MDM), data warehousing, and business intelligence product sets. But, very clearly, HP sees this as a services play first and foremost. If you look at IBM, 50 percent of their revenues are now from the global services group, and a lot of the projects they are working on are data warehousing, and master data management, and data integration. HP covets all that.

They want to get into that space, and there’s definitely a lot of room for major powerhouse players like them to get into it. Also, very interestingly, NCR has announced in the past week or so that it’s going to spin off Teradata, which has been operating more or less on an arms-length basis for some time. Teradata has been, without a doubt, the fastest growing product group within NCR for a long time. They're probably Number 1 or a close Number 2 in the data warehousing arena. This whole data warehousing space is so lucrative, and clearly HP has been coveting it for a while. They’ve got a very good competency center in the form of Knightsbridge.

They have got a good platform, this Neoview product that they are just beginning to discuss with the analyst community. I’m trying to get some time on their schedule, because they really haven't made a formal announcement of Neoview. It’s something that’s been trickling out. I’ve taken various informal briefings for the last six months, and they let me in on a few things that they are doing in that regard, but HP has not really formally declared what its product road map is for data warehousing. I expect that will be imminent, because, among other things, there is a trade show in February in Las Vegas, the Data Warehousing Institute, and I’m assuming that they -- just like Teradata and the others -- will have major announcements to share with all of us at that time.

Gardner: Well, thanks for that overview. Anyone else have anything to offer on the role of data warehousing?

McKendrick: Something I always found kind of fascinating is that the purpose and challenges of data warehousing are very much parallel to those of SOA. The goal of data warehousing is to abstract data from various sources or silos across the enterprise and bring it all into one place. And the goal of SOA is to take these siloed applications, abstract them and make them available across the enterprise to users in a single place. The ROI formula interestingly is the same as well.

When you start a data warehouse, you’re pumping in a lot of money. Data warehouses aren't cheap. You need to take a single data source, apply the data warehouse to that, and as that begins to generate some success, you can then expand the warehouse to a second data source, and so forth. It’s very much the same as SOA.
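A minimal sketch of that incremental pattern, using Python's built-in sqlite3 as a stand-in warehouse. The table and column names here are hypothetical, and a real warehouse obviously involves far more modeling and data-quality work; the point is simply that a second silo can be folded in without reworking what the first one already delivers -- the same phased ROI curve being ascribed to SOA.

```python
# Illustrative only -- hypothetical schema and sources.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE warehouse_orders (source TEXT, customer TEXT, amount REAL)")

def load_source(name: str, rows: list) -> None:
    """Extract-transform-load for one silo: tag each row with its origin."""
    conn.executemany(
        "INSERT INTO warehouse_orders VALUES (?, ?, ?)",
        [(name, customer, amount) for customer, amount in rows],
    )

# Phase 1: start with a single data source and prove the value...
load_source("erp", [("Acme", 100.0), ("Globex", 250.0)])

# Phase 2: ...then expand to a second source once the first pays off.
load_source("crm", [("Acme", 75.0)])

for row in conn.execute(
    "SELECT customer, SUM(amount) FROM warehouse_orders GROUP BY customer"
):
    print(row)
```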

Kobielus: I agree wholeheartedly with that. Data warehouses are a platform for what’s called master data management. That's the term in the data-management arena that refers to a governance infrastructure to maintain control over the master reference data that you run your business on -- be it your customer data, your finance data, your product data, your supply chain data and so forth.

If you look at master data management, it’s very much SOA but in the data management arena. In other words, SOA is a paradigm about sharing and re-using critical corporate resources and governing all that. Well, what's the most critical corporate resource -- just about the most critical that everybody has? It's that gospel, that master reference data, that single version of the truth.

MDM needs data warehousing, and data warehousing very much depends on extremely scalable and reliable and robust platforms. That’s why you have these hardware vendors like HP, IBM, Teradata, and so forth, that are either major players already in data warehousing or realizing that they can take their scalable, parallel processing platforms, position them into this data warehousing and MDM market, and make great forays.

I don’t think HP, though, will become a major software player in its own right. It’s going to rely on third-party partners to provide much of the data integration fabric, much of the BI fabric, and much of the governance tooling that is needed for full blown MDM and data warehousing.
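As a rough illustration of that "single version of the truth" idea, here is a minimal, hypothetical consolidation rule of the sort an MDM hub might apply: the same customer shows up in several systems, and a survivorship rule decides which attribute wins in the master record. The field names and the most-recent-non-null-wins rule are assumptions for illustration only; production MDM layers matching, stewardship, and governance processes on top of anything this simple.

```python
# Illustrative only -- hypothetical source systems and fields.
from datetime import date

source_records = [
    {"system": "crm", "customer_id": "C-42", "email": "a.smith@example.com",
     "phone": None, "updated": date(2006, 11, 2)},
    {"system": "billing", "customer_id": "C-42", "email": "asmith@old.example.com",
     "phone": "555-0100", "updated": date(2006, 6, 17)},
]

def consolidate(records: list) -> dict:
    """Survivorship rule: the most recently updated non-null value wins per attribute."""
    ordered = sorted(records, key=lambda r: r["updated"], reverse=True)
    master = {"customer_id": ordered[0]["customer_id"]}
    for field in ("email", "phone"):
        master[field] = next((r[field] for r in ordered if r[field]), None)
    return master

print(consolidate(source_records))
```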

Gardner: Great. I'd like to thank our panel for another BriefingsDirect SOA Insights Edition, Volume 9. Steve Garone, Joe McKendrick, Neil Ward-Dutton, Jim Kobielus and myself, your moderator and host Dana Gardner. Thanks for joining, and come back next week.

If any of our listeners are interested in learning more about BriefingsDirect B2B informational podcasts, or would like to become a sponsor of this or other B2B podcasts, please feel free to contact me, Dana Gardner, at 603-528-2435.

Listen to the podcast here.

Transcript of Dana Gardner’s BriefingsDirect SOA Insights Edition, Vol. 9. Copyright Interarbor Solutions, LLC, 2005-2007. All rights reserved.