Saturday, May 03, 2008

BriefingsDirect Analyst Insights Podcast Examines WOA to SOA Continuum With Keen Eye on Cloud Computing

Edited transcript of periodic BriefingsDirect Analyst Insights Edition podcast, recorded April 24, 2008.

Listen to the podcast here. If you'd like to learn more about BriefingsDirect B2B informational podcasts, or to become a sponsor of this or other B2B podcasts, contact Interarbor Solutions at 603-528-2435.


Dana Gardner:
Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 28, a periodic discussion and dissection of software, services, cloud computing and related news and events with a panel of industry analysts and guests. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

Our distinguished panel this week -- and this is the week of April 21, 2008 -- consists of Joe McKendrick, an independent analyst and prolific blogger. Welcome back, Joe.

Joe McKendrick: Thanks, Dana, happy to be here.

Gardner: We’re also joined by Jim Kobielus, senior analyst at Forrester Research. How do you do, Jim?

Jim Kobielus: Hi, Dana. Hi, everybody. Glad to be back in the saddle.

Gardner: Also joining us this week, Tony Baer, principal at onStrategies and also a prolific blogger. Welcome, Tony.

Tony Baer: Hey, Dana, good to hear you again.

Gardner: Also joining us is Brad Shimmin, principal analyst at Current Analysis. Hello, Brad.

Brad Shimmin: Hey there, Dana. Thanks for having us. Good to be back on the air with you as well.

Gardner: And making his debut on this particular podcast, Phil Wainewright. He is an independent analyst, the director of Procullux Ventures and a ZDNet blogger. Welcome to the show, Phil.

Phil Wainewright: Glad to be here, Dana.

Gardner: Our discussion this week will focus on several recent news events. We've seen the Live Mesh announcement from Microsoft -- and that came on the heels of App Engine from Google.

So, we're going to look at those. We're also going to put this discussion in the context of some recent back-and-forth blogging and some discussion about leadership around the intersection of Web-oriented architecture (WOA), or "webby applications," and their environment and support technologies, as well as traditional service-oriented architecture (SOA).

I'm going to start with you, Tony. You just came out in the last day or two with a blog that referred to WOA as a "lowest common denominator." I wonder if you could help us understand what you mean by that.

Baer: My sense of it is that it's technologies that are basically extremely accessible and relatively simple, and you don't have a very complicated stack to wade through. They're technologies that have been around with Web developers for five to ten years, and, in that sense, it's very much like Ajax. These are technologies that were already there, and, guess what, we've found new ways to repurpose them.

We're using HTTP, plain old XML, and a RESTful style of service requests to essentially make the Web more dynamic, almost an application-centric environment. What it lacks is the perceived complexity of what would be considered capital-"S" SOA, which would be the Web services stack. I've lost count of the number of standards or proposed standards. I know on the Wikipedia page, they list about 80. I think Linthicum has quoted numbers close to a couple of hundred.

Again, it's part of a back-to-basics backlash against complexity, whether that's perceived correctly or not. Like Ajax, it's just a loosey-goosey collection of things that were already out there.

Gardner: So, from your reference point, "lowest common denominator" isn't necessarily derogatory or a bad thing, but is inclusive and perhaps a positive.

Baer: If it gets you the information you need, who cares about how ugly it is?
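To make the style Baer describes concrete, here is a minimal sketch of a RESTful, plain-XML-over-HTTP call in Python. The endpoint and document fields are hypothetical; the point is how little stack is involved -- a URL, a GET, and a parser.

```python
# A minimal sketch of the WOA style described above: plain HTTP, plain XML,
# and a RESTful request, with no WS-* stack in the way. The endpoint URL
# and the <order> fields are hypothetical, purely for illustration.
import urllib.request
import xml.etree.ElementTree as ET

def get_order(order_id: int) -> dict:
    """Fetch one resource with a plain HTTP GET and parse its XML payload."""
    url = f"https://example.com/api/orders/{order_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        doc = ET.fromstring(response.read())
    # Pull a couple of assumed fields out of the assumed <order> document.
    return {"id": doc.findtext("id"), "status": doc.findtext("status")}
```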

Gardner: Okay, let's take this to Brad Shimmin next. Brad, we have seen some opportunity now with public APIs, Web services, even platform-as-a-service (PaaS) offerings. We've all seen Live Mesh from Microsoft, which I guess you can call an interoperability-as-a-service function. What is the relationship between WOA and SOA? Are they exclusive, or are they separate? Can they overlap? How do you view that?

Shimmin: I just see the two as different sides of the same Rubik’s Cube that you can mix up and make into a complete absolute mess if you want to. Or, you can see them as highly simplified, depending on your viewpoint. If you are a developer, building an app that’s just going to run on the Web and you want to avail yourself of the benefits of SOA, you are probably going to want to use REST, because of its simplicity and applicability to services that are running on the wire or on the Web.

I see it as a continuum -- the Rubik's Cube continuum, if you will. But now things are coming out of Google and Microsoft, and I don't see those two as being competitive with one another, or even similar, but as representations of a very fast-moving, huge wave.

We're being inundated right now from a number of vendors who are really pushing hard to make use of the Web as not just a form of connectivity, but as an actual platform for the services and software that we consume -- whether you are a consumer using Live Mesh to synchronize your computers or you’re an IT department looking to extend your B2B network without having to go to a VAN provider.

Gardner: Now, Phil Wainewright, you cover software as a service (SaaS) diligently. I'm not wedded to the term WOA. I'm happy with "webby applications" or "web-facing technologies." Do you think that with these announcements from the cloud providers, you're better off availing yourself of them as an enterprise or a service-provider organization? Is there a benefit from a WOA perspective to absorbing and using these Web-based or cloud-based services, or are good old SOA technologies and approaches just as good?

Wainewright: We really shouldn’t try and separate these two phenomena, because they are two sides of the same coin or two facets of the same Rubik’s Cube. To deliver something using WOA, any kind of serious provider is going to need to instantiate that within a SOA. So, there’s going to be SOA underneath a WOA provider.

We're going to look a little bit stupid if we start to debate the difference between one artificial term that we have created and another artificial term we have created. Really, it is just about services that you use within a certain set of protocols, which are really based on the Web anyway.

We are realizing that what was talked about as SOA within the enterprise is actually something that we can do on the Web. If we do it in an environment where there are lots of different participants who all play very different roles, then you do need a lowest-common-denominator approach to put everything together.

That's why the emphasis now is on doing things using REST rather than SOAP, trying to keep it as simple as possible and as standards-based as possible, and exposing things in a simple way, rather than making it really complex.

Gardner: Now, Jim Kobielus, you've been known as a wordsmith. If we're using artificial designations with WOA and SOA, but we still want to recognize this lowest-common-denominator benefit that ties some of these together, what should we call this lowest-common-denominator approach?

Kobielus: Before I answer that, let me just peel the onion. I agree with what Phil Wainewright just said, which is that we’ll see SOA underneath a WOA provider. That is crystal clear to me. It’s already happening in my core area, which is data warehousing. The delivery layer or the front-end presentation layer of most business intelligence (BI) or data warehousing environments is going to go WOA or REST or Web 2.0, however you look at it.

So in the middle persistence layer, the primary interface is not so much SOA or WOA, but SQL, for querying data in databases, OLAP cubes, and so on. Then, what I call the "ingest layer" that extracts data from the sources and brings it into the data warehouses has a bit of SOA, a bit of ESB, a bit of EAI, and ETL.

So, looking at the big picture here, the whole notion is simply that cloud services is a convergence term, or should be, because the cloud that all of these paradigms inhabit is a multi-paradigm cloud, and they are co-existing in various ways. It's a semi-permeable membrane between these organisms that live in the same soup, or the same cloud.
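Kobielus's layering -- a webby delivery layer sitting over a SQL persistence layer -- can be made concrete with a small sketch. The following hypothetical example, using only the Python standard library, answers a RESTful GET by running plain SQL underneath; the table and URL layout are invented for illustration, not any vendor's product.

```python
# A hypothetical sketch of "SOA underneath a WOA provider" as applied to BI:
# a RESTful delivery layer on top, plain SQL persistence underneath.
import sqlite3
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Delivery layer: a webby, RESTful GET such as /metrics/revenue.
    metric = environ["PATH_INFO"].rsplit("/", 1)[-1]
    # Persistence layer: the primary interface here is SQL, not SOAP.
    with sqlite3.connect("warehouse.db") as db:
        rows = db.execute(
            "SELECT region, value FROM metrics WHERE name = ?", (metric,)
        ).fetchall()
    body = "\n".join(f"{region},{value}" for region, value in rows)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    # GET http://localhost:8000/metrics/revenue returns region,value rows.
    make_server("", 8000, app).serve_forever()
```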

Gardner: Maybe a common factor here is that we are extending and deepening the value we extract from widely embraced standards.

We wouldn't be able to extend the lowest common denominator -- the cloud, or improved SOA -- conceptually across more assets, resources, and middleware infrastructure, if we didn't have either de-facto or formally established standards.

Let's go to you, Joe McKendrick. How do you see this? Are we really talking about the result of a lot of standards work, and the acceptance of and need for standards, over the past 20 years?

McKendrick: Exactly, Dana. There are a lot of complaints about what's happening with standards these days. As Tony pointed out, there are anywhere from 80 to 200 standards evolving, particularly in the SOA-Web services space as we know it. But, looking back over a 20-year horizon, we see that things have come a long way. We have HTTP, for example, and the rise of the Web.

I'm going to borrow a bit from Dion Hinchcliffe. He calls the whole Web 2.0 cloud phenomenon "The Global SOA." We have the standards, and services built on those standards, out in the cloud, and that will essentially function as one humongous, universal SOA. Bringing that down to WOA, you could look at SOA, as it's enacted within organizations, as a WOA island, or an internal cloud.

I have heard the phrase "my cloud" applied to an internal instantiation. SOA essentially acts as an internal cloud within organizations. I think that's a good way to sell SOA. I know there has been a lot of difficulty selling the concept to the business, and you can explain to the business that SOA is actually cloud computing -- SaaS enacted within the organization. Your business units no longer have to worry about building their own services or their own interfaces. You have this secure cloud service that exists within the boundaries of your enterprise.

Gardner: Okay, so we have standards and, of course, the funny thing about standards is that some are more standard than others or more accepted than others. What I think I hear you saying is that there is this private cloud or SOA activity in an enterprise, and the standards by which that functions can be the choice of that organization or perhaps what they have been left with as a result of their legacy and on-going IT adoption over the years.

Then, there is this public cloud, WOA, or extended global SOA, which is based on those standards that are accepted by a larger group, perhaps from a social networking perspective, the anointed standards from the social technical graph. What do you think, Phil Wainewright, are we talking about sort of tiers of standards here?

Wainewright: Well, I was listening to what Joe said, and it kind of crystallized in my mind. WOA is actually SOA that works because, as you said, you can build a SOA in your own organization with your internally defined standards. I'm thinking back to the fact that SOA required standards so that two SOA implementations could work with each other.

These internal SOAs are actually nonsense, because they are totally internalized and can't interact with the outside world. If you want to take advantage of all the resources that are out on the Web, if you want to interact with people in other organizations or with computer systems or database resources that other organizations are making available, you have to go to WOA.

That's the SOA that works. The reason it works is because it’s been implemented using standards that everyone actually understands and haven’t got the latitude to define for themselves.

Perhaps this is the moment when all those kind of people who have been building all these wonderful SOA infrastructures within their organizations -- for whatever it was they thought people were going to do at the end of the day -- are really going to meet their nemesis.

Gardner: Okay, so we have a set of SOA principles and standards that have a certain internal, maybe even extranet, type of flavor. But in order for those islands to work well with other islands, or to avail themselves of highly cost-efficient cloud services made available by such notables as Amazon, Google, Microsoft, IBM, eBay, EMC, and Apple, you need to go to this higher-common-denominator, accepted level of standards.

Tony Baer, do you think we're getting close to a proper understanding of WOA and SOA, and of what should come next?

Baer: The way Phil characterized WOA as "SOA that works" and the way Joe characterized SOA as basically the internal cloud kind of rang bells here. It was like, "Aha! Yeah, that's really what it's about." It seems like what we are sailing into is these tiers of granularity.

If you are going out to the wider world, you go out with the standards that have already been there for years, where we don't need a learning curve. And it makes sense, because the more sources you deal with, the lower your common denominator has to be. You basically need to widen the gate there. The way Joe and Phil put it really sums up how these are settling out. So, that makes a lot of sense.

Gardner: Alright, so perhaps from the user-centric, developer-centric, and even the disrupter-centric viewpoint, that being the cloud, the new cloud providers are happy to embrace these more open or common-denominator standards. On the other hand, there are vendors who are established that have incumbency and perhaps have business models to protect.

So, there could be some tension here between the SOA as an internal cloud and the WOA as the more external, more highly interoperable cloud. What do you make of that, Brad Shimmin? Are we are going to have some tension here between the incumbents and the user/developers/disrupters?

Shimmin: If IBM is any indicator of things to come, I would say no. It's simply going to be the established firms taking advantage of the situation, and partnering with others or building up their own infrastructures for delivering RESTful Web-service applications.

Gardner: It doesn’t seem like they have too much of a choice, right?

Shimmin: Absolutely not, and if you look at companies like BEA -- Cape Clear was actually one of the first -- they're in the SOA application-infrastructure space. They are climbing over themselves to REST-enable all of their APIs, strangely enough, starting with their governance tools and then moving to their more messaging-oriented software.

So they are REST-enabling all their software, actively partnering with service providers to help them, and enabling ISVs to build apps that use their technologies. BEA and Progress Software have well-established ISV programs for customers to build out these SaaS apps.

Gardner: Alright, I think we're looking at a period of some disruption, particularly on the business-model side. There is this great sucking sound, as the Web as a platform defines what is productive and what can be done cost-effectively.

You can produce and put apps up on Amazon or Google or other alternatives. That’s kind of an offer you can’t refuse, if you are a startup or if you are an internal development organization within an enterprise and you have limited funds, but you want to accomplish something. These are very enticing opportunities to take the logic to the cloud, perhaps even do the tooling and development in the cloud, produce something, and then pay for that as it produces revenue or is in demand.

So, given the disruption on the business-model side -- again, good news for developers, users, and disrupters -- isn't there a risk, if this happens too quickly, for folks like IBM, TIBCO, and perhaps SAP? What do you think about that, Phil?

Wainewright: One of the things it's going to expose is that the WOA world is still in its early days, and there is quite a lot that the providers have got to get right in terms of service-level commitments, in terms of how they bill for their services, how they establish the robustness or reliability of a data feed, and, of course, the security stuff.

Before these providers become highly competitive with the established enterprise vendors, there is some work that has to be done. Having said that, if you look at the more established players like Intuit QuickBase and salesforce.com, as well as the attractions of doing stuff on Amazon EC2, a fair bit of enterprise use is already being made of these capabilities.

Gardner: So perhaps there's a period of some mutual harmony, with the older providers continuing to provide services value -- perhaps maintenance and ongoing technical support for the SOA cloud, as Joe referred to it -- but also new opportunity and competition in the higher-level cloud. Let's go back to Tony Baer. What are we missing in all this? Is there something that needs to happen in order for harmony between the WOA providers and the SOA providers?

Baer: I think Phil was hinting at that. I was just thinking about what was supposed to be the appeal. Part of the problem of the Web services stack is that it's very ambitious in terms of what it tries to accomplish within that technology stack. Not only did it provide a service requester-provider conversation infrastructure, but it also tried to internalize all the types of security, reliability, and transactionality that traditionally were internal to application or database silos.

The need for those services and those guarantees of robustness doesn't go away, but the question is, where do you implement them? It's one thing to request a transaction that's not humongously mission-critical. But, at some point, you're going to need to ensure that the requester is authenticated and authorized. If you're going to make this a business, you have to ensure that you maintain service levels. This is very data- and transaction-focused. You need those transaction guarantees, guarantees of rollback, and so on.
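The rollback guarantee Baer mentions is easy to illustrate. Below is a minimal sketch, assuming a hypothetical two-account schema: either both legs of a transfer commit or neither does.

```python
# A minimal sketch of the rollback guarantee referred to above: a two-leg
# transfer either commits as a whole or not at all. Schema is hypothetical.
import sqlite3

def transfer(db: sqlite3.Connection, src: int, dst: int, amount: int) -> None:
    # Used as a context manager, the connection commits on success and
    # rolls back automatically if any statement inside raises an error.
    with db:
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, src))
        db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                   (amount, dst))
```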

I still haven’t figured out the answers to this yet, but my sense is that, if we are going to try and do this and do this on top of a lowest common denominator stack, you can't expect to internalize that within that lowest common denominator stack. You have to apply this externally.

A harbinger of that kind of approach is in some of these new enterprise mashup sandboxes that folks like IBM and Serena and a whole bunch of others are now trying to set up. The recognition is: within the sandbox, we will make it easy for you to do what you want to do, but we will put external controls around it to make it safe for enterprise consumption.

I don't have the answers to how this is going to happen, but controls over service levels, security and all that sort of thing will have to be externalized to the WOA stack, if you want to call it a stack.

Gardner: So, clearly there's a set of issues that needs to be thought through, and those are thorny -- around transactional activities, mission criticality, and guaranteed delivery. Performance levels need to be maintained, and there will be compliance and regulatory impacts in that space as well. So let's talk about data.

Let's go back to Jim Kobielus. On the issue of data, when it comes to the lowest common denominator or cloud-based approaches, isn't there an opportunity for data to become a little bit more inclusive of WOA? Can we exploit the benefits of the cloud -- either services or repositories in the cloud, and virtualized data repositories -- before we have to deal with the thorny transactional set?

Kobielus: Right. In the discussion we've had on SOA versus WOA, I've seen everybody tune into the issue of transactional applications as the primary focus of much of what's going on. In terms of analytical applications -- the analytical data sets and where they are hosted in the cloud -- that's a big virgin territory that's beginning to be opened up by, among others, Microsoft.

I was just on the phone with Microsoft yesterday about SQL Server Data Services, basically a database-in-the-cloud offering, which is in limited beta. They plan to go into production in 2009. They were keying into an issue that I heard Tony talk about a moment ago: as you externalize more of these sources into the cloud in a SaaS environment, the controls, whether internal to the cloud or external, are critically important.

Right now, SQL Server Data Services is just a subset of the premises-based SQL Server 2005 functionality. Microsoft recognizes that, as they bring it along toward production, they are going to need to build in the 24/7 service-level guarantees and all of the security and other mission-critical features that customers have come to demand in the premises-based version of that particular database.

So, as you go out into the cloud, that's a huge open issue. First and foremost is what I call DW 2.0, lightweight data warehousing. Microsoft is not the only one in this space; there is Zoho and a few others. It's very lightweight -- not really mission- or enterprise-grade data warehousing capability -- hosting structured data sets in the cloud and then making them available for analytics such as reporting and dashboards. It will take a while for this to play out, before these cloud-based data warehouses achieve some degree of functional parity and robustness comparable to the premises-based offerings that enterprises everywhere have already implemented.

Gardner: Right, but part of the rationale for embracing the Web tier, the cloud tier, of interoperability is that you can play across ecologies, be inclusive of more partners, and allow the SOAs and application sets within organizations to be more interoperable. Then it's not just going to be analytics. Why not put data in the cloud, so you can share certain data on a privileged basis with other people and create layers of metadata that can then make for highly productive business processes?

Kobielus: Exactly. Microsoft is positioning SQL Server Data Services (SSDS) supposedly for B2B integration scenarios and also for the mid-market, but is very much focused primarily right now on transactional applications, database applications and so forth.

Microsoft has very much bought into the whole data services vision. They have a very strong one going forward. With WOA, they're at the front and center of it. When I say "front and center," it's in that delivery, access, presentation, sharing, and synchronization layer, leveraging things like Live Mesh and so forth, going forward in their road map.

So, WOA is very big on the front end. On the back end of SSDS, they are very keen and hot on SOA and everything SOA implies. But they're not really keen on exposing all of that SOA natively to their target customer, which is a mid-market or small company that doesn't have the technical resources or the skills to do programming against a full SOA stack. They prefer to virtualize all of that stuff and have WOA be the simple front end.

Gardner: Now, when you talk about standards and Microsoft, we also have to look at tradition, with Microsoft wanting to establish its own standards, ones that continue and extend its strength into other areas. I think we've seen another example of that most recently with Live Mesh, in that it's got interoperability across devices and two-way communication using RSS and Atom, and other technologies that we would consider WOA technologies -- but, again, applied to a subset of the overall device environment, or the software and standards environment.
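The feed-based, two-way style Gardner alludes to can be sketched simply. The following hypothetical example polls an Atom feed for entries changed since a last check; it illustrates the general RSS/Atom synchronization technique, not Live Mesh's actual wire protocol.

```python
# A hypothetical sketch of feed-based synchronization: poll an Atom feed
# and report entries updated since the last check.
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace prefix

def changed_entries(feed_url: str, since: str) -> list:
    """Return titles of entries whose <updated> stamp is newer than `since`."""
    with urllib.request.urlopen(feed_url) as response:
        feed = ET.fromstring(response.read())
    # Atom timestamps are RFC 3339; for UTC ("Z") stamps, plain string
    # comparison orders them chronologically.
    return [entry.findtext(ATOM + "title")
            for entry in feed.iter(ATOM + "entry")
            if (entry.findtext(ATOM + "updated") or "") > since]
```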

So, let’s go to Joe McKendrick. Do you think that, as Microsoft moves into the cloud, it’s going to fully embrace the lowest common denominator, or perhaps attempt to take its platform approach and extend it into the Web?

McKendrick: Well, you can definitely see Microsoft moving in that direction. Jim made some excellent observations about what's going on in the SQL Server space. Microsoft is in a difficult position here, because most of its revenues come off of the traditional, onsite, resident software stack -- Windows, Windows Server and the Windows client, the Office suite, SQL Server onsite. If you look at its revenue picture, billions and billions come out of that stack.

So Microsoft has to be out there, talking about changing the paradigm of computing, which runs against its revenue stream.

Gardner: Alright, I agree with you 100 percent. Let's put that in the context of its pursuit of Yahoo. Microsoft is seeking to buy Yahoo for approximately $40 billion, and Yahoo doesn't want it to.

Brad Shimmin, if you recall, when this was announced on February 1, people were scratching their heads and saying, "What, is Microsoft crazy? What are they doing?" I think that over the last couple of months, it has started to make more sense. Do you think that the Yahoo component can help Microsoft bridge this crosscurrent, as Joe described it?

Shimmin: I do, and it's interesting, isn't it? When that was first announced, I think most people thought this was an advertising play and a play for eyeballs, but after looking at releases like Live Mesh, it seems it's really more about the connectivity services that already exist within Yahoo. They have a broad audience that Microsoft would like to capitalize on because, when you look at the numbers between MSN and Yahoo, it's a staggering difference, both in terms of the people who use those services and the number of services available.

For example, the Yahoo mashup server, or mashup tool, that they have is one of those things that Microsoft could easily pull into Live Mesh and make a part of that. Live Mesh, in a way, is a response to some of that. Steve Ballmer, a week or so ago, said something like, "The desktop doesn't matter anymore. It's the network that matters." This harkens back to something Sun Microsystems said some 20-odd years ago.

Gardner: And what about Ken Olsen? He said that the PC was a toy, and he was ridiculed for it, but we are beginning to come back to that, aren't we?

Shimmin: Exactly. It has come full circle. Microsoft is running back to the past, if you will, but, as we were just talking about, they have a significant investment that they need to protect. When I look at Live Mesh, I see it as a protection of that system -- of the desktop and the applications themselves. It is really what Apple has been doing for the last three or four years with OS X and their sync technology, just expanding it a little bit and dropping some Atom and RSS on top of it.

Gardner: Kind of Hailstorm, Chapter Two?

Shimmin: Right. When you look at what they have been doing with their Office applications, in terms of enabling those to utilize the Web, they have been extremely slow in doing so, and are just now picking up where companies like Zoho have gone light-years beyond.

Gardner: You hit upon an interesting issue, Brad, in discussing the fact that the cloud-compute value isn't just in low cost per compute tick or per storage-hour. There is also this notion of an audience, and the metadata associated with that audience. All those end users can be provisioned, perhaps quite powerfully and at massive scale -- scaling both up and down with the size of the possible audience -- and the granularity of the service provisioned to them is another value that Microsoft perhaps sees in Yahoo.

Let’s go to Phil Wainewright with this. Are we talking about cloud computing, not so much functionally, but as a way of bringing together applications, companies, services and the end-users?

Wainewright: I think there is a social dimension to cloud computing, because there is a social dimension to the Web. Looking at it from a WOA perspective -- I do hate these acronyms, especially when we start to turn them into words, but anyway, let's go with that.

When we look at it from a WOA perspective, what we're actually doing is looking at it also from a social and a user perspective. SOA always tended to be about linking our systems together, and, once we'd done that, asking what users actually wanted; the business case was often quite a long way back in the thinking.

So, cloud computing is good because you have to put the service out there. You have to think about where the people are going to use it. The problem I have with a lot of the kind of Web 2.0 space is that getting eyeballs is the name of the game, and people aren’t really thinking about what the commercial proposition is and in what way are you actually delivering value and making revenue.

That's why I'm a little skeptical about how much value there is for Microsoft in the Yahoo acquisition, because I hear Steve Ballmer and other Microsoft leaders talking about advertising as a major revenue stream.

The amount of money that is available in advertising is virtually nothing compared to the amount of money that is available in transactions as a whole, if you are providing value to businesses. I think there is a great deal more revenue potential there than in simply enabling businesses to get close to consumers.

Gardner: Okay. You put your finger on something here. It's not just the advertising revenue, but the potential transactional revenue from linking up these constituencies, providing the scale up and down, and giving developers the opportunity to create original applications to feed this kind of cycle. If somebody takes a portion of the transactions across these activities, that's a much more sizable market than the advertising market.

Gardner: Anybody want to react to that?

Baer: I definitely agree that transactions are where the money is, but I think that we need to be careful not to fall into the trap of these B2B exchanges of about nine or ten years ago, when we thought that that was going to be the future of commerce. Instead, it was basically trying to institute a practice that was going against 20 years of supply-chain management and partner management trends. I think we need to watch out there to avoid getting ahead of ourselves with the hype.

Gardner: You've also put your finger on something. What will be the future of commerce in the WOA-SOA linked world, where internal business networks and resources and assets can, at a highly automated level, with full scaling, security and reliability, start interacting with the cloud and therefore, with the end-users. It's really an automation of commerce. What's going to come next? Any idea?

Kobielus: I agree with everybody that Microsoft needs to pursue Yahoo just to keep building up that audience, because obviously it's an eyeballs play in the whole Web 2.0, e-commerce arena.

Gardner: And that’s because they missed the boat on search, right?

Kobielus: Right. If you look at it -- and I agree with what everybody said -- the money is to be made from connecting those eyeballs and wallets to transactions. That's where the Web 2.0 money will primarily be made.

In a sense, tracking eyeballs is instrumental both to connecting those wallets to transactions and to connecting those eyeballs, and the brains behind them, to the intelligence and the analytics that are out there, and selling that as a service as well. Microsoft would be providing not only business intelligence, but market intelligence, consumer intelligence, and so forth into the cloud.

McKendrick: Let me add to that. It's going to be changing the nature of organizations internally as well, not only on a B2B basis. The dot-coms that you saw arise in the 1990s all had to buy their infrastructure. They needed to buy Sun servers and everything else to support their operations. Now startups especially can just tap into the infrastructure and services that are available across the cloud and offer up these services to their consumers.

I'll leave you with an example. If you look at the Amazon Web Services site, look at some of their case studies. There is a company -- a podcast service company, as a matter of fact -- Online Podcast Service, that was able to start up with a full-fledged infrastructure that cost a total of $82 for the first two months, for storage, processing, messaging, everything they needed.

Gardner: I agree. There is a whole middle layer of applications and services right before an explosion -- sort of what we are seeing on our trees and in our lawns these days in April -- that can create a very fertile environment. But that environment exists within the confines of someone's cloud. That cloud can interoperate significantly, but the metadata that ties the constituencies together can be manifested across these relationships at a price.

McKendrick: Dana, you just sent a shiver up my backbone here, because what I see in my lawn are tons of mushrooms. So, when I put mushrooms and clouds in the same sentence, I have a shiver running down my spine.

Gardner: Maybe the English language isn’t sufficient to keep up with the technology concept?

Shimmin: I am disposed to use WOA as a word, I guess. I've seen two examples in the last couple of days that speak to what seems to be a growing future for this sort of commerce that you are talking about. I don't know if you got to see this, but Sun's Solaris On Demand program, which they launched a couple of days ago, is really about enabling ISVs to take their applications and host them on either Sun's network or Sun's partners' networks. In either case, Sun takes a cut from both the partner and the ISV for being the middleman -- the provider of some technology and some supported services for those other datacenters.

Gardner: We're going to have to wrap this up pretty quickly now, but we haven't discussed the whole mobile tier, and the fact that many more consumers continuously entering metadata about their wants, needs, and desires -- and what they are willing to pay money for -- will do so through a mobile device, not necessarily a PC or even a browser. This offers a much larger global audience potential for what these clouds can pull off. Does anybody want to discuss, very quickly, the mobile-edge impact on this before we close up for the day?

Wainewright: I have a couple of observations. The mobile Web is not going to exist as a separate thing. It's just going to be the Web as we experience it on PCs.

Gardner: Right.

Wainewright: So, that's interesting for two reasons. Yes, it's going to be available on mobile devices. We are going to hook into it on a mobile, but it's going to look much more like the Web that we are already used to, rather than some kind of completely separate thing.

Gardner: Excellent.

Kobielus: A lot like Live Mesh.

Wainewright: Indeed, and I think the genius of Live Mesh is that Microsoft has really created a bridge that enables it to take the cloud -- rather than leaving it as something that's up there on the Web -- and bring it down to envelop the desktop as well, which then gives it a transition bridge to bring its own products into the cloud.

Gardner: But in doing so, it takes a certain risk by not being inclusive of all the different permutations of how that could be done.

Wainewright: I think it's going to add more permutations. It has come out first with a Microsoft-only implementation, because this is the first release. We will see how much effort it puts into making Mesh available on other platforms; that's going to be a big test of how successful it will be in the long term.

Gardner: Well, thanks. We're going to have to close it out here. Tony Baer and I had a quick discussion at IBM Impact a couple of weeks ago about how the thought process is so much richer when you’ve got a group of bright and educated people like you all.

So I appreciate the group thing. I think we have been able to solidify and even move the needle a little bit on some of these concepts. I hope the readers and listeners appreciate that. So, I want to thank our panel. Once again we have had Joe McKendrick. Thanks, Joe.

McKendrick: Thanks, Dana. Glad to be here.

Gardner: Jim Kobielus. Thanks, Jim.

Kobielus: Oh, it’s been a pleasure once again.

Gardner: Tony Baer, I appreciate your input.

Baer: Nice to be back.

Gardner: Great to have you here, Brad Shimmin.

Shimmin: Thank you, Dana.

Gardner: Well, once again, welcome to Phil Wainewright, and we certainly hope you come back again.

Wainewright: Thanks very much, Dana. It’s been a pleasure being here.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to the latest BriefingsDirect Analyst Insights edition, Volume 28. Please come back and listen next time.

Listen to the podcast here.

Edited transcript of software services, cloud computing and related trends and analysis discussion. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Sunday, April 27, 2008

HP Creates Security Reference Model to Better Manage Enterprise Information Risk

Transcript of BriefingsDirect podcast on best practices for integrated management of security, risk and compliance approaches.

Listen to the podcast here. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about risk, security, and management in the world’s largest organizations. We're going to talk about the need for verifiable best practices, common practices, and common controls at a high level.

The idea is for management of processes, and the ability to prevent unknown and undesirable outcomes -- not at the silo level, or the instance-level of security breaches that we hear about in the news. We will focus instead on what security requires at the high level of business process.

These processes have been newly managed through Information Security Service Management (ISSM) approaches, and there is a reference model (ISSM RM) that goes along with it.

To help us learn more about ISSM, we are joined by two Hewlett-Packard (HP) executives. We are going to be talking with Tari Schreider, the chief security architect in the Americas Security Practice within HP's Consulting & Integration (C&I) unit.

Also joining us to help us understand ISSM is John Carchide, the worldwide governance solutions manager in the Security and Risk Management Practice within HP C&I. Welcome to you both.

Tari Schreider: Thank you.

John Carchide: Thank you, Dana.

Gardner: John, we have a lot of compliance and regulations to be concerned about. We are in an age where there is so much exposure to networks and the World Wide Web. When something goes wrong, and the word gets out -- it gets out in a big way.

Help us to understand the problem. Then perhaps we'll begin to get closer to the solutions for mitigating risk at the conceptual and practical levels.

Carchide: Part of the problem, Dana, is that we've had several highly publicized incidents where certain things have happened that have prompted regulatory actions by local, state, and foreign governments. They are developing standards, defining best practices, and defining what they call control objectives and detailed controls for one to comply with, prior to being a viable entity within an industry.

These regulatory requirements are coming at us from all directions. Senior management is currently struggling, because now they have added personal liability and fines associated with this as each event occurs, like the TJ Maxx event. The industry is being inundated with compliance and regulatory requirements.

On the other side of this, there are some industry driving forces, like Visa, which has established standards and requirements: if you want to do business with Visa, you need to be Payment Card Industry (PCI) compliant.

All these requirements are hitting senior-level managers within organizations, and they're looking at their IT environment and asking their management teams to address compliance. “Are we compliant?” The answers they're getting are usually vague, and that’s because of the standards.

What Tari Schreider has done is establish a process of defining requirements, based on open standards, and mapping them to risk levels and maturity levels. This provides customers with a clear, succinct, and articulated picture. This tells them what their current state is, what they are doing well, what they are not doing well, where they're in compliance, where they're not in compliance. And it helps them to build the controls in a very logical and systematic way to bring them into compliance.

In my 32 years of security experience, Tari is one of the most forward-thinking individuals I've met. It gives me nothing but great pleasure to bring Tari to a much larger audience so he can share his vision.

Information Security Service Management is his vision, his brainchild. We've invested heavily, and will continue to, in the development and maturity of this process. It incorporates all of HP’s services from the C&I organizations and others. It takes HP’s best practices, methodologies, and proven processes, and incorporates them into a solution for a customer.

So, I would like to introduce everyone to the ISSM godfather, Tari Schreider -- probably one of the most innovative individuals you will ever have the privilege of meeting.

Gardner: Thank you, John. Tari, that’s a lot to live up to. Tell us a little bit about how you actually got started in this? How did you end up being the “godfather” of ISSM?

Schreider: Well, let me compose myself from that introduction. When I joined the Security Practice, we would make sales calls to some of HP’s largest customers. Although we were always viewed as great technologists and operationally competent providers of products and services, we weren’t really viewed -- or weren’t on the radar screen -- as a security service provider, or even a security consulting organization.

Through close alignment with the financial services vertical -- because they had basically heard the same message -- we came up with a strategy where we would go out to the top 30 or so financial services clients and talk with them.

"What is it that you're looking for? Where would you like to see us provide leadership? Where do you see us as a component provider of security services? What level do you view us playing at?"

We took that information, went throughout HP, and invited individuals that we felt were thought leaders within the organization. We invited people from the CTO’s office, from HP Labs, from financial services, worldwide security, as well as representation from a number of senior solution architects.

We got together in Chicago for what we look back on and refer to as the "Chicago Sessions." We hammered out a framework based upon some early work that was done principally in control assessments, building on top of that, and leveraging experiences with delivery in terms of what worked and what didn’t.

We started off with what was referred to then as the "building of the house" and the "blueprint." Then, over the last couple of years, as we have delivered and worked with various parts of the organization, as well as clients, we realized that one of the success factors that we would have to quickly align ourselves with was the momentum that we had with HP’s ITSM, now called Service Management Framework. We had to articulate security as a security service management function within that stack. It really came together when we started viewing security as an end-to-end operational process.

Gardner: What happened that required this to become more of a top-down approach? In John’s introduction, it sounded as if there was a lot of history, where a CIO or an executive would just ask for reports, and the information would flow from the bottom on up.

It sounds like something happened at some point where that was no longer tenable, that the complexity and the issues had outgrown that type of an approach. What happened to make compliance require a top-down, systemic approach?

Schreider: One problem that we were constantly faced with was that clients were asking us, "Where is your thought leadership on security? We know we bring you in here when we have to fix security vulnerabilities on the server, and we get that. We know that you know what you are doing and you're competent there. But frankly, we don’t know what it is that you do. We don’t know the value that you can bring to the table. When we invite you in, you come in with a slide deck full of products. Pretty much, you are like everybody else. So where is your thought leadership?"

Because nobody will ever argue that HP isn't an operations- and process-oriented company, we wanted to leverage that. What we wanted to do was stop the assessment-and-reporting bureaucracy that CIOs, CSOs, and CFOs were caught in because of Sarbanes-Oxley and so forth, and provide real meat to their information security programs.

The problem was, we had some very large customers that we were losing to competition, because we basically ran out of things to sell them -- only because we didn’t know we had anything to sell them. We had all of this knowledge. We had all of this legacy of doing security in technology for 20 or 30 years, and we didn’t know how to articulate it.

So we formulated this into a reference model, the Information Security Service Management Reference Model, where it would basically serve as an umbrella, by which all of the pillars of security for trusted infrastructure and proactive security management -- and identity and access management, and governance and so forth -- would be showcased under this thought leadership umbrella.

It got us invited into the door, with things like, "You guys are a breath of fresh air. We have all of these Big Four accounting firm-type organizations. They are burying us in reports. And at the end of the day we still fail audits and nothing gets done."

Gardner: I know this is a large and complex topic -- common security and risk-management controls -- but in a nutshell, or as simply as we can for those folks who might be coming to this from a different perspective: What is ISSM, and what does it mean conceptually?

Schreider: Well, if you look at ISSM, it's very specifically referred to as the Information Security Service Management Reference Model. It is several things: a framework, an architecture, a model, and a methodology. It's a manner in which you can take an information security program and turn it into a process-driven system within your organization.

That provides you with a better level of security alignment with the business objectives of your organization. It positions security as a driver for IT business-process improvement. It reduces the amount of operational risk, which ensures a higher degree of continuity of business operations. It's instrumental in uncovering inadequate or failing internal processes, which staves off security breaches, and it also turns security into a highly leveraged, high-value process within your organization.

Gardner: This becomes, in effect, a core competency with a command and control structure, rather than something that’s done ad hoc?

Schreider: Absolutely. The other aspect is that through the definition of linked attributes, which we can talk about later, it allows you to actually make security sticky to other business processes.

If you're a financial institution, and you are going to have Web-based banking, it gives you the ability to have sticky security controls, rather than “stovepipes.”

If you're in the utility industry, and you have to comply with North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) regulations, it gives you the ability to have sticky security controls around all of your critical cyber assets. Today, they're simply security controls buried in some spreadsheet or Word document, and there is really no way to manage the behavior of those controls.

Gardner: Why don’t we then just name somebody the “Chief Risk Officer” and tell them to pull this all together and organize it in such a way that this is no longer just piecemeal? Is that enough or does something bigger or more methodological have to take place as well?

Schreider: What’s important to understand is that all of our clients represent fairly large global concerns with thousands of employees and billions of dollars in revenue, and with many demands on their day-to-day operations. A lot of them have done some things for security over time.

Pulling the risk manager aside and leaving him with the impression that everything they are doing, they are doing wrong, is probably not the best course. We've recognized that through trial and error.

We want to work with that individual and position the ISSM Reference Model as the middle layer, which is typically missing, to pull together all the pieces of their disparate security programs, tools, policies, and processes in an end-to-end system.

Gardner: It sounds as if we really need to look at security and risk in a whole new way.

Schreider: I believe we do. And this is key because what differentiates us from our contemporaries is that we are now “operationalizing” security as a process or a workflow.

Many times, when we pull up The Wall Street Journal or Information Week, and we read about a breach of security -- the proverbial tape rolling off the back of the truck with all of the Social Security numbers -- we find that, when you look at the morphology of that security breach, it’s not necessarily that a product failed. It’s not necessarily that an individual failed. It’s that the process failed. There was no end-to-end workflow and nobody understood where the break points were in the process.

Our unique methodology, which includes a number of frameworks and models, has a component called the P5 Model, where every control has five basic properties, as sketched in the code after this list:
  • Property 1 -- People: people have to be assigned to the control.
  • Property 2 -- Policies: there has to be clear and unambiguous governance in order for controls to work.
  • Property 3 -- Processes: an end-to-end workflow, where everyone understands where the touch points are.
  • Property 4 -- Products: technology, in many cases, has to be applied to these controls to bring them to life and have them function appropriately.
  • Property 5 -- Proof: there have to be proof points to demonstrate that all of this is actually working as prescribed by a standard, a regulation, or a best practice.
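One way to render the P5 Model concretely is to treat each control as a record that carries all five properties, so that a missing property shows up as a gap. The sketch below is hypothetical; the class and field names are invented for illustration and are not HP's actual ISSM tooling.

```python
# A hypothetical rendering of the P5 Model: every security control carries
# all five properties, and an empty property shows up as a gap.
from dataclasses import dataclass, field

P5 = ("people", "policies", "processes", "products", "proof")

@dataclass
class Control:
    name: str
    people: list = field(default_factory=list)     # P1: who operates the control
    policies: list = field(default_factory=list)   # P2: governing policy documents
    processes: list = field(default_factory=list)  # P3: end-to-end workflow steps
    products: list = field(default_factory=list)   # P4: enabling technology
    proof: list = field(default_factory=list)      # P5: evidence the control works

    def gaps(self):
        """List which of the five properties are still unfilled."""
        return [p for p in P5 if not getattr(self, p)]

# Usage: a control with people and a product, but no policy, process, or proof.
ctrl = Control("Remote access review", people=["IT ops"], products=["VPN logs"])
print(ctrl.gaps())  # ['policies', 'processes', 'proof']
```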
Gardner: It seems that you are weaving this together so that you get a number of checks and balances, backstops and redundancies -- so that there aren’t unforeseen holes through which these risky practices might fall.

Schreider: I couldn’t say it any better than that.

Gardner: How do I know that I am a company that needs this? Maybe I am of the impression that, "Well, I've done a lot. I've complied and studied and I've got my reports."

Are there any telltale signs that an organization needs to shift the way they are thinking about holistic security and compliance?

Schreider: I'm often asked that question. When I sit down with CFOs or CIOs or business-unit stakeholders, I can ask one question that will be a telltale sign of whether they have a well-managed, continuously improving information security program. That question is, "How much did you spend on security last year?" Then I just shut up.

Gardner: And they don’t have an answer for it at all?

Schreider: They don't have any answer. If you don’t know what you are spending on security, then you actually don’t know what you are doing for security. It starts from there.

Gardner: That’s because these measures are scattered around in a variety of budgets. And, as you say, they evolve through a “siloed” approach. It was, "Okay, we've got to put a band-aid here, a band-aid there. We need to react to this." Over time, however, you've just got a hairball, rather than a concerted, organized, principled approach.

Schreider: That's correct, Dana. As a matter of fact, we have a number of tools in our methodology that expose this fragmented approach to security. Within the Property 4 portion of the P5 Model, we have a tool that allows us to go in and inventory all of the products that an organization has.

Then we map that to things like the Open Systems Interconnection (OSI) Reference Model for security -- on a layered, "defense in depth" approach, an investment approach, a risk and threat-model approach, and by ownership.

When they see the results of that, they say, "Wait a second. I thought we only had 10 or 12 security products, and I manage that." We show them that they actually have 40, 50, or 60, because they're spread throughout the organization, and there's a tremendous amount of duplication.

It’s not unusual for us to present back to a client that they have three or four different identity management systems that they never knew about. They might have four or five disparate identity stores spread throughout the organization. If you don’t know it and if you can’t see it, you can’t manage it.
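The inventory-and-mapping exercise Schreider describes can be sketched in a few lines: group the products that turn up by function, and duplication falls out of the grouping. The product names and categories below are invented for illustration, not drawn from HP's actual tooling.

```python
# A hypothetical sketch of the duplication check described above: inventory
# security products, group them by function, and flag overlap.
from collections import defaultdict

inventory = [
    ("DirectoryX", "identity management"),   # invented product names
    ("AccessOne", "identity management"),
    ("PerimeterFW", "network firewall"),
    ("EdgeGuard", "network firewall"),
    ("LogVault", "audit logging"),
]

by_function = defaultdict(list)
for product, function in inventory:
    by_function[function].append(product)

for function, products in sorted(by_function.items()):
    if len(products) > 1:  # more than one product doing the same job
        print(f"duplication in {function}: {', '.join(products)}")
```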

Gardner: Now, it sounds as if, from an organizational and a power-structure perspective, this could organize itself in several places. It could be a function within IT, or within a higher accounting or auditing level or capability.

Does it matter, or is there high variability from organization to organization as to where the authority comes for this? Do you have more of a prescriptive approach as to how they should do it?

Schreider: The answer to both of those questions is "yes." We recognize that just because of the dynamics, the culture, and the bureaucracy, in many of our customers' organizations, security is going to live in multiple silos or departments. Through our P5 Model, we have the ability to basically take and share the governance of the control.

So, for example, the office of the Business Information Security Officer (BISO) or the Chief Security Officer (CSO) typically owns policies and proof. For the technology piece -- which has always been a struggle between the office of security and the office of technology over who owns what -- we can define the control attributes. So, the network-operations people can then own the technical controls, because they are not going to give up their firewalls and their intrusion-detection systems. They actually view those as an integral component of their overall network plumbing.

The beauty of ISSM is that it's very nimble and very malleable. We can assign responsibilities at an attribute level for control, which allows people to contribute and then it allows them to have a sharing-of-power strategy, if you will, for security.

Gardner: There's an analogy here to Service Oriented Architecture (SOA) from the IT side. In many respects, we want to leave the resources, assets, applications, and data where they are, but elevate them through metadata to a higher abstraction. That allows us then to manage, on a policy basis, for governance, but also to create processes that are across business domains and which can create a higher productivity level.

I'm curious, did this evolve from the way that IT is dealing with its complexity issues? Is there an analogy here?

Schreider: It's very similar to how IT is managed, where basically you want to push the services that you provide out to the lowest common denominator, and as close as possible to the customer.

Through this whole concept of what we refer to as BISOs, there are large components of security that should actually live in the business unit, but they shouldn't be off doing their own thing. It shouldn't be the Wild West. There is a component that needs to be structured for overall corporate governance.

We're certainly not shy about lessons learned and about borrowing from what contemporaries have done in the IT world. We're not looking to buck the trend. That’s why we had to make sure that our reference model supported the general direction of where IT has been moving over the last few years.

Gardner: Conceptually I have certainly bought into this. It makes a great deal of sense. But implementation is an entirely different story. How do you approach this in a large global organization, and actually get started on this? To me, it's not so much daunting conceptually, but how do you get started? How do you implement?

Schreider: One of the reasons people come to HP is that we are a global organization. We have the ability to field 600 security consultants in over 80 countries and deliver with uniformity, regardless of where you’re at as a customer.

There is still a bit of work that goes in. Although we have the ISSM Reference Model, and we have a tremendous amount of methodology and collateral, we are not positioning ourselves as a cookie-cutter approach. We spend a good bit of time educating ourselves about where the customer is, understanding where their security program currently lies, and -- based on business direction and external drivers, for example, regulatory concerns -- where it needs to go.

We also want to understand where they want to be in terms of maturity range, according to the Capability Maturity Model (CMM). Once we learn all of that, then we come back to them and we create a road map. We say that, "Today, we view that you are probably at a maturity level of ‘One.’ Based upon the risk and threat profile of your organization, it is our recommendation that you be at a maturity level of ‘Three’."

We can put together process improvement plans that show them step-by-step how they move along the maturity continuum to get to a state that’s appropriate for their business model, their level of investment, and appetite for risk.

Gardner: How would one ever know that they are done -- that they are in a compliant state, and that their risk has been mitigated? Is this a destination, or is it a journey?

Schreider: It's a journey, with stops along the way. If you are in the IT world -- compliance, risk management, continuity of operation -- it will always be a journey. Technology changes. Business models change. There are many aspects to an organization that require that they continually be moving forward in order to stay competitive.

We map out a road map, which is their journey, but we have very defined stops along the way. They may not ever need to go past a level of maturity of “Three,” for example, but there are things that have to occur for them to maintain that level. There's never a time when they can say, "Aha, we have arrived. We are completely safe."

Security is a mathematical model. As long as math exists, and as long as there are infinite numbers, there will be people who are able to scientifically or mathematically devise exploits to the systems that are out there. As long as we have an infinite number of numbers, we will always have the potential for a breach of security.

Gardner: I also have to imagine that this is a moving target. Seven years ago, we didn't worry about Sarbanes-Oxley, ISO, and their ongoing effects in the market. We don't know what's going to come down the pike in a few years -- perhaps even more in the financial vertical.

Is there something about putting this ISSM model in place that allows you to better absorb those unforeseen issues and/or compliance dictates? And is there a return on investment (ROI) benefit of setting up your model sooner rather than later?

Schreider: Absolutely. Historically, businesses throughout the world have lacked the discipline to self-regulate, so there is no question that the more onerous types of regulation are going to continue. That's what happened in the subprime [mortgage] arena, and the emphasis on [mitigating] operational risk is going to continue and will require organizations to have a greater level of due diligence and control over their businesses.

Businesses are run on technology, and technologies require security and continuity of operations. So, we understand that this is a moving target.

One of the things we have done with the ISSM Reference Model is to recognize that there has to be an internal framework, or a controlled taxonomy, that gives you a base root that never changes. What happens around you will always change, and regulations always change -- but how you manage your security program at its core will stay relatively the same.

Let me provide an example. If you have a process for hardening a server, to make sure that the soft, chewy inside is less likely to be attacked by a hacker or compromised by malware, that process will improve over time as technology changes. But at the end of the day it is not going to fundamentally change, nor should it change, just because a regulation comes out. How you report on what you are doing, on the other hand, is going to change almost on a daily basis.

So we have adopted an open standard, the ISO 27001 and 17799 security-control taxonomy. We have structured the internal framework of ISSM around 1,186 base controls, which we have then mapped to virtually every industry regulation and standard out there.

As long as you are minding the store, if you will -- the inventory of controls based on ISO -- we can report against any change at any regulatory level without having to reverse-engineer or reorganize your security program. That level of flexibility is crucial for organizations. When you don't have to redo how you look at security every time a new regulation comes out, the cost savings are obvious.
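
To make that concrete, here is a minimal sketch in Python. The control IDs, names, and regulation mappings are illustrative stand-ins, not HP's actual ISSM content; the point is that a single ISO-based control inventory can be reported against many regulations without ever being reorganized.

    # Each control is keyed by its ISO 27001/17799 taxonomy ID (IDs and
    # mappings below are illustrative, not the actual ISSM inventory).
    controls = {
        "A.10.4.1": {"name": "Controls against malicious code", "implemented": True},
        "A.11.2.2": {"name": "Privilege management", "implemented": False},
    }

    # One control maps to clauses in many external regulations.
    regulation_map = {
        "SOX": ["A.10.4.1"],
        "NERC CIP": ["A.10.4.1", "A.11.2.2"],
    }

    def compliance_report(regulation):
        """Report status for one regulation straight from the single ISO inventory."""
        return {cid: controls[cid]["implemented"] for cid in regulation_map[regulation]}

    print(compliance_report("NERC CIP"))   # {'A.10.4.1': True, 'A.11.2.2': False}

When a new regulation appears, only the mapping grows; the controls themselves, and the program built around them, stay put.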

Gardner: I suppose there is another analogy to IT, in that this is like a standardized component object model approach.

Schreider: Absolutely.

Gardner: Okay. How about examples of how well this works? Can you tell us about some of your clients, their experiences, or any metrics of success?

Schreider: Let me share a few cross-industry examples that come to mind. One of the first early adopters of ISSM was one of the largest banks based in Mumbai, India.

One issue they had was that a great deal of their IT operation was outsourced. They were entering into an area with a significant amount of regulatory oversight for security that had never existed before. They also had an environment where operational efficiencies were not necessarily viewed as positive. Their capacity to apply human resources to solve a problem or monitor something manually was virtually unlimited, because of the demographics of where their financial institution was located.

However, they needed to structure a program to manage the fact that they had literally hundreds of security professionals working in dozens of different areas of the bank. They were all basically doing their own thing, creating their own best practices, and they lacked that middleware, if you will, that brought them all together.

ISSM gave them the flexibility of a model that accounted for the fact that they could have a great number of security engineers without worrying so much about the cost aspect. What was important for them was that everyone was following the same set of standards and the same control model.

It worked very well in their example, and they were able to pass the audits of all of the new security regulations.

Another thing was that this organization was looking to deal in financial instruments with other financial organizations from around the world. They now had an internationally adopted, common control framework, through which they could provide some level of assurance that they were securing their technology in a manner aligned to an internationally vetted and widely accepted standard.

Gardner: That brings to mind another issue. If I am that organization and I have gone through this diligence, and I have a much greater grasp on my risks and security issues, it seems to me I could take that to a potential suitor in a merger and acquisition situation.

I would be a much more attractive mate in terms of what they would need to assume -- what they would be inheriting in regard to risk and security.

Schreider: Sure. When you acquire a company, not only do you acquire their assets, you also acquire their risk. And it’s not unusual for an organization not to pay any attention whatsoever to the threats and vulnerabilities that they are inheriting.

We have numerous stories of manufacturing or financial concerns that open up their network to a new company. They have never done a security assessment, and now, all of a sudden, they have a lot of Barbarians behind the firewall.

Gardner: Interesting. Any other examples of how this works?

Schreider: Actually, there are two others that I would like to talk about quickly. One of the largest public municipalities in the world was in the process of integrating all of their disparate 911 systems into a common framework. What they had was basically 700 pages of security controls, spread over almost 40 different documents, with a lot of duplication -- controls that all of their agencies had been expected to follow over the years.

What resulted was that there was no commonality of security approach. Every agency was out there negotiating their own deals with security providers, service providers, and product providers. Now that they were consolidating, they basically had a Tower of Babel.

One thing we were able to do with the ISSM Reference Model was to take all of these disparate control constructs, normalize them into our framework, and articulate a comprehensive, end-to-end security approach that all of the agencies could then follow.

They had uniformity in terms of their security approaches, their people, their roles, responsibilities, policies, and how they would actually have common proof points to ensure that the key performance indicators and the metrics and the service-level agreements (SLAs) were all working in unity for one homogenized system.

Another example, one that is rapidly exploding within our security practice, is the utility industry. There are the NERC Critical Infrastructure Protection (CIP) regulations -- a whole series of cyber-security standards and requirements that have now been passed.

These just passed in January 2008. All U.S.-based utility organizations -- whether a water utility, an electric utility, or anybody providing and using a control system -- have to abide by these new standards. These organizations are very "stove-piped." They operate in a very tightly controlled manner, and most of them have never had to worry about applying security controls at all.

Because of the malleability of the ISSM Reference Model, we now have a version called the ISSM Reference Model Energy Edition. We have it preloaded with all the NERC CIP standards. There are very specific types of controls built into the system, along with the types of policies, procedures, and workflows that are unique to the energy industry, and also partnerships with products like N-Dimension, Symantec, and our own TCS-e product. We build a compliance portfolio to allow them to become NERC CIP-compliant.

Gardner: That brings to mind another ancillary benefit of the ISSM approach, and that is business continuity -- being able to maintain business operations through unforeseen or unfortunate acts of nature or man. What's the relationship between business-continuity goals and what ISSM provides?

Schreider: There are many who will argue that security is just one facet of business continuity. If you look at continuity of operations, and you look at where the disrupters are, they could be acts of man, natural disasters, breaches of security, and so forth. That's why, when you look at our Service Management Framework, the availability, continuity, and security service-management functions are all very closely aligned.

It's that cohesion that we bring to the table. How these functions intersect with one another, and how we develop common workflows for these processes in an organization, gives the client a sense that we are paying attention to the entire continuum of business continuity.

Gardner: So when you look at it through that lens, this also bumps up against business transformation and how you run your overall business across the board?

Schreider: Continuity of business, and security in particular, is an enabler for business transformation. There are organizations out there that could do so much better in their business model if they were able to figure out a way to get a higher degree of intimacy with their customers, but they can't unless they can guarantee that the transaction is secure.

Gardner: Well, great. We've learned a lot today about ISSM as a reference model for bringing risk, security, and management together under a common framework, with a best-practices and common-controls approach.

I want to thank our guest, Tari Schreider, the chief security architect in the Americas Security Practice at HP's Consulting & Integration unit. We really appreciate your input. Tari, great to have you on the show.

Schreider: Thank you, Dana.

Gardner: I also want to thank our introducer, John Carchide, the worldwide governance solutions manager in the Security & Risk Management Practice, also within HP C&I. Thanks to you, John, as well.

Carchide: Thank you very much, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored podcast discussion. This is the BriefingsDirect Podcast Network. Thank you for joining, and come back next time.

Listen to the podcast here. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast on best practices for integrated security, risk and compliance approaches. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Monday, April 07, 2008

XML-Empowered Documents Extend SOA’s Connection to People and Processes

Transcript of BriefingsDirect podcast on XML structured authoring tools and dynamic documents’ extended role in SOA.

Listen to the podcast here. Sponsor: JustSystems.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about two large growth areas in IT, and these are two areas that are actually going to coalesce and intersect in a relationship that we are still defining. This is very fresh information.

We're going to talk about dynamic documents. That is to say, documents that have form and structure and that are things end-users are very familiar with and have been using for generations, but with a twist. That's the ability to bring content and data, on a dynamic lifecycle basis, in and out of these documents in a managed way. That’s one area.

The second area is service-oriented architecture (SOA), the means to automate and reuse assets across multiple application sets and data sets in a large complex organization.

We're seeing these two areas come together. Structured documents, and the lifecycle around structured authoring tools, provide an endpoint for the assets and resources managed through an SOA. They also provide a two-way street, where the information and data that come in through end-users can be reused back in the SOA, combined with other assets, for business-process benefits.

To help us understand this interesting intersection and the somewhat complex relationship between structured documents and SOA, we are joined by Jake Sorofman. He is the senior vice president of marketing and business development for JustSystems North America. Welcome to the show, Jake.

Jake Sorofman: Thank you, Dana, great to be here.

Gardner: There has been a lot of comment around SOA. It’s been discussed and debated for some time. What I'm seeing in the market is the need for bringing more assets, more information, more data, and more aspects of application activities into SOA to validate the investment and the growth.

Tell us what it is about SOA that makes it so data-obsessed. What is it that we need to bring more of into SOA to make it more valuable?

Sorofman: We've all heard the statistic for ages that 80-plus percent of all the information in the enterprise is unstructured information -- contained within documents, reports, email, etc. -- and doesn't fit between the columns and rows of a database.

That's the statistic we've all grown comfortable with. The reality, though, is that the SOA initiative today, and the whole SOA conversation, has really centered on structured, transactional, and hierarchical data, as opposed to the unstructured content that's stored within these documents. Documents, as they are created and managed today, are often monolithic artifacts, and all the information within those artifacts is locked up and isolated from the business services that comprise an SOA.

Our premise is that you need to find new and unique ways to author your content as extensible markup language (XML), to make it more richly described and widely accessible in the context of SOAs, because it’s an important target source for a lot of these services that comprise your SOA applications.

Gardner: So, there are a number of tactical benefits to recognizing the dynamic nature of documents. Then, to me, there is also this strategic benefit from XML enabling them to provide a new stream or conduit between the content within the lifecycle of these documents and then what can be used in applications and composite applications that an SOA underpins. Help us understand the tactical, and then perhaps the strategic, when it comes to a lifecycle of document and content.

Sorofman: That’s a really good way to think about it. A lot of companies will take on this notion of XML authoring from a tactical perspective. They are looking for new and improved ways to accelerate the creation, maintenance, quality, and consistency of the content that they produce.

It could be all their branded language, all their locked-down regulated language, various technical publications, and so on. They need to streamline and improve that process, so they embrace XML authoring tools as the basis for creating valid XML and for managing the lifecycle of those documents and deliverables.
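
To make the downstream payoff concrete, here is a minimal sketch in Python, using only the standard library. The DITA-like element names and the audience attribute are illustrative assumptions, not a specific XMetaL or DITA schema.

    import xml.etree.ElementTree as ET

    # An illustrative topic authored as structured XML rather than as an
    # opaque word-processing file.
    topic = """
    <topic id="warranty-clause">
      <title>Standard Warranty Language</title>
      <body>
        <p audience="customer">Products are warranted for 12 months.</p>
        <p audience="legal">Warranty is void if the seal is broken.</p>
      </body>
    </topic>
    """

    root = ET.fromstring(topic)

    # Because the content is structured, a downstream service can address
    # fragments directly instead of re-keying text from a monolithic document.
    for p in root.iterfind(".//p[@audience='customer']"):
        print(p.text)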

What they realize in the process of doing so is that there is a strategic byproduct to creating XML content. Now, it’s more accessible by various line-of-business applications and composite applications that can consume it much more readily.

So, it's enriching the corpus that various applications can draw from, beyond traditional relational databases, and allowing this more unstructured content to be more widely accessible.

Gardner: In the past, we’ve seen this document management and content management value through some very large, complex, cumbersome, and frankly expensive, standalone management infrastructure that would, in a sense, find a way of bringing these structured and the unstructured worlds together. It seems to me you’ve found a quicker and more direct way of doing this, or am I overstating it?

Sorofman: I think that’s largely right, to the extent that, at author time, the content is created as XML, particularly when that XML is organized within a taxonomy that makes some sense and makes it discoverable in context. Then, that content can just be reused. It can be reused like any other data asset that’s richly described and that doesn’t require heavyweight infrastructure or sizable strategic investments in content infrastructure.

Gardner: Another thing that fascinates me about this topic is a problem with SOA, and that has been the disconnect between the people and the processes that the IT systems support. We've heard it referred to as "human-oriented architecture," versus SOA. The people in the trenches, in maintenance types of activities, in highly compliance-oriented environments, need to adhere very closely to regulations, and documents become the way they do that.

It seems to me that if you take the documents that these people thrive on and create en masse, and make them available to the SOA and the composite business processes that the architecture supports, then you are able to bridge this gap between the people, the process, and the systems. Help me understand that a little better.

Sorofman: That makes a great deal of sense. Thus far we’ve been talking about the notion of unstructured content as a target source to SOA-based applications, but you can also think about this from the perspective of the end application itself -- the document as the endpoint, providing a framework for bringing together structured data, transactional data, relational data, as well as unstructured content, into a single document that comes to life.

Let me back up and give you a little context on this. You mentioned the various documents that line workers, for example, need to utilize and consume as the basis for their jobs. Documents have unique value. Documents are portable. You can download a document locally, attach it to an email, associate it with a workflow, and share it into a team room. Documents are persistent. They exist over a period of time, and they provide very rich context. They're how you bring together disparate pieces of information into a cohesive context that people can understand.

Documents allow information to stand alone. They're how knowledge is transferred and how information is shared between people. Those are all the good things about documents. But, historically, documents have been a snapshot in time. So, even when you have embraced an XML publishing process, the document is published as a static artifact. It's a snapshot in time. As the information feeding these documents changes, what you see within the document as a published artifact is effectively out of date.

Gardner: I suppose one way that people have gotten around that is to create portals and Web applications, where there is a central way of controlling the data that gets distributed through many views and can be updated. I suppose there must be some drawbacks to the portal perspective. What do we do here? We take the best of a Web or portal application and the best of a document and try to bring them together?

Sorofman: Bingo! It's really about blurring the lines between documents and data, or documents and applications. You keep the portability, the persistence, and the rich context of a document, because documents matter, and sometimes an on-the-glass, portal-style application experience is just not a substitute for what you need out of a document.

But you're also providing a container for much more dynamic and interactive information, and ensuring that what you find in that document is always authoritative -- a direct reflection of the sources of truth in the enterprise. All this information is introduced as a set of persistent links back to the sources of record. What you are looking at isn't an embedded snapshot. You are looking at a reflection of these various systems of record.
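
Here is a minimal sketch of that idea in Python, with made-up service names standing in for real systems of record. The document stores persistent references; every viewing resolves them to the current values.

    # Stand-ins for live services reachable through the enterprise bus.
    systems_of_record = {
        "erp:inventory/widget-9": lambda: 412,
        "crm:account/acme/status": lambda: "active",
    }

    # The document holds references to the data, not copies of it.
    document_fields = {
        "On-hand inventory": "erp:inventory/widget-9",
        "Account status": "crm:account/acme/status",
    }

    def render(fields):
        """Each open of the document re-resolves every link to its source of record."""
        return {label: systems_of_record[ref]() for label, ref in fields.items()}

    print(render(document_fields))  # always current values, never a stale snapshot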

Gardner: I was reminded of the importance of the format of a document, just recently when I was doing some tax forms. It’s fine for me to have all this information on my computer about the numbers and the figures, but I have to then present that back to the IRS through this very refined and mandatory format. I need to bring these two together, and, once I have done that, I can see that the IRS is benefiting from the standardization that the format and document brings, and I am of course benefiting from the fact that I can bring fresh data into that.

But we are now proposing, instead, documents that hold value based on their format, their taxonomy, and their relevance to a specific regulatory impetus or vertical-industry imperative. What we get beyond that is data brought in not just from one Web application, but from perhaps myriad applications and/or the entire SOA, using the policy-driven benefits of an enterprise service bus (ESB) and governance to help direct the right data to the right document.

Sorofman: Absolutely. The other thing that I mentioned is making these documents semantically aware. The document actually becomes intelligent about its environment. It knows who you are as a user, what your role is, what your permission profile is.

Gardner: And it's because of the XML that they can make that leap to intelligence?

Sorofman: Well, it’s actually because of the various dynamic document formats that are emerging today, including xfy from JustSystems. We provide the ability to embed this application logic within the document format. The document becomes very attuned to its environment, so it can render information dynamically, based on who you are, what your role is, and where the document is within a process. It can even interact with its environment. The example I would like to use is interactive electronic technical manuals (IETM) for aerospace and defense. These are all the methods and procedures for maintaining the aircraft, often very, very complex documents.

Gardner: We're talking about large tomes, not just a document, but really a publication.

Sorofman: Exactly, and there are a couple of different issues at work here. The first is that the complexity of these documents makes them very difficult to keep up to date. They draw from many different sources of record, both structured and unstructured, and the problem is that when one of the data elements changes, the whole document needs to be republished. You simply can't keep it up to date.

This notion of dynamic documents ensures that what you're presenting is always an authoritative reflection of the latest version of the truth within the enterprise. You never run the risk of introducing inaccurate, out-of-date, or stale information to field-based personnel.

The second issue is pinpointing the information that someone needs in the context of the task they are performing, so, targeting the information appropriately. You can lose valuable minutes and hours by thumbing through manuals and trying to find the appropriate protocols for addressing a hydraulic fluid leak, for example.

The environment can actually ping the document. For example, a fault is detected in-flight, and the fault detection that happens in real time can interact with the document itself, ping it, and serve up the set of methods and procedures that represent the fix to be made when the plane reaches its destination. The maintenance crew can start picking the parts and preparing to make the fix before the plane lands.
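
A minimal sketch of that interaction, with invented fault codes and topic IDs, might look like this in Python:

    # Illustrative mapping from detected faults to targeted manual fragments.
    fault_to_procedure = {
        "HYD-LEAK-02": "procedure/hydraulics/seal-replacement",
        "ENG-TEMP-07": "procedure/engine/sensor-check",
    }

    def on_fault_detected(fault_code):
        """Serve only the fragment the maintenance crew needs, not the whole manual."""
        topic = fault_to_procedure.get(fault_code)
        if topic is None:
            return "No targeted procedure; fall back to a full manual search"
        return f"Serve document fragment: {topic}"

    print(on_fault_detected("HYD-LEAK-02"))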

Gardner: It almost sounds like we are bringing some of the benefits that people associate with search into the realm of documents, because they are now structured XML-published and authored documents. There’s XML integration among and between them and their sources. You could do a search and not just come up with an 800-page document, but a search within discrete aspects of that document.

Sorofman: That's exactly right. You start seeing some blurring between all these categories of technology around information -- search and retrieval, semantics, document management, and data integration. It's all resulting in a much richer way of working with and utilizing information.

Gardner: So, we are bringing together what had been document management, content management, data integration, data mashups, compound documents, forms, and requirements for regulatory compliance. That’s why I think it relates to SOA so well.

We're finding a commonality between these, rather than having them be completely separate things that only people physically shuffling complex documents around their desktops could manage. We're starting to automate, and to bring in the IT infrastructure to help, in this mixing and matching between these formerly siloed activities.

Sorofman: Yes, pretty much so.

Gardner: Alright. One of the things that is a little bit complex for me is understanding the way the content, the XML, and the data flow among and between documents, and also how they could flow within the SOA. I think this is still a work in progress. We are really on the cutting edge of how these two different areas come together.

Maybe we could go a little bit into the blue-sky realm for a moment. How do you think SOA architects should start thinking about dynamic documents, and then, conversely, how should those who are into structured document authoring start thinking about how that might benefit a larger SOA type of activity?

Sorofman: Great questions. To start with, I don't think that SOA architects have given a great deal of thought, to date, to unstructured content and how it plays into SOA architectures. So, there certainly needs to be consideration paid to how you get the information in, in a way that makes it richly described and reusable -- more akin to relational data than to documents themselves.

Structured authoring needs to be part of the thinking around any company's knowledge management (KM) strategy in general, with specific importance around how it feeds into the overall SOA strategy. Today, I don't think there has really been an intersection between KM and SOA in this respect.

Structured-authoring professionals, meanwhile, need to start looking beyond their traditional domain of technical publications and into other areas where XML authoring is relevant and appropriate in the enterprise. It's becoming much more broadly deployed and considered outside the traditional domain of tech docs.

There's also a convergence happening between structured documents, structured authoring, and application development, particularly as it relates to this notion of dynamic documents. The creation of business-critical documents becomes much more akin to an application-development process, where you are essentially assembling various reusable fragments and components from across the enterprise into a document that's treated more like an application than a monolithic artifact -- an application that has its own lifecycle and needs to be governed in a more application-centric way. So, it's starting to really change people's roles and thinking, both on the architect side and on the traditional structured-authoring side.

Gardner: Sure, it’s really about people, process, and policy coming together, not just inside the domain of IT, but in the domain of where people actually do their work and where they have traditionally done work for generations.

Sorofman: Very true.

Gardner: Okay, I think I get it now. But to better understand this, it's not just "tell." A lot of times it helps to "show." Can you give me some examples from the real world, where people are starting to move toward these values -- use-case scenarios where dynamic documents extend beyond the document function and get into application development, too?

Sorofman: Absolutely. There are three usage patterns I like to speak about that are illustrative of dynamic documents and how they are being applied today. The first I call "information sharing," broadly: the idea of one-to-many dissemination of information, in the form of a document, to various distributed, field-based personnel.

A good example of that is the IETM -- any kind of business-critical technical manual or publication that needs to be shared with a variety of different people, and where there is a very high cost to that information being either poorly targeted or out of date.

This is the idea of bringing together all these different information sources, mashed up into a single dynamic document that comes to life. As the source information changes, what you see in that document changes. The document also has the ability to be semantically intelligent about its environment and about the person who is accessing it, so it can render a view of information that's appropriate to the context of its usage.

The second example is really taking the same concept of dynamic documents and applying it to collaborative processes, where you need to bring together various stakeholders, internally and externally, toward the goal of getting some sort of team-based process executed or completed.

Think about something like sales and operations planning (S&OP), where various stakeholders come together cross-functionally and periodically, maybe monthly or quarterly, to make trade-off decisions -- horse-trading decisions -- about which projects to invest in and which ones to disinvest in, and how to optimally align supply and demand.

That's typically the sales and marketing group; the manufacturing group, with a view of capacity and a view of inventory; and the finance team, with a view of return on investment, return on assets, and internal rate of return. These teams come together to work on making these decisions, and they often do this by sharing documents. They pull reports from all their various systems of record: manufacturing execution systems, inventory-control systems, ERP systems, supply chain, CRM.

Even though these systems have fairly authoritative, trustworthy information within them, as soon as you pull a report, it's frozen in time. So, these teams tend to wrestle with validating and reconciling all this disconnected and static information before they can make decisions. The dynamic document allows all this information to come together as an authoritative reflection of all these different source systems, but still allows these teams to work in the format they are most comfortable with, which is to say, documents.

Gardner: Because there is a semantic and intelligent aspect to this, content that has been shared collaboratively would present itself to each of these individuals through a different document format, based on what they do within their traditional role.

Sorofman: That's exactly right. It will serve itself up dynamically, based on what's appropriate for stakeholders to see, given their permission profile or their role. It could be a different level of abstraction or a different level of detail. It can actually change the information that's being displayed based on where it is in a workflow process. The document can become aware of its workflow lifecycle state and render different information based on where it's been, where it's going, and where it is in the process.
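
A minimal sketch of that role- and state-aware rendering, with assumed role names and fields, might look like this in Python:

    # One shared, authoritative data set behind the S&OP document.
    shared_data = {"forecast": 10_000, "capacity": 8_500, "unit_margin": 4.20}

    def render_for(role, workflow_state):
        """Serve a different view of the same data based on role and lifecycle state."""
        if role == "finance":
            view = {"projected_revenue": shared_data["forecast"] * shared_data["unit_margin"]}
        elif role == "manufacturing":
            view = {"capacity_gap": shared_data["forecast"] - shared_data["capacity"]}
        else:
            view = {"forecast": shared_data["forecast"]}  # default summary view
        view["editable"] = (workflow_state == "draft")    # lock views after sign-off
        return view

    print(render_for("finance", "draft"))
    print(render_for("manufacturing", "approved"))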

Gardner: This is strikingly different from what's done at many organizations I am aware of. They have one big spreadsheet that everyone shares, which really is one-size-fits-all -- and that isn't the way people really work.

Sorofman: Everyone has had some experience with spreadsheets gone wrong, and with the high cost and perverse consequences of trying to force-fit spreadsheets into a critical planning process. So, I think most people can empathize with this specific challenge.

Gardner: Alright. Let's talk about the business case for this. It sounds good theoretically. We've certainly got a technology that can help improve productivity by extending data in the formats people are familiar with. There are compliance and regulatory benefits, and risk reduction, as a result.

And, of course, as we mentioned earlier, there is the sharing, repurposing, and reusing of this across the SOA value stream in the business. But, dollars and cents: how do people go and say, "Wow, this sounds like a good idea. I want to convince somebody to invest in it, but I need to talk to them about return on investment"?

Sorofman: You can make a business case for this sort of approach at anywhere from a very basic to a much more sophisticated level. At the most basic level, the ROI around XML authoring is pretty straightforward. Rather than authoring documents as monolithic artifacts, creating them as reusable components helps to accelerate, and reduce the cost of, creating new documents and deliverables, and it makes information much more reusable. That has a cost implication and a time-to-market implication.

If, for example, you are launching a product that's highly dependent on documentation -- and documentation is typically one of the things we do at the end of the product-launch cycle -- that becomes a bottleneck, with implications for foregone revenue, excessive cost, missed deadlines, etc.

There is also an issue around localization, multi-format output, and multi-channel output of this content -- taking the content and translating it into different languages and different output formats.

Gardner: Localization. So, you have the same document format, but the input and output can be in a variety of different languages.

Sorofman: That’s exactly right.

Gardner: That would save a lot of time and money. Instead of the full soup-to-nuts translation, you only have to translate exactly the metadata that’s required.

Sorofman: That’s exactly right, and that’s a tremendous ROI. There are many companies that look at the ROI of XML authoring exclusively from the perspective of localization, and it’s often said to have between a 40 and 60 percent cost impact on localization itself.

Gardner: In fact, you are automating a large portion of the translation process.

Sorofman: Yes. Also, think about the change-time implications of what we are talking about. In the traditional monolithic model, when you need to make changes to documentation, you are making changes across all the various documents that consume those information fragments, in all the various formats, in all the various localized versions, and in all the derivations and permutations of an information source. That becomes extremely complex, extremely costly, and error-prone.

In the XML authoring world, you author once, publish many times, and maintain a single native format. You maintain that one reusable component and allow changes to be propagated across all the various consuming documents and deliverables.
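
Here is a minimal sketch of that author-once, publish-many pattern in Python; the element names, locales, and output channels are illustrative assumptions.

    import xml.etree.ElementTree as ET

    # One reusable fragment maintained in a single native XML format.
    fragment = ET.fromstring(
        '<warning id="w1"><en>Disconnect power first.</en>'
        '<de>Zuerst den Strom abschalten.</de></warning>'
    )

    def publish(lang, channel):
        """Derive every localized, channel-specific output from the one source."""
        text = fragment.find(lang).text
        if channel == "html":
            return f"<p class='warning'>{text}</p>"
        if channel == "text":
            return f"WARNING: {text}"
        raise ValueError(f"unknown channel: {channel}")

    print(publish("en", "html"))
    print(publish("de", "text"))

A change to the fragment propagates to every consuming output the next time it is published, which is where the localization savings come from.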

Gardner: And, because we are doing this separation, it also strikes me that there is a security benefit here. One of the things that troubles a lot of IT folks and keeps them up at night is the idea that there are different versions and copies of full-blown documents and databases, in some cases, on each and every PC and laptop -- some of which may disappear in an airport. It strikes me that by separating this out, what might end up in nefarious hands at the airport would be the form, but not the data.

Sorofman: It’s a great point.

Gardner: So, there’s a security benefit here as well, when you are able to control things, and not have all the dynamic data distributed at the end point, but, in a sense, communicated to that end point when it’s the right time.

Sorofman: Absolutely. The benefits we have been looking at so far are really the operational benefits of XML authoring, and how that impacts the bottom line, time to market, etc. There are also bigger benefits that come from the actual consumption of dynamic documents -- how you ensure that you are putting information only in the hands of the people who need it, and that it's always up to date.

That clearly has implications for risk and compliance in many different application areas, and for accelerating, improving, and optimizing business processes by eliminating the errors introduced by re-keying information between disconnected process steps where documents are involved.

Gardner: So the human error factor goes down as well?

Sorofman: Dramatically.

Gardner: How does that work exactly?

Sorofman: Let me give you a quick example of one of the other usage patterns that's worth speaking about. It's what I like to call "document process transformation." If you think about any business-process flow, there are typically silos of automation -- the flows within the process that are highly tuned and very transactional, with virtually no human intervention.

They are highly automated because they can be. Everything can be reduced to a transaction and thus handled by machines. But then there are manual gaps between these silos of automation that often eliminate, or at least erode, some of the benefits of the automation.

These are typically highly human-centric phases of a process, often very document-centric. It's where people need to get involved. For example, think of a loan application. The front end of the application is a form. It's very form-based, and it's about capturing information about the applicant.

Some of the information can be handled transactionally, so the form is able to send the information to a back-end system where it’s processed transactionally, but some of the information needs to be viewed and analyzed by human beings, who actually have to look at it in context and make a judgment about the applicant.

At the front end, the form becomes a transaction, and then it needs to be served up as a set of document renditions, based on the various personnel roles within the process, so the application can be viewed to make a judgment about the loan.

The document can actually morph as it moves through the process, based on what that person needs to see or what’s appropriate for them to see. At the end of the process, a judgment is made about the loan. It’s either approved or it’s rejected and it becomes a transaction again.

The information can be extracted from the document set automatically and pulled into a back-end process, such as the account-opening procedure. Then, information can be extracted from the document set to feed a traditional publishing pipeline and send a custom acknowledgment letter back to the applicant, welcoming them to the bank and letting them know that the loan has been approved.

So, you've gone from silos of automation separated by manual gaps to a much more streamlined, straight-through process, where you have transactions driving document renditions and document renditions driving transactions.
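
A minimal sketch of that flow, with invented field and role names, might look like this in Python:

    def submit_application(form):
        """Front end: the captured form becomes a transaction."""
        return {"applicant": form["name"], "amount": form["amount"], "score": 640}

    def rendition_for(application, role):
        """Middle: a human-readable rendition, shaped to the reviewer's role."""
        if role == "underwriter":
            return f"{application['applicant']}: credit score {application['score']}"
        return f"Loan request for {application['amount']}"

    def decide(application, approved):
        """Back end: the human judgment becomes a transaction again, and the same
        data feeds the publishing pipeline for the acknowledgment letter."""
        letter = (f"Dear {application['applicant']}, your loan has been "
                  + ("approved." if approved else "declined."))
        return {"open_account": approved}, letter

    app = submit_application({"name": "J. Smith", "amount": 25_000})
    print(rendition_for(app, "underwriter"))
    print(decide(app, approved=True)[1])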

Gardner: This is a great example of why this is relevant for SOA. First off, you're talking about how the human input of data needs to be improved -- the garbage-in, garbage-out issue. If you are going to reuse this data across multiple applications, you want to make sure it's good data to begin with. So, that's one value.

The other is this controlled, event-driven workflow, which again is part of what people are trying to build SOAs to support -- these composite, process-oriented types of activities that are core to any business.

Then, the last fascinating aspect is the notion that we are combining what needs to be a human judgment with what is going to be a computer-driven process. These dynamic documents, in a sense, put up little stop signs that say, "Stop, wait, let the human activity take place." The human relates back to the document, the document relates back to the business process, and the business process is managed and directed through the SOA.

Sorofman: That’s exactly right. As long as people are involved, there will be documents, but traditionally documents have been fairly unintelligent and inefficient in how they have been authored, organized, managed, and used as a basis for consuming information. This is just what documents have always wanted to be.

Gardner: I dare say that documents have been under-appreciated in the context of SOA.

Sorofman: I couldn’t agree more.

Gardner: Well, great! Thanks for shedding some more light on these issues. Tell us a little bit about how JustSystems works its value in regard to the dynamic documents that are now holding much more relevance in a larger SOA.

Sorofman: JustSystems has two product lines that are very relevant to this discussion. The first is a product called XMetaL, one of the leading structured authoring and publishing solutions, which provides the basis for creating valid XML content as part of the authoring process. I mentioned this idea of being able to create valid XML, as opposed to monolithic document artifacts, at author time. XMetaL provides a basis for technical authors, but also for business authors -- the occasional contributor, the subject-matter expert, the accidental author -- to create valid XML without ever seeing an angle bracket. It's a very intuitive WYSIWYG environment for creating XML as a byproduct of a very intuitive authoring process.

That's how you feed the beast -- how you get the XML into the system -- to make it much more richly described and more reusable as part of downstream processes.

On the other side of the equation, we have a product line called xfy, which is a document-centric, composite-application framework that allows you to bring together all these various information sources, structured and unstructured, and mash them up within a single dynamic-document application.

It's blurring the lines between documents and applications -- providing the user experience that people appreciate from a document, but with the authoritative, dynamic, and interactive information that has been most closely associated with traditional business applications. The document becomes the application.

Gardner: Of course, we are using XML, which is a standardized markup language. We are also going to be using vertical industry taxonomies and schemas that are shared, and, therefore, this is a fairly open opportunity to share and communicate and collaborate.

Sorofman: That’s right.

Gardner: Well, great! Thanks again. We've been talking about the XML empowerment of documents and how to extend service-oriented architecture's connection to people and processes through these types of documents and structured authoring tools. To help us understand this, we have been talking with Jake Sorofman. He is the senior vice president of marketing and business development at JustSystems North America. Thanks for joining us, Jake.

Sorofman: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.

Listen to the podcast here. Sponsor: JustSystems.

Transcript of BriefingsDirect podcast on XML structured authoring tools and dynamic documents’ role in SOA. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.