Thursday, October 01, 2009

Cloud Computing by Industry: Novel Ways to Collaborate Via Extended Business Processes

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.

As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?

It's as if the past benefits of Moore's Law -- leveraging the ever-increasing density of circuits to improve performance while also cutting costs -- have now evolved to a cloud level, trying to (in the context of business problems) do more for far less.

Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.

We'll learn here about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.

Here to help explain the benefits of cloud computing and vertical business transformation, we welcome Mick Keyes, senior architect in the HP Chief Technology Office. Welcome, Mick.

Mick Keyes: Thank you very much.

Gardner: We are also joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Hello, Rebecca.

Rebecca Lawson: Hello.

Gardner: And, we're also joined by Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. Welcome to the show, Chris.

Chris Coughlan: Thanks very much.

Gardner: I'd like to start with Rebecca, if I could. Tell us a little bit about the cloud vision, as it is understood at HP. Where does this fit in, in terms of the business, the platform, and the tension between the technology and the business outcomes?

Overused term

Lawson: Sure, I'm happy to. Everyone knows that "cloud" is a word that tends to get hugely overused. Instead of talking specifically about cloud, at HP we try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.

Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the implicated integrations and such?"

That's where we're coming from, when we look at cloud services, finding new ways to solve problems. Most of those problems have to do with vast amounts of data that are traditionally very hard to access by the kinds of application architectures that we have seen over the last 20 years.

Gardner: So, we're talking about a managed exposure of information, knowledge, and things that people need to take proper actions on. I've also heard HP refer to what they are doing and how this works as an "ecosystem." Could you explain what you mean by that?

Lawson: As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place or a dynamic that has technology-enabled services, cloud services that are accessible and sharable and that help collaboration and sharing across different constituents in that vertical market.

We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.

A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.

Gardner: So, for these members within the ecosystem, their systems alone cannot accomplish what a third-party or cloud-based platform can accomplish in terms of cooperative, collaborative, coordinated, managed, and even governed business processes.

Lawson: That's right. In fact, I'll throw it over to Mick to talk about how this is really different and really how it serves the greater purpose of the manufacturing community. Mick?

Multiple entities

Keyes: A good example is the manufacturing industry, and indeed the whole linear type supply chain that is in use. If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere up to 15-20 different entities involved in a supply chain.

In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.

In the traditional way we looked at how that supply chain has traceability, they would have the infamous -- as I would call it -- "one step up, one step down" exchange of data, which meant that each entity in the supply chain exchanged information only with the next one in line.

That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.

What we are saying to industry at the moment -- and this is our thesis here that we are actually developing -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What a cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and give greater visibility into the supply chain itself.

Food is one example, but you've got lots of other examples in different industries -- the pharmaceutical industry, of course. You've also got the aeronautical industry and the aerospace industry. It's any supply chain that's out there, Dana.

Gardner: Mick, you mentioned this hub and this platform. Is this just a blank canvas that these vertical industries can then come to and apply their needs or is there a helping hand, in addition to the strict technological fabric, that can apply some level of expertise and understanding into these verticals?

Keyes: If you look at the way we're defining the whole ecosystem, as Rebecca referred to around cloud computing, we have the cloud-optimized infrastructure, which HP has got a great pedigree in. Then, we're looking, from a platform point of view, at the next level. From this, we'll launch the different specific services.

In that platform, for example, we've got the components to cover data, analytics, software management, security, industry-specific type information, and developer type offerings as well. So, depending on what type of industry you're in, we're looking at this platform as being almost a repeatable type of offering, and you can start to lay out individual or specific industry services around this.

Gardner: The reason I asked is that there are a number of prominent cloud providers nowadays who do seem to provide mostly a blank canvas. It's very powerful. The cost benefits are there. It gives developers and architects something new to pursue, but there is not much in addition to the solution level there.

A little bit more

Keyes: When you offer or develop specific services and such for industry, you need a little bit more than being able to look at it from a technology point of view. Industry knowledge, we have found, is key, but also, when we talk to the businesses and each element of a supply chain -- and food is a good example, because it's global -- there are different cultural influences involved, such as the whole area of understanding governance and data, where it can and cannot be stored.

Technology is obviously a very important part of it, but how we look at producing services and who can consume the services is equally important. Also, we see this type of initiative as stimulating a lot of new innovation. When we use our platform to create certain pockets of data, for want of a better word, we are looking at how we can mash up different types of services.

Some companies will come with a good idea. There are other partners, excellent partners, who are developing very specific and good applications. We will use this hub and our business knowledge, as well, to look at the creation of new types of services and the mashup of different services.

It allows us also to talk to the business people in different parts of the supply chain and different industries to look at very fast, creative ways of offering new services for their industry.

Gardner: Chris Coughlan, tell us a little bit about your competency center, how you started, and perhaps illustrate with an example how this technological knowledge and appreciation of the business issues come together?

Coughlan: As a follow-on from what Mick said, we have infrastructure as a service (IaaS), we have platform as a service (PaaS), and we have software as a service (SaaS). And, in the industry, we were told that there was going to be everything as a service, but really nobody had started defining what that meant beyond SaaS.

There were a lot of health scares and food scares over the last year or so. We looked at that and said, "This is a very good opportunity to actually develop everything as a service."

We also came to the conclusion, which is very important, that there are two aspects of that. There has to be collaboration along all the various company supply chains, particularly if you want to recall something, or if you want to do track and trace. As well as that, there has to be standardization in what you are doing. So, that led to our relationship with GS1 and the development of the recall system.

Gardner: I spoke in my setup about both lowering cost and enabling new levels of productivity and innovation. Have you found that to be the case? Are you able to do both of those?

Chain of islands

Coughlan: Absolutely. If you think about it, the current recall systems in the food industry -- and Mick talked about them -- go from "farm to fork," so to speak. Look at all the agencies. There's manufacturing, suppliers, retailers, and whatever. A piece of food can be caught anywhere within that supply chain, and each company and each unit in that supply chain is really behaving as an island in itself.

They might have their own systems, but then those systems are not linked. If there's a problem, you have to go from automated systems to manual systems, whatever. What we've done is we have linked all those systems up. We have agreed on a standard template from the GS1. This is the information that all those agents along the supply chain will share with each other, so that food can be recalled very quickly and very effectively.

If that's done, you can see that from the health and safety issue. You can see it from a contamination issue. You can see it from getting items off shelves and preventing items from being shipped. This can happen quite fast, as opposed to the system we have today.

Gardner: This is a payback that seems to have a very positive impact across that ecosystem, for the consumers, the suppliers, the creators, and then the brands, if they are involved.

Coughlan: Absolutely. First of all, as a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life and death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.

As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.

Keyes: Just to interject there. Those are very good points that Chris is making. We see a big appetite from different people in supply chains to get involved in this type of mechanism, because they look at it from a brand or profit-center point of view. As a company, you'll be able to get greater visibility into your process or into your brand efforts right through to the consumer.

In the older way supply chains worked, as Chris mentioned, it was linear -- one step up, one step down. The people at the lower end of the supply chain, for want of a better word, often weren't able to find out how the products were being used by consumers.

We offer SaaS now, not just to any individual entity in the supply chain, but to anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain. So we really look at it from a positive view also, about how this is creating benefits from a business point of view.

Gardner: So, a critical business driver, of course, is the public-safety issue. But, in putting into place this template of cloud process, we perhaps gain a business intelligence (BI) value over time with greater visibility across these different variables in the supply chain itself.

Addressing food safety

Keyes: Absolutely. There are quite a lot of activities you see around the world at the moment around greater focus on food safety. In the U.S., for example, HR 2749, a bill that's gone to Congress, is really excellent in how it looks to address the whole area of food safety.

If you look at that, it's leaning towards the concept of greater integration in supply chains. Regulatory bodies, healthcare bodies, and sectors like that will very quickly be able to address any public safety issues that happen.

We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or get involved in information sharing to a certain degree. We see that as a cool component also that we can perhaps do some BI around and be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.

Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.

You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.

Gardner: Rebecca, do you have something you want to offer?

Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. For example, if I am a peanut packaging person, I take peanuts from lots of different growers and I package them up. I send some to the peanut butter companies and some to the candy manufacturing companies or whatever.

I already have data in house about what I am doing. All I have to do to participate in this traceability example or a recall example is once a day cut a report, stream the data up into the cloud, and I am done.

It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.
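
To make the "cut a report, stream the data up" step concrete, here is a minimal sketch, in Python, of what a packager's nightly contribution to such a traceability hub might look like. The hub URL, the record fields, and the helper functions are purely illustrative assumptions for this example; they are not HP's or GS1's actual interfaces.

# Hypothetical sketch only: a daily job that reports the day's shipments to a
# traceability hub. The endpoint, field names, and participant are assumptions.
import json
import urllib.request
from datetime import date

HUB_URL = "https://example-traceability-hub.invalid/api/shipments"  # placeholder

def build_daily_report(shipments):
    # Wrap the day's shipment records in a simple envelope. Each record is
    # assumed to carry what a recall would need: lot number, product ID (GTIN),
    # and where the goods went.
    return {
        "reporting_party": "Example Peanut Packaging Co.",  # hypothetical participant
        "report_date": date.today().isoformat(),
        "shipments": shipments,
    }

def submit_daily_report(report):
    # POST the report to the hub as JSON (illustrative, not a real API).
    body = json.dumps(report).encode("utf-8")
    request = urllib.request.Request(
        HUB_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    todays_shipments = [
        {"lot": "LOT-2009-0042", "gtin": "00123456789012", "shipped_to": "Candy Maker A"},
        {"lot": "LOT-2009-0042", "gtin": "00123456789012", "shipped_to": "Peanut Butter Co."},
    ]
    print(submit_daily_report(build_daily_report(todays_shipments)))

Because each participant pushes only data it already holds, the hub can later walk those lot numbers up and down the chain when a recall is triggered.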

Viewing the flow

So, for example, maybe there is a peanut outbreak, and I, as the peanut packaging person, can quickly go and kind of see what the flow was across the different participants of growers, retailers, consumers, and all that. The cloud technology allows us to do that, and that's why we designed it this way.

The platform that HP created in this whole ecosystem is geared towards harnessing data and information that's pretty much already there and being able to access it for key questions, which would have been nearly impossible to answer, say five years ago, when the technologies were just not around to do that.

It's a win-win-win for individual companies, which can now reduce their insurance exposure, because they've got their processes covered. They have the data. It's already shared. So, it's a major step forward for manufacturing. We think this kind of a model is not just for manufacturing. This just happens to be one good use case that we can all relate to as consumers, because everybody is afraid of a Salmonella outbreak. It affects all lives. But, it's applicable to other industries as well.

Gardner: Of course, a recent example would be the flu outbreak, as well. So, there are lots of different ways in which a common currency of shared data and information can be very critical and important.

I also want to look at the importance of that common currency, which, in this case, is standardized service calls and application programming interfaces (APIs), and what we have come to be familiar with as Web services is now enabling this cloud synergy across these ecosystems.

I wonder if anyone would like to take a stab at my premise that, in the past, we have looked for productivity from increased cycles in the silicon and on the hardware and in IT itself. But, is there a new possibility for a higher level of Moore's Law, so to speak, in applying these cloud approaches to productivity? Does anyone share my enthusiasm for that?

Lawson: Absolutely. In fact, I could care less how powerful a server is. What I care about are the problems that I am trying to solve. If I'm in the environmental world, if I'm government, or if I'm a financial services organization, I want to be able to creatively think about how I serve my customers.

These new technologies are allowing HP's customers to solve problems much differently than they did before, using a wider expanse of currency, as you said, which is information. Information is the currency of our era.

Structured vs. unstructured

One of the big shifts going on is that information in the past 5, 10, or 20 years has been largely held in very structured databases. That's a really good thing for certain kinds of data, but there is other data now that's just streaming into the Internet, streaming into the cloud, which is held in a more unstructured fashion.

We can now deal with that data. We can now run search and query across semistructured or unstructured data and get to some interesting results really quickly, as opposed to more traditional ways of holding certain kinds of data in a relational database. We don't think that it's going away. We just see that there is a whole new currency coming in through new ways to access information.

Coughlan: I'm a great believer in applying Moore's Law to a lot of things beyond technology -- to society, to productivity, as you said, and whatever. It's the underlying technology that originally defines Moore's Law, which actually then drives the productivity, the change in society, etc.

But, you've heard of another law called Metcalfe's Law, where he talks about the power of the network. We are bringing in the power of collaboration. What you have then are two of these nonlinear laws, which are instituting change, reducing price, doubling capacity, etc. You've even got a reinforcing effect there, which might push things along even faster than Moore himself predicted.
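
For a rough sense of why those two effects compound, the textbook formulations can be written out; these are the standard statements of the laws, not figures from this discussion. Moore's Law has capability per unit cost doubling roughly every two years, and Metcalfe's Law has the value of a network growing with the number of possible connections among its n participants:

C(t) \approx C_0 \cdot 2^{t/2} \qquad\qquad V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \sim n^2

The collaboration hub rides the second curve while the underlying infrastructure rides the first, which is the reinforcing effect described here.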

Gardner: A part of this has to be, of course, cooperation and trust. What is it about the platform for manufacturing that HP has developed that enables that trust and that places this hub, this third-party, in a position where all the members of the ecosystem feel that they are protected?

Coughlan: This is one of the reasons that we partnered with GS1 in this whole space. You're right, Dana. It would be something that industry wants to know immediately. Why would we trust an IT provider, for example, to be the trusted advisor to integrate all the different elements of the supply chain?

We're pretty much aware of that. In our discussions, we found that GS1, the international standards body, is trusted by industry. This is their great strength. They are neutral. They are in 110 different countries. They have done a lot of work on getting uniform standards for how different systems can integrate, especially in this whole area of supply chain management.

We look to GS1 as the trusted advisor out there, with industry, with governments, around safety, around standards, and on traceability. They're not a solution provider, but they will go to best in class with their ideas.

They have asked the industry for ideas. They have gone to the industry and explained the process, for example, of how recall, as an example, should work and how traceability should work. So, we feel that to partner with somebody like GS1 is key to getting trust in the industry to apply these types of systems.

Gardner: Do you expect to see additional partnerships, and should standards bodies be thinking about moving towards partners in the cloud, so that they can extend their role as a trusted advisor, as a neutral third-party, but be able to execute on that now at a higher abstraction?

Win-win situation

Keyes: Absolutely. This is a win-win for everybody here. There are lots of really good partners out there who have, for example, point solutions that are in industry at the moment. We feel there are a lot of benefits to these partners through using GS1 standards.

Let's say that most of them already use GS1 standards at the moment and are compliant, but they can also work with our traceability hubs to see whether they can help exchange information. In return, we'll be able to supply information and publish information through their systems back to industry as well.

GS1 is important in this also in bringing together the industry -- not just the actual manufacturers or the retailers, but also the technology people in the industry -- so there will be uniform standards. We all know from developing traditional, tightly coupled systems in manufacturing and the supply chain that you need an easier means of collaboration. GS1 has done an excellent job in the industry defining what these standards should look like.

Gardner: I know we've been focused on manufacturing, but not to go too far off the beaten track, there's also this need for greater cooperation between public and private sectors across regulatory issues. Have we seen anything moving along those lines -- a trusted partnership around a manufacturing platform like the one HP has provided, where some sort of public agency might then reach out to these private ecosystems?

Keyes: Even without dwelling on the food area, often what you find is that governments bring out laws and regulations, and they say industry must apply these laws. Often, you get a bit of a standoff, where industry would immediately say, "Okay. This is government telling us what to do, etc."

In our journey of what we've been trying to do around this food industry, a lot of time we talk directly to industries themselves. Industry now also sees what the issues are and they agree with what the governments and the regulatory bodies are trying to do.

Industry is now looking at this type of model to take a preemptive step and to show that they are also active in the whole area of food safety. It's in their interests to do it, but now I think they have a mechanism, which industry, government, and regulatory bodies can actually use.

For example, if you look at the recall project that we've been involved in, we're taking data and accessing data in industry and in retailers also, but we're looking at a service that we can publish for industry. We call it visibility type services, where, at a glance, they can look at where all elements of the recall might be and what industries are actually being affected.

We're very keen to share or offer services to different regulatory bodies -- be it government, or directly with consumers and consumer bodies as well -- and we have been pretty active in discussing this with them.

Gardner: Thank you, Mick. Chris, do you have any insights as well in terms of this public-private divide?

Variety of clouds

Coughlan: Mick has said most of it there and Rebecca spoke earlier on about the ecosystem. As things begin to develop, you will be able to see public clouds, private clouds, and hybrid clouds. Then, you'll have a cloud portal accessing those under various circumstances, to solve various problems, or to get various pieces of information.

I see third-party point solutions feeding into those clouds. That's one of the areas that we offer -- third-party solutions -- be it in the food industry or other industries. They feed into our cloud, and that information can be either private information or collaborative information, where they define where they are going to do the collaboration, or it could be public information.

So, some of that would sit in the private cloud, some of the information could go into the public cloud, and other information could live in a hybrid type of cloud.

Gardner: Rebecca, it seems like we could go on for hours about all these wonderful use-case scenarios and potential innovation improvements on process and the crossing of divides. But, the ecosystem is not just in the supply chain.

It also needs, I suppose, to be pulled together in terms of the cloud infrastructure, and the players that need to come together in order to enable these higher level business benefits. It strikes me that there are not that many companies that can be in a position of pulling together the ecosystem on the delivery side of these services.

Lawson: That's true, and what's different about what we are doing is we're taking a top-down approach. Right now, a lot of the industry is talking about cloud, and a lot of folks are focused on things like IaaS, virtual machines as a service, and things like that.

But you can switch it around and say, "How can we apply technology in a new way and build out the platform to support the services that industries need?" Then, for those services you build out the right kind of infrastructure and scale out an infrastructure basis on which all of that can run very smoothly.

Working backward

Now, you have a really good organizing principle to say, "If we're going to solve this problem of traceability, food track and trace, and recall, how are we going to solve that problem?" Everything really drives from there, as opposed to saying, "What's the cheapest platform on which we can run some kind of food traceability?" That's just coming at it backward.

In fact, a good analogy to what we are doing with these vertical ecosystems is the well-known use case of Salesforce.com and the Force.com platform that grew up around it.

Most folks realize that Salesforce.com started with a sales-force automation product. Then, it broadened into a customer relationship management (CRM) product, and then, before you knew it, they had a platform on which they built a community of service and application providers, their AppExchange. That community is enabled by their underlying platform, and it serves a horizontal function for sales- and marketing-oriented or adjacent types of services.

If you pull that analogy out into an industry like manufacturing, transportation, or financial services, it's the same sort of thing. You want that platform of commonality, so different contingents can come and leverage the adjacencies to whatever it is that they are doing.

We really see that this ecosystem approach is the way to think about it, and vertical is the way to think about it, although, obviously, different verticals will blend together. We're working on similar projects in the transportation arena, where manufacturing can cross over quite quickly into public transportation and add lots of new development. So we are pretty excited about all these new opportunities.

Gardner: So, we actually can start thinking about pulling together ecosystems of ecosystems?

Keyes: Absolutely. We look at what we're doing at the moment around food and how that might affect the whole healthcare area as well. There are a lot of new innovations coming out in the biomedical area as well, of how we can expand things like food, pharmaceutical, or drugs to the whole health system. As you said, Dana, we see that as a very important area of collaboration between different ecosystems.

Lawson: One more point is that the ecosystem implies that it's not just about the technology. It's about the people. So, different aspects of the ecosystem are going to be human. They may be machine. They may be bits of code. There are conditions and tons of events. The ecosystem is a more holistic approach, in which you have the infrastructure, development and runtime environments, and technology-enabled services.

Gardner: If I'm a member of an ecosystem -- be it in the manufacturing, vertical, health, food recall, regulatory, or public sector -- and these concepts resonate with me, how do I get started? If I'm in a standards body of some sort, where do I go to say, "What's the partnership potential for me?"

Lawson: The first thing you can do is call HP and take a look at what we have done in our Galway Center of Expertise around traceability -- track and trace -- and we would be happy to show you that. You can take a look under the covers and see how applicable it is to your situation.

Gardner: Very good. We've been taking a look at how the new productivity levels can be exploited vis-à-vis cloud computing -- not just at the technological level, but at the process level of finding partnerships and standards and approaches that pull together ecosystems of business, potentially across business and the public sector.

Helping us to understand better the potential for cloud computing as a business tool, and how HP and, most recently, GS1 Canada have pulled together a food recall platform based on the HP Cloud Product Recall Platform, we have been joined by Mick Keyes. He is a senior architect in the HP Office of the Chief Technology Officer. Thank you, Mick.

Keyes: Thank you.

Gardner: We've also been joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Thanks, Rebecca.

Lawson: Thank you very much.

Gardner: And also, Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. Thank you so much, Chris.

Coughlan: Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, September 30, 2009

Doing Nothing Can Be Costliest IT Course When Legacy Systems and Applications Are Involved

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation.

Listen to podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the high, and sometimes underappreciated, cost for many enterprises of doing nothing about aging, monolithic applications. Not making a choice about legacy mainframe and poorly used applications is, in effect, making a choice not to transform and modernize the applications and their supporting systems.

Not doing anything is a choice to embrace an ongoing cost structure that may well prevent significant new spending for IT innovations. It’s a choice to suspend applications on, perhaps, ossified platforms and make their reuse and integration difficult, complex, and costly.

Doing nothing is a choice that, in a recession, hurts companies in multiple ways, because successful transformation is the lifeblood of near and long-term productivity improvements.

Here to help us better understand the perils of continuing to do nothing about aging legacy and mainframe applications, we’re joined by four IT transformation experts from Hewlett-Packard (HP). Please join me in welcoming our guests. First, Brad Hipps, product marketer for Application Lifecycle Management (ALM) and Applications Portfolio Software at HP. Welcome, Brad.

Brad Hipps: Thank you.

Gardner: Also, John Pickett from Enterprise Storage and Server Marketing at HP. Hello, John.

John Pickett: Hi. Welcome.

Gardner: Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana.

Gardner: And, Steve Woods, application transformation analyst and distinguished software engineer at EDS, now called HP Enterprise Services. Good to have you with us, Steve.

Steve Woods: Thank you, Dana.

Gardner: Let me start off by going to Paul. The recession has had a number of effects on people, as well as budgets, but I wonder what effect, in particular, the tight cost structures have had on this notion of tolerating mainframe and legacy applications?

Cost hasn't changed

Evans: Dana, what we're seeing is that the cost of legacy systems and the cost of supporting the mainframe haven't changed in 12 months. What has changed is the available cash that companies have to spend on IT, as, over time, that cash amount has either been frozen or reduced. That puts even more pressure on the IT department and the CIO in how to spend that money, where to spend that money, and how to ensure alignment between what the business wants to do and where the technology needs to go.

Given the fact that we knew already that only about 10 percent of an IT budget was spent on innovation before, the problem is that that portion becomes squeezed further and further. Our concern is that there is a cost of doing nothing. People eventually end up spending their whole IT budgets on maintenance and upgrades and virtually nothing on innovation.

At a time when competitiveness is needed more than it was a year ago, there has to be a shift in the way we spend our IT dollars and where we spend our IT dollars. That means looking at the legacy software environments and the underpinning infrastructure. It’s absolutely a necessity.

Gardner: So, clearly, there is a shift in the economic impetus. I want to go to Steve Woods. As an analyst looking at these issues, what’s changed technically in terms of reducing something that may have been a hurdle to overcome for application transformation?

Woods: For years, the biggest hurdle was that most customers would say they didn’t really have to make a decision, because the performance wasn’t there. The performance reliability wasn't there. That is there now. There is really no excuse not to move because of performance reliability issues.

What is changing today is the ability to look at a legacy application's source code. We have the tools now to look at the code and visualize it in ways that are very compelling. That's typically one of the biggest obstacles. If you look at a legacy application and the number of lines of code and the number of people who are maintaining it, it's usually obvious that large portions of the application haven't really changed much. There's a lot of library code and that sort of thing.

That's really important. We've been straight with our customers that we have the ability to help them understand a large terrain of code that they might be afraid to move forward with. Maybe they simply don't understand it. Maybe the people who originally developed it have moved on, and, because nobody really maintains it, they're afraid to go into those areas of the system.

Also, what has changed is the growth of architectural components, such as extract, transform, and load (ETL) tools, data integration tools, and reporting tools. When we look at a large body of, say, 10 million lines of COBOL and we find that three million lines of that code are doing reporting, or maybe two million are doing ETL work, we typically suggest they move that asymmetrically to a new platform that does not use handwritten code.

That’s really risk aversion -- doing it very incrementally with low intrusion, and that’s also where the best return on investment (ROI) picture can be portrayed. You can incrementally get your ROI, as you move the reports and the data transformation jobs over to the new platform. So, that’s really what’s changed. These tools have matured so that we have the performance and we also have the tools to help them understand their legacy systems today.

Gardner: Now, one area where economics and technology come together quite well is the hardware. Let's go to John with regards to virtualization and reducing the cost of storage. How has that changed the penalty for doing nothing?

Functionality gap

Pickett: Typically, when we take a look at the high end of applications that are going to be moving over and sitting on a legacy system, many times they're sitting on a mainframe platform. With that, one of the things that has changed over the last several years is the functionality gap between open systems and what existed on the mainframe 5 or 10 years ago. That gap has not only been closed but, in some cases, open systems now exceed what's available on the mainframe.

So, just from a functionality standpoint, there is certainly plenty of capability there. But, to hit on the cost of doing nothing, staying with what you currently have today means not only the high cost of the platform. As a matter of fact, one of our customers who had moved from a high-end mainframe environment onto an Integrity Superdome calculated that if you were to take their cost savings and apply them to playing golf at one of the premier golf places in the world, Pebble Beach, you could golf every day with three friends for 42 years, 10 months, and a couple of days.

It’s not only a matter of cost, but it’s also factoring in the power and cooling as well. Certainly, what we’ve seen is that the cost savings that can be applied on the infrastructure side are then applied back into modernizing the application.

Gardner: I suppose the true cost benefits wouldn’t be realized until after some sort of a transformation. Back to Paul Evans. Are there any indications from folks who have done this transformation as to how substantial their savings can be?

Evans: There are many documented cases that HP can provide, and, I think, other vendors can provide as well. In terms of looking at applications and the underpinning infrastructure, as John was talking about, those cases point people toward the real cost savings to be made here.

There's also a flip side to this. Some research that McKinsey did earlier in the year took a sample of 100 companies as they went into the recession. They were brand leadership companies. Coming out of the recession, only 60 of those companies were still in a leadership position. Forty percent of those companies just dropped by the wayside. It doesn’t mean they went out of business. Some did. Some got acquired, but others just lost their brand leadership.

That is a huge price to pay. Now, not all of that has to do with application transformation, but we firmly believe that it is so pivotal to improve services and revenue generation opportunities that, in tough times, need to be stronger and stronger.

What we would say to organizations is, "Take a hard look at this, because doing nothing could be absolutely the wrong thing to do. Having a competitive differentiation that you continue to exploit and continue to provide customers with improving level of service is to keep those customers at a tough time, which means they’ll be your customers when you come out of the recession."

Gardner: Let's go to Brad. I'm also curious, on a strategic level, about flexibility and agility. Are there prices to be paid that we should be considering in terms of lock-in, fragility, or applications that don't easily lend themselves to a wider process?

'Agility' an overused term

Hipps: This term "agility" is the right term to use, but it gets used so often that people tend to forget what it means. The reality of today’s modern organization -- and this is contrasted even from 5, certainly 10 years ago -- is that when we look at applications, they are everywhere. There has been an application explosion.

When I started in the applications business, we were working on a handful of applications that organizations had. That was the extent of the application in the business. It was one part of it, but it was not total. Now, in every modern enterprise, applications really are total -- big, small, medium size. They are all over the place.

When we start talking about application transformation and we assign that trend to agility, what we’re acknowledging is that for the business to make any change today in the way it does business, in any new market initiative, in any competitive threat it wants to respond to, there is going to be an application -- very likely "applications," plural, that are going to need to be either built or changed to support whatever that new initiative is.

The fact of the matter is that changing or creating the applications to support the business initiative becomes the long pole to realizing whatever it is that initiative is. If that’s the case, you begin to say, "Great. What are the things that I can do to shrink that time or shrink that pole that stands between me and getting this initiative realized in the market space?”

From an application transformation perspective, we then take that as a context for everything that’s motivating a business with regard to its application. The decisions that you're going to make to transform your applications should all be pointed at and informed by shrinking the amount of time that takes you to turn around and realize some business initiative.

So, in 500 words or less, that's what we’re seeking with agility. Following pretty closely behind that, you can begin to see why there is a promise in cloud. It saves me a lot of infrastructural headaches. It’s supposed to obviate a lot of the challenges that I have around just standing up the application and getting it ready, let alone having to build the application itself. So I think that is the view of transformation in terms of agility and why we’re seeing things like cloud. These other things really start to point the direction to greater agility.

Gardner: It sounds as if there is a penalty to be paid or a risk to be incurred by being locked into the past.

Hipps: That's right, and you then take the reverse of that. You say, "Fine. If I want to keep doing things as is, that means that every day or every month that goes by, I add another application, or I make bigger my current application pool using older technologies that I know take me longer to make changes in."

In the most dramatic terms, it only gets worse the longer I wait. That pool of dated technology only gets bigger and bigger the more changes that I have coming in and the more changes that I'm trying to do. It's almost as though I've got this ball and chain attached to my ankle, and I'm just letting that ball get bigger and bigger. There is a very real agility cost, even setting aside what your competition may be doing.

Gardner: So, the inevitability of transformation goes from a long horizon to a much nearer and dearer issue. Let’s go back to Steve Woods of EDS. What are some misconceptions about starting on this journey? Is this really something that’s going to highly disrupt an organization or are there steps to do it incrementally? What might hold people back that shouldn't?

More than one path

Woods: I think probably one of the biggest misconceptions comes when somebody has a large legacy application written in a second-generation language, such as COBOL or perhaps PL/1, and they look at the code and imagine a future that still has handwritten code. They imagine maybe it'll be in Java or C# or .NET, but they don't take the next step and ask, "If I had to look at the system and rebuild it today, would I do it the same way?" That's what you are doing if you just imagine one path to modernization.

Some of the code they have in their business logic might find its way into some classes in Java and some classes in .NET. What we prefer to do is a functional breakdown of what the code is actually doing and then try to imagine what options we have going forward. Some of that will become handwritten code, and some of it will move to those sorts of implementations.

So, we really like to look at what the code is doing and imagine other areas that we could possibly implement those changes in. If we do that, then we have a much better approach to moving them. The worse thing to do -- and a lot of customers have this impression -- is to automatically translate the code from COBOL into Java.

Java and C# are very efficient languages for generating a function point, which is a measure of functionality. Java takes about eight or ten lines of code. In COBOL, it takes about 100 lines.

Typically, when you translate automatically from COBOL to Java, you still get pretty much the same amount of code. In actuality, you're taking the maintenance headache and making it even larger by doing this automated translation. So, we prefer to take a much more thoughtful approach, look at what the options are, and put together an incremental modernization strategy.
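
To put rough numbers on that point, here is a back-of-the-envelope sketch in Python using the ratios cited above -- roughly 100 lines of COBOL versus 8-10 lines of Java per function point. The constants and helper function are illustrative assumptions, not measurements from any particular project.

# Illustrative sizing only, using the rough ratios cited in the discussion.
COBOL_LINES_PER_FP = 100   # ~100 lines of COBOL per function point
JAVA_LINES_PER_FP = 9      # ~8-10 lines of Java or C# per function point

def estimate_rewrite(cobol_lines):
    # Compare a functional rewrite against a line-for-line automated translation.
    function_points = cobol_lines / COBOL_LINES_PER_FP
    return {
        "function_points": round(function_points),
        "rewritten_java_lines": round(function_points * JAVA_LINES_PER_FP),
        # Automated translation tends to preserve the original bulk of the code.
        "auto_translated_java_lines": cobol_lines,
    }

print(estimate_rewrite(1_000_000))
# A million-line COBOL system works out to roughly 10,000 function points --
# on the order of 90,000 lines if rewritten, versus a million translated lines.

The order-of-magnitude gap is the point: a straight translation keeps the original maintenance burden, while a functional rebuild targets something closer to a tenth of it.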

Gardner: Paul Evans, this really isn’t so much pulling the plug on the mainframe, which may give people some shivers. They might not know what to expect over a period of decades or what might happen when they pull the plug.

Evans: We don't profess that people unplug mainframes. If they want to, they may plug in an HP system in its place. We’d love them to. But, being very pragmatic, which is what we like to be, it's looking at what Steve was talking about. It’s looking at the code, at what you want to do from a business process standpoint, and looking at the underlying platform.

It's understanding what quality of service you need to deliver and then understanding the options available. Even in base technologies like microprocessors, the power that can be delivered these days means we can do all sorts of things at prices, speeds, sizes, power outputs, and CO2 emission levels that we could only dream of a few years ago. This power enables us to do all sorts of things.

The days when there was this walled-off area in a data center, which no other technology could match, are long gone. Now, the emphasis has been on consolidation and virtualization. There is also a big focus on legacy modernization. CIOs and IT directors, or whatever they might be, do understand there's an awful lot of money spent on maintaining, as Steve said, handwritten legacy code that today runs the organization and needs to continue to provide these business processes.

Bite-size chunks

There are far faster, cheaper, and better ways to do that, but it has to be something that is planned for. It has to be something that is executed flawlessly. There's a long-term view, but you take bite-sized chunks out of it along the way, so that you get the results you need. You can feed those good results back into the system and then you get an upward spiral of people seeing what is truly possible with today’s technologies.

Gardner: John Pickett, are there any other misconceptions or perhaps under-appreciated points of information from the enterprise storage and server perspective?

Pickett: Typically, when we see a legacy system, what we hear, in a marketing sense, is that the high-end mainframe -- and I'll just use that as an example -- could be used as a consolidation platform. What we find is that if you're going to be moving or modernizing applications onto an open-system environment, to take advantage of the full gamut of tools and open-system applications that are out there, you're not going to be doing that on a legacy environment. We see that the more efficient way of going down that path is onto an open-standard server platform.

Also, some of the other misconceptions that we see, again in a marketing sense, are that a mainframe is very efficient. However, if you compare that to a high-end HP system, for example, and just take a look at the heat output -- which we know is very important -- there is more heat. The difference in heat between a mainframe and an Integrity Superdome, for example, is enough to power a two-burner gas grill, a Weber grill. So, there's some significant heat there.

On the energy side, we see that the Superdome consumes 42 percent less energy. So, it's a very efficient way of handling the operating-system environment when you do modernize these applications.

Gardner: Brad Hipps, when we talk about modernizing, we’re not just modernizing applications. It’s really modernizing the architecture. What benefits, perhaps underappreciated ones, come with that move?

Hipps: I tend to think that application transformation, in most of its forms, is about breaking up and distributing that which was previously self-contained and closed.

Whether you're looking at moving from mainframe processing to distributed processing, or from distributed processing to virtualization; whether you're talking about the application teams themselves, which are now some combination of in-house, near-shore, offshore, and outsourced -- a distribution of the teams from a single building to all around the world; or whether you're talking about the architectures themselves, which have gone from monolithic, fairly brittle things to services-driven things.

You can look at any one of those trends and you can begin to speak about benefits, whether it’s leveraging a better global cost basis or on the architectural side, the fundamental element we’re trying to do is to say, "Let’s move away from a world in which everything is handcrafted."

Assembly-line model

Let's get much closer to the assembly-line model, where I have a series of preexisting, trustworthy components, I know where they are, I know what they do, and my work now becomes really a matter of assembling those. They can take any variety of shapes based on my need, because of the components I have created.

We're getting back to this idea of lower cost and increased agility. We can only imagine how certain car manufacturers would be doing if they were handcrafting every car. We moved to the assembly line for a reason, and software typically has lagged what we see in other engineering disciplines. Here, we're finally going to catch up. We're finally going to recognize that we can take an assembly-line approach to the creation of applications as well, with all the intended benefits.

Gardner: And, when you standardize the architecture, instead of having to make sure there is a skillset located where the systems are, you can perhaps bring the systems to where the different skills are?

Hipps: That’s right. You can begin to divorce your resources from the asset that they are creating, and that’s another huge thing that we see. And, it's true, whether you're talking about a service or a component of an application or whether you're talking about a test asset. Whatever the case may be, we can envision a series of assets that make an application successful. Now, those can be distributed and geographically divorced from the owners.

Gardner: Where this has been a "nice to have" or "something on the back-burner" activity, we're starting to see a top priority emerge. I've heard of some new Forrester research that has shown that legacy transformation is becoming the number-one priority. Paul, can you offer some more insight on that?

Evans: That’s research that we're seeing as well, Dana, and I don’t know why. ... The point is that this may not be what organizations "want" to do.

They turn to the CIO and say, "If we give you $10 million, what is it that you'd really like to do?" What they're actually saying is this is what they know they've got to do. So, there is a difference between what they'd like to do and what they've got to do.

That goes back to when we started in the current economic situation. The pressure it's bringing to bear on people is that the time is up when people just continue to spend their dollars on maintaining the applications, as Steve and Brad talked about, and the infrastructure that John talked about. They can't just continue to pour money into that.

There has to be a bright point. Someone has got to say, "Stop. This is crazy. There are better ways to do this." What the Forrester research is pointing to is that if you go around to a worldwide audience and talk to a thousand people in influential positions, they're now saying, "This is what we 'have' to do, not what we 'want' to do. We're going to do this, we're going to take time out, and we're going to do it properly. We're going to take cost out of what we are doing today, and it's not going to come back."

Flipping the ratio

Think of all the things that Steve and Brad have talked about in terms of handwritten code -- code that is too big for what it needs to be to get the job done. Once we have removed that handwritten code, it's out and it's finished with, and then we can start looking at economics that are totally different going forward, where we can actually flip this ratio.

Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We're not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.

Hipps: I've got just one thing to add to that, in terms of the aura of inevitability that comes with the transformation. When you look at IT over the last 30 years, you can see that, fairly consistently -- pick your time frame -- somewhere in the neighborhood of every seven to nine years, there has been an equivalent wave of modernization. The last major one we went through was the late '90s and early 2000s, with the combination of Y2K and Web 1.0. So, sure enough, here we are, right on time with the next wave.

What's interesting is that this now number-one priority hasn't reached the stage of inevitability. I look back and think about what organizations in 2003 were still saying: "No, I refuse the web. I refuse the networked world. It's not going to happen. It's a passing fancy," and whatever the case may be. Inasmuch as there were organizations doing that, I suspect they're not around anymore, or they're much smaller than they were. I do think that's where we are now.

Cloud is reasonably new, but outsourcing is another component of transformation that has been around long enough that most people have been able to look it square in the eye and figure out, "You know what? There is real benefit here. Yes, there are some things I need to do on my side to realize that benefit. There is no such thing as a free lunch, but there is a real benefit here, and I am going to suffer, if not next year, then three years from now, if I don't start getting my act together now."

Gardner: John Pickett, are there any messages from the boosters of mainframes that perhaps are no longer factors or are even misleading?

Pickett: There are certainly a couple of those. In the past, the mainframe was thought to be the standard-bearer of RAS -- reliability, availability, and serviceability. Many of those features exist on open systems today. It's not something dedicated just to the high end of the mainframe environment. They're out there on open-system platforms that are significantly cheaper. In many cases, the RAS of these systems far exceeds what we'll see on the mainframe.

That's just one piece. Other capabilities that you typically saw only on the mainframe side -- such as being able to drive toward a business-based objective, or to prioritize resources for different applications or different groups of users -- have existed for a number of years on the open-system side, along with things such as backup and recovery and being able to provide very high levels of disaster recovery.

Misleading misconception

The misconception that this is something that can only be done in a mainframe environment is not only misleading; it also keeps organizations from making the move to an open-system platform, and it continues to drive IT budget unnecessarily into infrastructure -- budget that could instead be applied to the application modernization we've been talking about here, or to the skills and people resources within the data center.

Gardner: We seem to have a firm handle on the cost benefits over time. Certainly, we have a total cost picture, comparing older systems to the newer systems. Are there more qualitative, or what we might call "soft benefits," in terms of the competitiveness of an organization? Do we have any examples of that?

Evans: What we have to think about is the target audience out there. More and more people have access to technology. We have a generation coming up now that wants it now, wants it off the Web, and is used to social networking tools. So, it's one of the soft, squidgy areas as people go through this transformation.

I don't know that we can put hard dollars -- or pounds or euros -- against this for the moment: the inclusion of Web 2.0 or Enterprise 2.0 capabilities into applications. We have customers who are now trying that, some of it inside the firewall and some of it beyond. One, this can provide a much richer experience for the user. Secondly, you begin to address an audience that is used to these things in their day-to-day life anyway.

Why, when they step into the world of the enterprise, do they have to step back 50 years in terms of capability? You just can't imagine certain things that people require being done in batch mode anymore. The real-time enterprise is what people now expect and want.

So, as people go through this transformation, not only can they do all the plethora of things we've talked about in terms of handwritten code, mainframes, structure, and service-oriented architecture (SOA), but they can also start taking steps toward getting these applications in line and embedding them within that culture.

If they start to take on board some of the newer concepts around cloud and experiment with them, they have to understand that people aren't going to just make a big leap of faith. At the end of the day, it's enterprise apps. We make things, apply things, and count things -- and people have got to continue to do that. At the same time, they need to take pragmatic steps to introduce these newer technologies that really can help them not only retain their current customer base, but attract new customers as well.

Gardner: Paul, when organizations go through this transformation, modernize, and go to open systems, does that translate into some sort of a business benefit, in terms of making that business itself more agile, maybe in a mergers and acquisition sense? Would somebody resist buying a company because they've got a big mainframe as an albatross around its neck?

Fit for purpose

Evans: Definitely. Having your IT fit for purpose is part of the inherent health of the organization. For organizations whose IT is way behind where it needs to be today, it's definitely part of the health check.

To some degree, if you don't want to get taken over or merged or acquired, maybe you just let your IT sag where it is today, with mainframes and legacy apps, and nobody would want you. But then, you're back to where we were earlier. You become one of those 40 percent of companies that disappear off the face of the planet. So, it's a sort of double-edged sword: you make yourself attractive, and you could get merged or acquired; you don't do it, and you're going to go out of business. I still think I prefer the former to the latter.

Gardner: Let's talk more specifically about what HP is bringing to the table. We've fleshed out this issue quite a bit. Is there a long history at HP of modernization?

Evans: There are two things. There is what we have done internally within the company. We've had to sort of eat our own dog food, in the sense that there are companies that were merged and companies that were acquired -- HP, Compaq, Digital, EDS, whatever.

It's just not acceptable anymore to run these as totally separate IT organizations. You have to quickly understand how to get this to be an integrated enterprise. It's been well documented what we have done internally, in terms of taking a massive amount of cash out of our IT operations and yet, at the same time, innovating and providing a better service, while reducing our applications portfolio from something like 15,000 to 3,000.

So, all of these things were going on at the same time, and that has been achieved within HP. Now, you could argue that we don't have mainframes, so maybe it's easier. Maybe that's true, but, at the same time, modernization has been growing, and now we're right up there in the forefront of what organizations need to do to make themselves cost-effective, agile, and flexible going forward.

Gardner: John Pickett, what about the issue around standards, neutrality, embracing heterogeneity, community and open source? Are these issues that HP has found some benefits from?

Pickett: Without a doubt. When you take a look at the history of what we've been able to do, migrating legacy applications onto an open system platform, we actually have a long history of that. We continue to not only see success, but we’re seeing acceleration in those areas.

A couple of drivers that we ended up seeing are really making the case for customers -- not only the significant cost savings that we talked about earlier. We're talking 50 percent to 70 percent total cost of ownership (TCO) savings moving from a legacy mainframe environment over to an HP environment.

Additional savings

In addition to that, you also have the power savings. Simply by moving, the amount of energy saved is enough to light 80 houses for one year. We've already talked about the heat and the space savings. For a similar system from HP with similar capabilities, it's about a third of what you're going to see in a high-end mainframe environment.

Why that's important is that if customers are running out of data-center room and they're looking at increasing their compute capacity, but they don't have room within their data center, it just makes sense to go with a more efficient, more densely packed system, with less heat and power draw than what you'll see in a legacy environment.

Gardner: Brad Hipps, on this issue of being able to sell from a fairly neutral perspective, based on a solution's value, does that bring something to the table?

Hipps: We alluded earlier to the issue of lock-in. If we're going to fly, as we do, under the banner of bringing flexibility and agility to an organization, it's tough to wave that banner without being pretty open about who you're going to play with and where.

Organizations have a very fine eye for what this is going to mean for them, not just six months from now, but two years from now, and what it's going to mean for their successors in the organization. They don't want to be painted into a corner. That's something HP is very cognizant of and has been very good about.

This may be a little bit overly optimistic, but you have to be able to check that box. If you’re going to make a credible argument to any enterprise IT organization, you have to show your openness and you have to check the box that says we’re not going to paint you into a corner.

Gardner: Steve Woods, for those folks who need to get going on this, where do you get started? We mentioned that iterative nature, but there must be perhaps low-hanging fruit, demonstrations of value that then set up a longer record of success.

Woods: Absolutely. What we find with our customers is that there are various levels of maturity in the process of understanding their legacy systems. Often, we find some of them are quite mature and have gone down the road quite a bit. We offer assessments based on single applications and also on a portfolio of applications. We have a modernization assessment and a portfolio assessment. We also offer a best-shore assessment to ensure that you are using the correct resources.

Often, we find that we walk in, and the customers just don't know anything about what their options are. They haven't done any sort of analysis thus far. In those cases, we offer what we're calling a Modernization Opportunity Workshop.

It's very quick -- usually a 4-8 hour, on-site engagement -- and it takes about four weeks to deliver the entire package. We use some tools that I created at HP that look at the clone code within the application. It's very important to understand the patterns of the clone code and to have visualizations. We have visual intelligence tools that very quickly allow us to see inside the system, see the duplicate source code, and provide them with high-level cost estimates.
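To make the clone-code idea concrete, here is a minimal sketch -- not the HP visual intelligence tooling, just an illustrative approach -- that hashes sliding windows of normalized source lines and flags any window appearing in more than one place as a candidate clone. The file names are hypothetical.

```python
# Minimal clone-detection sketch (hypothetical; not the HP visual intelligence tools).
# It hashes sliding windows of normalized lines to flag likely duplicated code regions.
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6  # number of consecutive normalized lines treated as one candidate clone

def normalized_lines(path):
    """Strip whitespace and drop blank lines so formatting differences don't hide clones."""
    for raw in Path(path).read_text(errors="ignore").splitlines():
        line = raw.strip()
        if line:
            yield line

def clone_windows(paths, window=WINDOW):
    """Map a hash of each window of lines to every (file, line-offset) where it occurs."""
    index = defaultdict(list)
    for path in paths:
        lines = list(normalized_lines(path))
        for i in range(len(lines) - window + 1):
            digest = hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
            index[digest].append((path, i))
    # Keep only windows that appear in more than one place -- candidate clones.
    return {h: locs for h, locs in index.items() if len(locs) > 1}

if __name__ == "__main__":
    clones = clone_windows(["billing.cbl", "invoicing.cbl"])  # hypothetical file names
    duplicated = sum(len(locs) for locs in clones.values())
    print(f"{len(clones)} duplicated windows, {duplicated} total occurrences")
```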

We use a tool called COCOMO, and we use Monte Carlo simulation. We're able very quickly to give them a pretty high-level, 30-page report that indicates the size. Often, size is something that is completely misunderstood. We've been into customers who tell us they have four million lines of code, and we actually count only 400,000 lines. So, it's important to start with a stake in the ground and understand exactly where you are with the size.
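For a sense of how COCOMO and Monte Carlo simulation combine into a high-level estimate, here is a minimal sketch. It uses the published Basic COCOMO "organic mode" coefficients (effort of roughly 2.4 x KLOC^1.05 person-months) and treats the measured size as uncertain; the +/-30 percent spread and the 400 KLOC figure are illustrative assumptions, not HP's methodology.

```python
# Sketch of a COCOMO-based size/effort estimate with Monte Carlo uncertainty.
# Coefficients are the published Basic COCOMO "organic mode" values; the assumed
# +/-30% uncertainty on the measured size is illustrative, not HP's methodology.
import random
import statistics

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO organic-mode effort in person-months."""
    return a * (kloc ** b)

def simulate_effort(measured_kloc, spread=0.30, trials=10_000, seed=42):
    """Sample plausible code sizes and return the distribution of effort estimates."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        kloc = measured_kloc * rng.uniform(1.0 - spread, 1.0 + spread)
        samples.append(cocomo_effort(kloc))
    return samples

if __name__ == "__main__":
    efforts = simulate_effort(measured_kloc=400)  # e.g., the 400,000 lines counted above
    efforts.sort()
    p10, p50, p90 = (efforts[int(len(efforts) * q)] for q in (0.10, 0.50, 0.90))
    print(f"Effort (person-months): P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")
    print(f"Std dev: {statistics.stdev(efforts):,.0f}")
```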

We also do functionality composition support to help understand that. It's all delivered with very little impact. We know the subject-matter experts are very busy, and we try to lessen the impact on them. That's one of the places we can start, when the customer has some uncertainty and isn't even sure where to begin.

Gardner: We’ve been discussing the high penalties that can come with inaction around applications and legacy systems. We’ve been talking about how that factors into the economy and the technological shifts around the open systems and other choices that offer a path to agility and multiple-sourcing options.

I want to thank our panelists today for our discussion about the high costs and risks inherent in doing nothing around legacy systems. We’ve been joined by Brad Hipps, product marketer for Application Lifecycle Management and Applications Portfolio Software at HP. Thank you Brad.

Hipps: Thank you.

Gardner: John Pickett, Enterprise Storage and Server Marketing at HP. Thank you, John.

Pickett: Thank you Dana.

Gardner: Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Thank you, Paul.

Evans: Thanks Dana.

Gardner: And Steve Woods, applications transformation analyst and distinguished software engineer at EDS. Thank you Steve.

Woods: Thank you Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, September 21, 2009

Part 1 of 4: Web Data Services Extend Business Intelligence Depth and Breadth Across Social, Mobile, Web Domains

Transcript of first in a series of sponsored BriefingsDirect podcasts with Kapow Technologies on Web Data Services and how harnessing the explosion of Web-based information inside and outside the enterprise buttresses the value and power of business intelligence.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the future of business intelligence (BI) -- on bringing more information from more sources into an analytic process, and thereby getting more actionable intelligence out.

The explosion of information from across the Web, from mobile devices, inside of social networks, and from the extended business processes that organizations are now employing all provide an opportunity, but they also provide a challenge.

This information can play a critical role in allowing organizations to gather and refine analytics into new market strategies, better buying decisions, and to be the first into new business development opportunities. The challenge is in getting at these Web data services and bringing them into play with existing BI tools and traditional data sets.

This is the first in a series of podcasts, looking at the future of BI and how Web data services can be brought to bear on better business outcomes.

So, what are Web data services and how can they be acquired? Furthermore, what is the future of BI when these extended data sources are made into strong components of the forecasts and analytics that enterprises need to survive the recession and also to best exploit the growth that follows?

Here to help us explain the benefits of Web data services and BI is Howard Dresner, president and founder of Dresner Advisory Services. Welcome to the show, Howard.

Howard Dresner: Thanks, Dana. It's great to be here today.

Gardner: We're also joined by Ron Yu, vice president of marketing at Kapow Technologies. Thanks for joining, Ron.

Ron Yu: Hi, Dana. Great to be with you today.

Gardner: Howard, let me start with you. We've certainly heard a lot about BI over the past several years. There's a very strong trend and lots of investments are being made. How does this, in fact, help companies during the downturn that we are unfortunately still in and then prepare for an upside?

Empowering end users

Dresner: BI is really about empowering end users, as well as their respective organizations, with insight, the ability to develop perspective. In a downturn, what better time is there to have some understanding of some of the forces that are driving the business?

Of course, it's always useful to have the benefit of insight and perspective, even in good times. But, it tends to go from being more outward-focused during good times, focused on markets and acquiring customers and so forth, to being more introspective or internally focused during the bad times, understanding efficiencies and how one can be more productive.

So, BI always has merit and in a downturn it's even more relevant, because we are really less tolerant of being able to make mistakes. We have to execute with even greater precision, and that's really what BI helps us do.

Gardner: Well, if we're looking either internally at our situation or externally at our opportunities, the more information we have at our disposal the stronger our analytical return.

Dresner: Certainly, one would hope so. If you're trying to develop perspective, bringing as much relevant data or information to bear is a valuable thing to do. A lot of organizations focus just on lots of information. I think that you need to focus on the right information to help the organization and individuals carry out the mission of that organization.

Gardner: And that crucial definition of "right information" has changed or is a moving target. How do you keep track of what's the right stuff?

Dresner: It is a moving target, because the world continues to evolve. There are lots of information sources. When I first started covering this beat 20 years ago, the available information was largely just internal stores, corporate stores, or databases of information. Now, a lot of the information that ought to be used, and in many cases, is being used, is not just internal information, but is external as well.

There are syndicated sources, but also the entire World Wide Web, where we can learn about our customers and our competitors, as well as a whole host of sources that ought to be considered, if we want to be effective in pursuing new markets or even serving our existing customers.

Gardner: Ron Yu, we've certainly seen an increase in business processes that are now developed from components beyond just a packaged application set. We've seen a mixture of Web, mobile, and other end points being brought to bear on how people interact with their businesses and these processes.

Give me a sense of the extended scope of BI. How do we get at what is now part and parcel of the extended enterprise?

The right data

Yu: I fully agree with Howard. It's all about the right data and, given the current global and market conditions, enterprises have cut really deep -- from the line of business, but also into the IT organizations. However, they're still challenged with ways to drive more efficiencies, while also trying to innovate.

The challenges being presented are monumental. Traditional BI methods and tools provide powerful analytical capabilities, but, at the same time, they're increasingly constrained by limited access to relevant data and by the difficulty of getting timely access to that data.

What we see are pockets of departmental use cases, where marketing departments and product managers are starting to look outside in public data sources to bring in valuable information, so they can find out how the products and services are doing in the market.

Gardner: Howard, we began this discussion with a lofty goal of defining the future of BI. I wonder if you think that the innovation to come from BI activities is a function of the analytics engine or the tools, or is it a function of getting at more, but relevant, information and bringing that to bear.

Dresner: It's an interesting question. One of the things that I focus upon in my second book, which is about to be published next month, is performance-directed culture and the underpinning or the substrate of a performance-directed culture. I won't go into great detail right now, but it has to do with common trust in the information and the availability and currency of the information, as a way to help the organization align with the mission.

The future of BI is not just about the tools and technology. It's great to have tools and technology. I certainly am a fan of technology, being somewhat of a gadget fiend, but that's not going to solve your organization's problems and it's not going to help them align with the mission.

What is going to help them align with the mission is making sure that they have timely, relevant, and complete information, as well as the proper culture to help them support the mission of the enterprise.

Having all the gadgetry is great. Certainly, making the tools more intuitive is a useful and worthwhile thing to do, but it's only as good as the underlying content and insight to support those end users. The future is about focusing on the information and those insights that can empower the individuals, their respective departments, and the enterprise to stay aligned with the mission of that organization.

Other trends afoot

Gardner: The trend and interest in BI is not isolated. There are other complementary, or at least coincidental, mega-trends afoot. One of them, from my perspective, is this whole notion of community, rather than just company, individual, or monolithic thinking. We are expanding into ecosystems.

Cloud computing is becoming a popular notion nowadays. People are thinking about how to cross organizational boundaries and how to access resources -- perhaps faster, better, and cheaper -- from across those boundaries.

This also brings the opportunity to start melding, mashing up, and comparing and contrasting data sets across these organizational boundaries. Is there a mega-trend here, Howard, from your perspective? Do we need to start thinking about BI as a function that joins data sets?

Dresner: I fall back on Tom Malone's work, The Future of Work, his book from 2004, where he talks about organizations. Because of the reduced cost of communications, organizations will start to move, and are moving, towards looser bonds, democratized structures, and even market-based structures -- and he cites a number of examples in his book.

The way that you hold together an organization, this loosely bound organization, is through the notion of BI and performance management, which means we certainly have to compare, I wouldn't say data per se, but certainly various measures. We have to share data. We have to combine data and exchange data to get the job done -- whatever that job is. As needs be, we can break those bonds and form new bonds to get the job done.

This doesn’t mean that the future of business is a bunch of small micro-organizations coming together. It really applies to any organization that wants to be agile and entrepreneurial in nature. The underlying foundation of that has to be data and BI in order to function.

Gardner: So, it's about how these organizations relate to one another. Ron, from your perspective, what are some of the essential problems that need to be solved in allowing companies to better understand themselves, and then to have this permeability with other players at the process level and at the content, data, and BI level?

Yu: The term I'd like to use is really about inclusive BI. Inclusive BI essentially includes new and external data sources for departmental applications, but that's only the beginning. Inclusive BI is a completely new mindset. For every application that IT or line of business develops, it just creates another data silo and another information silo. You have another place that information is disconnected from others.

Critical decision-making requires, as Howard was saying earlier, that all business information be easily leveraged whenever it's needed. But today, each application is separate and not joined. This makes decision-making very difficult for the line of business, and it's not in real time.

An easier way

As this dynamic business environment continues to grow, it's completely infeasible for IT to keep updating existing data warehouses or building new data marts. That can't be the solution. There has to be an easier way to access and extract data exactly where it resides, without having to move data back and forth among databases, data marts, and data warehouses, which effectively become snapshots.

When the line of business is working with these data snapshots, by definition they're out of date. Catalytic CIOs and forward-looking information architects understand this dilemma and, given that most enterprises are already Web-enabled, they are turning to Web data services to build bridges across all these data silos.

Gardner: Another trend we mentioned, the permeability of the organization, is this involvement -- people being participants in the social networks, having a great deal of publishing going on, putting content out there that can be very valuable to a company. End users seem to want to tell companies what they want, if the companies are willing to listen. We have this opportunity now to create dialogue and conversation, rather than simply looking at the sales receipts.

Tell me how this whole social phenomenon of community and sharing fits into Web data services.

Yu: There is effectively a new class of BI applications, as we have been discussing, that depends on a completely different set of data sources. Web data services is about agile access and delivery of the right data at the right time.

With different business pressures that are surfacing everyday, this leads to a continuous need for more and more data sources. But, as Howard was talking about earlier, how do you handle all of that?

Web data services provides immediate access to and delivery of this critical data into the business user's BI environment, so that the right, timely decisions can be made. It effectively takes dashboards, reporting, and analytics to the next level for critical decision-making. When we look deeper into how this is actually playing out, it's all about early and precise predictions.

Let's talk about a few examples. Government agencies are using Web data services to combat terrorism. So, you can be certain that they have all the state-of-the-art analysis tools, spatial mapping, etc. Web data services effectively turbo-charges these analyst tools and is giving them the highest precision in their threat analysis.

These intelligence agencies have access to open-source intelligence, social networks, blogs, forums, even Twitter feeds, and can see exactly what's happening in real time. They can do this predictive analysis and are much better positioned than ever to avert horrible acts of terrorism like 9/11.

Gardner: Howard, do you think, to Ron’s point, that we need to sidestep IT and the traditional purveyors of BI? Is this something that can be done by the end users themselves?

Competency centers

Dresner: It's a very interesting question, and a provocative one too, I might add. But, sidestep IT? Not all IT organizations are inflexible. Some of them certainly are. One of the things that I have advocated for years is the notion of competency centers, certainly in larger organizations. The idea of a competency center is to get the skills in a place, where they can do the most good and where they can really focus on being expedient.

Delivering something to the end user a year after they ask for it really isn't terribly useful. You need to be as agile as possible to respond to ever-changing business needs. There are a very few businesses out there that are static, where things aren’t moving very quickly. In most organizations and most markets, things move pretty darn quickly, and you have to be able to respond to them.

If you don't respond to the users quickly, they find a way to solve their problems themselves, and that really has become an issue in many organizations. I'd like to say it's a minority, but it's not. It's a majority of them, where IT is going down a slightly different path, sometimes a dramatically different path, than the end users.

Surprisingly, there are some IT organizations that are pretty well aligned and they are responsive. So, it's not a situation where the end users need to completely discount IT, but some IT organizations have become pretty inflexible. They are focused myopically on some internal sources and are not being responsive to the end user.

You need to be careful not to suffer from what I call BI myopia, where we are focused just on our internal corporate systems or our financial systems. We need to be responsive. We need to be inclusive of information that can respond to the user's needs as quickly as possible, and sometimes the competency center is the right approach.

I've seen instances where the users do wrest control, and, in my latest book, I have four very interesting case studies. Some are focused on organizations where it was more IT-driven. In other instances, it was business operations or finance driven.

Yu: There is, in most cases, a middle ground, and IT certainly isn't looking for more things to do. To the extent that they can find new tools like Web data services to help them be more effective and more efficient, they are totally open to giving line of business self-service capabilities.

Gardner: Ron, whether it's the IT department and a fully sanctioned tool and approach that they are supporting or whether it's self-service, we can't just open up the fire hose and have all of this content dump into our business and analytics activities.

What do you bring to the table in terms of not only getting access to Web data services, but also cleansing them, vetting them, putting them in the right format, and making sure they're secure and that privacy requirements are being adhered to? What's the value-add that goes beyond access to a qualitative set of highly valued assets?

Start with the use case

Yu: Sometimes, the problem we face, when we talk about BI, is that we immediately start talking about the software, the servers, and the things that we needed to build. BI really starts with the business use case.

What is it that the line of business is trying to do, and can we develop the right facilities in order to work on that project? If those projects don't become so overbearing that you just create IT project gridlock, then I think we have something new to say.

For example, in leading financial services companies, what they're looking for is on this theme of early and precise predictions. How can you leverage information sources that are publicly available, like weather information, to be able to assess the precipitation and rainfall and even the water levels of lakes that directly contribute to hydroelectricity?

If we can gather all that information, and develop a BI system that can aggregate all this information and provide the analytical capabilities, then you can make very important decisions about trading on energy commodities and investment decisions.
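As a purely illustrative sketch of that kind of aggregation -- the regions, field names, weights, and threshold below are hypothetical assumptions, not a real data feed or trading model -- the logic might blend rainfall and reservoir-level readings into a simple hydro-supply indicator that a dashboard or analyst could act on.

```python
# Hypothetical sketch: combine public rainfall and reservoir-level readings into a
# simple hydroelectric-supply indicator. Regions, fields, weights, and the threshold
# are illustrative assumptions, not a real data feed or trading model.
from dataclasses import dataclass
from statistics import mean

@dataclass
class RegionReading:
    region: str
    rainfall_mm: float        # recent rainfall
    reservoir_pct: float      # reservoir level as a percentage of capacity

def hydro_supply_index(readings):
    """Blend rainfall and reservoir levels into a 0-100 supply score per region."""
    scores = {}
    for r in readings:
        rain_score = min(r.rainfall_mm / 50.0, 1.0) * 100      # 50 mm treated as "plenty"
        scores[r.region] = 0.4 * rain_score + 0.6 * r.reservoir_pct
    return scores

def flag_tight_supply(scores, threshold=55.0):
    """Regions below the (illustrative) threshold suggest constrained hydro output."""
    return [region for region, score in scores.items() if score < threshold]

if __name__ == "__main__":
    readings = [
        RegionReading("Pacific Northwest", rainfall_mm=12.0, reservoir_pct=48.0),
        RegionReading("Quebec", rainfall_mm=60.0, reservoir_pct=82.0),
    ]
    scores = hydro_supply_index(readings)
    print(scores)
    print("Tight supply:", flag_tight_supply(scores))
    print("Average index:", mean(scores.values()))
```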

Web data services effectively automates this access and extraction of the data and metadata and things of that nature, so that IT doesn't have to go and build a brand new separate BI system every time line of business comes up with a new business scenario.

Gardner: Again, to this notion of the fire hose, you are not just opening up the spigot. You're actually adding some value and helping people manage and control this flow, right?

Yu: Exactly. It's about the preciseness of the data source that the line of business already understands. They want to access it, because they're working with that data, they're viewing that data, and they're seeing it through their own applications every single day.

But, that data is buried deep within the application and the database, and the only way they can get at it through traditional means is by opening a new IT ticket and asking for their database, data warehouse, or application to be updated. That's very time-consuming and very expensive for everyone involved.

Gardner: To your point earlier, you end up getting a significant latency, and it's probably precisely the kind of Web services data that you want to get closer to real time in order to analyze what's going on.

Voice of the customer

Yu: That's exactly the case. The voice of the customer provides huge financial and exposure protection for product vendors. For example, if a tire manufacturer had the ability to monitor consumer sentiment, it would be able to investigate and even issue early recalls well before tragic events happen -- events that would create even larger financial losses and huge damage to the brand.
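A minimal sketch of that sort of early-warning logic might look like the following; the complaint lexicon, the product name, and the alert threshold are hypothetical stand-ins for whatever sentiment sources and rules a manufacturer would actually use.

```python
# Hypothetical sketch of consumer-sentiment monitoring for early recall signals.
# The lexicon, the product name, and the alert threshold are illustrative assumptions.
from collections import Counter

COMPLAINT_TERMS = {"blowout", "tread separation", "vibration", "failure", "accident"}

def complaint_rate(posts, product):
    """Return the fraction of posts mentioning the product that use a complaint term."""
    mentions = [p.lower() for p in posts if product.lower() in p.lower()]
    if not mentions:
        return 0.0, Counter()
    hits = Counter()
    flagged = 0
    for post in mentions:
        matched = [t for t in COMPLAINT_TERMS if t in post]
        if matched:
            flagged += 1
            hits.update(matched)
    return flagged / len(mentions), hits

if __name__ == "__main__":
    posts = [
        "Had a blowout on the highway with my RoadGrip 2000s",   # hypothetical product
        "RoadGrip 2000 tires ride great so far",
        "Another tread separation report on the RoadGrip 2000?",
    ]
    rate, terms = complaint_rate(posts, "RoadGrip 2000")
    if rate > 0.25:  # illustrative alert threshold
        print(f"ALERT: {rate:.0%} of mentions raise safety terms: {dict(terms)}")
```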

Gardner: Ron, help me understand a little bit better what Kapow Technologies brings in terms of this Web data services support. How does that also relate to a larger BI solution that incorporates Web data services?

Yu: We're going a little bit into the technical side of things now. Effectively, Kapow Web Data Server, which is our product, is a platform that provides IT and some line of business users, who actually have more of a technical aptitude, the ability to visually interact with the data sources through the Web, HTML and the Ajax front-end of an application or Web page or a Web portal.

Effectively, you visually program and give instructions through point-and-click, which gives you precise navigation through all of the forms and as deep as you want to go into that Website or Web application.

As you point and click, you can give instructions about extracting the data and even enriching the data. For example, going to LinkedIn, you see that there are certain images that are assigned to specific data. With our product, you can interpret those graphical images and give them a value.

Our product effectively gives you that precise surgical navigation and extraction of any data from exactly the application that you're working with to create an RSS feed, a REST service, or, in a case of traditional BI, even loading it directly into a SQL database with a one-button deployment.

There is no programming involved. So, you can imagine how incredibly productive this is for IT. You don't have to waste time writing SQL scripts, application programming interfaces (APIs), and things of that nature. It enables that easy access and moves on to the higher value of what IT can deliver, which is on the application and presentation side.
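The point-and-click extraction itself isn't something to reproduce in code, but the downstream step -- consuming the REST/JSON feed such an extraction publishes and landing it in a SQL table that BI tools can query -- can be sketched generically. This is not Kapow's API; the endpoint URL and field names below are hypothetical.

```python
# Generic sketch (not Kapow's API): pull a JSON/REST feed produced by a web data
# extraction and land it in a SQL table that BI tools can query. The endpoint and
# field names are hypothetical.
import json
import sqlite3
from urllib.request import urlopen

FEED_URL = "https://example.internal/feeds/competitor-offers.json"  # hypothetical

def load_feed(url=FEED_URL):
    """Fetch the extracted rows; expected shape: a list of {"hotel", "city", "rate", "date"}."""
    with urlopen(url) as resp:
        return json.load(resp)

def land_rows(rows, db_path="webdata.db"):
    """Append the extracted rows to a table that a BI or reporting tool can query."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS competitor_offers "
        "(hotel TEXT, city TEXT, rate REAL, observed_date TEXT)"
    )
    conn.executemany(
        "INSERT INTO competitor_offers VALUES (?, ?, ?, ?)",
        [(r["hotel"], r["city"], r["rate"], r["date"]) for r in rows],
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    land_rows(load_feed())
```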

Gardner: Howard, in your work with your clients and your research for your new book, did you encounter any examples that you can recall where folks have taken this to heart and moved beyond the traditional content types that BI has supported? What sort of experience, paybacks, and benefits have they enjoyed?

Not just internal sources

Dresner: The answer is yes. There are a number of good examples. Obviously, I encourage everybody to order a copy of the new book, which is out next month. But, including other sources than just internal sources gives you a better perspective. It creates a much more interesting and rich tapestry of the business and the market in which it lives.

One of the organizations I dealt with is in the hospitality business. Understanding their market, and understanding what their competition is doing and what offers they're providing, means that they have to go to those websites, as well as access some social networking sites.

They have to understand what the customer sentiment is out there and what sort of offers their competition is making on a Sunday night, for example, in order to remain competitive. You have to understand the changing trends, if you want to be a "hip hotel chain." What does that mean? What's changing socially in the particular geographies and markets you play in that you need to be aware of and respond to?

The same thing is true in other industries. Another one of the organizations I worked with is in the healthcare industry. Understanding your patient requirements is important, if you want to be a more patient-oriented organization. What are their changing needs? What are their desires? What are those things they expect from their service provider? You're not going to get that from your internal database.

Providing access to external content in conjunction with the content from your internal systems gives you a greater perspective. How many times have we heard, "Gee, if I'd only known that, I could have made a better decision or I could have framed the decision-making process more effectively?" That's really where we are in the history of BI right now.

We need to provide a better perspective -- a more complete and more timely perspective -- in order to frame the decision-making processes. Going back to my original point, and really the central point of the book: how do we get everybody in the organization aligned with the mission, to make sure that we're all fulfilling our particular role within the organization and using things like BI and the right sorts of data to achieve that purpose?

Yu: I agree, Howard, and I think that's just the tip of the iceberg. If we look at the spirit of what corporate performance management or enterprise performance management is supposed to deliver, BI systems are really dealing with operational data and financial data within the firewall. But, when you look outside the firewall -- and I'm talking about all these public data sources and even partners -- how do you collaborate better with your partners? All of these things are Web enabled.

How do you bring things together from outside the firewall and integrate them with the operational and financial data? Meeting that challenge will bring a huge payoff, once IT organizations and CIOs can leverage Web data services within the enterprise, whether it's for the next generation of BI for business-to-employee (B2E) applications, business-to-business (B2B) with their partners, or even business-to-consumer (B2C) applications.

Gardner: Ron, I wonder if you have any examples, folks that have gone out and gathered these Web data services? What sort of uses have they put them to and what paybacks have they encountered?

Partners and B2B

Yu: We've talked a lot about public Web data sources. Let's talk about partners and B2B. One Fortune 500 financial services company was required, for regulatory compliance, to report on 10,000 treasury transactions per day.

They had several analysts fully dedicated to logging in to each of their top 100 banking partners, extracting information, loading it into an Excel spreadsheet, and then normalizing and cleansing the data. You know that when you use manual efforts, you will never get precision around data quality, but that was the best facility they had.

Then, they would take that Excel spreadsheet, load that into a database, and put a BI tool on top of that to provide their transactional dashboard. They spent three years evaluating technologies and trying to build the solution on their own and they failed.

So, they came to Kapow Technologies and implemented a proof of concept within three weeks. They were able to bring in three of their top banking partners and develop a BI dashboard to monitor and manage these transactions, with full deployment in three months. Now, they're looking to expand that to other aspects of their business.

Gardner: I think we've learned a lot here about Web data services. Ron, where do you see it going in the future? How does this move beyond the vision that we already have developed here?

Yu: As Howard has been advocating about getting the right data, once you get the data access right, where the data is accurate, noise-free and timely, then the future of BI will really be about automated decision making.

We got a taste of that with some of the examples I talked about in financial services -- working with partners, but also investment decisions and things like that. In the same way that we've seen automated, predictive buy/sell decisions in finance, the same opportunity exists across all industries.

Gardner: Howard, do you agree that future BI is increasingly an automated affair?

Dresner: There are certainly places where we ought to be automating BI. Decision automation certainly. But, to my way of thinking, BI is involved in empowering users and making them smarter. There is a tremendous amount of room for improvement there.

As I said, I've been on this beat for 20 years now, and I've certainly seen improvements in the tools across the board, from the bottom of the stack all the way to the top, and we can certainly see increased penetration in their use.

The next hurdle is applying the technology a little bit more effectively. That's really where we have fallen far short -- not understanding why we're implementing the technology. Let's give everybody BI and a data warehouse and hope for the best. Not that there hasn't been any goodness associated with that, but it certainly isn't commensurate with the investments that have been made.

Going back to what I said earlier in the broadcast, the focus on performance-directed cultures, and using the technology as an enabler to support those cultures, is really where I think organizations need to apply their thinking.

Gardner: I'm afraid we'll have to leave it there. We've been discussing how Web data services play a critical role in allowing companies to gather and refine their analytics, to engage in better market strategies and better buying decisions, and to find and explore business development opportunities. Helping us look at the future of BI and the role of Web data services, we've been joined by Howard Dresner, president and founder of Dresner Advisory Services. Thanks so much, Howard.

Dresner: My pleasure. Thanks for having me.

Gardner: Also, we have been joined by Ron Yu, vice president of marketing at Kapow Technologies. Thank you, Ron.

Yu: Thanks, Dana. I had a great time.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

Transcript of first in a series of sponsored BriefingsDirect podcasts with Kapow Technologies on Web Data Services and how harnessing the explosion of Web-based information inside and outside the enterprise buttresses the value and power of business intelligence. In Part Two, Kapow co-founder and CTO Stefan Andreasen and Forrester analyst Jim Kobielus discuss how Web data services provide ease of access to data from a variety of sources. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.