Monday, October 05, 2009

Part 2 of 4: Web Data Services Provide Ease of Data Access and Distribution from Variety of Sources, Destinations

Transcript of a sponsored BriefingsDirect podcast, one of a series on web data services, with Kapow Technologies, with a focus on information management for business intelligence.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Kapow Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how to make the most of web data services for business intelligence (BI). As enterprises seek to gain better insights into their markets, processes, and business development opportunities, they face a daunting challenge -- how to identify, gather, cleanse, and manage all of the relevant data and content being generated across the Web.

In Part 1 of our series we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications in recent years. As the recession forces the need to identify and evaluate new revenue sources, businesses need to capture such web data services for their BI to work better and more fully.

Enterprises need to know what's going on and what's being said across their markets. They need to share those web data service inferences quickly and easily among their internal users. The more relevant and useful content that enters into BI tools, the more powerful the BI outcomes -- especially as we look outside the enterprise for fast-shifting trends and business opportunities.

In this podcast, Part 2 of the series with Kapow Technologies, we identify how BI and web data services come together, and explore such additional subjects as text analytics and cloud computing.

So, how to get started, and how to affordably bring web data services to BI and business consumers as intelligence and insights? Here to help us explain the benefits of web data services and BI is Jim Kobielus, senior analyst at Forrester Research.

Jim Kobielus: Hi, Dana. Hello, everybody.

Gardner: We're also joined by Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. Welcome, Stefan.

Stefan Andreasen: Thank you, Dana. I'm glad to be here.

Gardner: Jim, let's start with you. Let's take a look at what's going on in the wider BI field. Is it true that the more content you bring into BI the better, or are there trade-offs, and how do we manage those tradeoffs?

The more the better

Kobielus: It's true that the more relevant content you bring into your analytic environment the better, in terms of having a single view or access in a unified fashion to all the information that might be relevant to any possible decision you might make within any business area. But, clearly, there are lots of caveats, "gotchas," and trade-offs there.

One of these is that it becomes very expensive to discover, to capture, and to do all the relevant transformation, cleansing, storage, and delivery of all of that content. Obviously, from the point of view of laying in bandwidth, buying servers, and implementing storage, it becomes very expensive, especially as you bring more unstructured information from your content management system (CMS) or various applications from desktops and from social networks.

So, the more information of various sorts you bring into your BI or analytic environment, the more expensive it becomes from a dollars-and-cents standpoint. It also becomes a real burden from the point of view of the end user, the consumer of this information. They're swamped. There's all manner of information.

If you don't implement your BI environment, your advanced analytic environment, or applications in a way that helps them to be more productive, they're just going to be swamped. They're not going to know what to do with it -- what's relevant or not relevant, what's the master reference, what's the golden record versus what's just pure noise.

So, there is that whole cost on productivity, if you don't bring together all these disparate sources in a unified way, and then package them up and deliver them in a way that feeds directly into decision processes throughout your organization, whether HR, finance, or the like.

Gardner: So, as we look outside the organization to gain insights into what market challenges organizations face and how they need to shift and track customer preferences, we need to be mindful that the fire hose can't just be turned on. We need to bring in some tools and technologies to help us get the right information and put it in a format that's consumable.

Kobielus: Yes, filter the fire hose. Filtering the fire hose is where this topic of web data services for BI comes in. Web data services describes that end-to-end analytic information pipelining process. It's really a fire hose that you filter at various points, so that when end users turn on their tap, they're not blown away by a massive stream. Rather, it's a stream of liquid intelligence that is palatable and consumable.

Gardner: Stefan, from your perspective in working with customers, how wide and deep do they want to go when they look to web data services? What are we actually talking about in terms of the type of content?

Andreasen: Referring back to your original question, where you talk about whether we need more content, and whether that improves the analysis and results that analysts are getting, it's all about, as Jim also mentioned, the relevance and timeliness of the data.

There is a fire hose of data out there, but some of that data is flowing easily, some of it might only be dripping, and some might not be accessible at all. Maybe I should explain the concept.

Think about it this way. The relevant data for your BI applications is located in various places. One is your internal business applications. Another is your software-as-a-service (SaaS) business applications, like Salesforce, etc. Others are at your business partners, your retailers, or your suppliers. Another is government. The last is the World Wide Web, with its tens of millions of applications and data sources. There is very often some relevant information there.

Accessible via browser

Today, all of the data I just described is more or less accessible in a web browser. Web data services allow you to access all of these data sources, using the same interface that the web browser already uses. They deliver the results in a real-time, relevant way into SQL databases, directly into BI tools, or even as service-enabled, encapsulated data. The benefit is that IT can now better serve the analysts' need for new data, which is almost always the case.
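
To make that concrete, here is a minimal sketch of what it can look like when one of those wrapped sources is consumed as a feed and landed in a SQL staging table that a BI tool already reads. This is not Kapow's actual API; the feed URL, table name, and credentials are hypothetical, and the feed is assumed to be plain CSV with no header row.

    import java.math.BigDecimal;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class FeedToStaging {
        public static void main(String[] args) throws Exception {
            // Fetch a web source that has been wrapped as a simple REST feed (hypothetical URL).
            HttpClient http = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/feeds/partner-rates.csv"))
                    .build();
            String csv = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Land each row in a staging table the BI tool already queries (hypothetical connection).
            try (Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/bi", "bi", "secret");
                 PreparedStatement insert = db.prepareStatement(
                         "INSERT INTO staging_partner_rates (partner, rate, as_of) VALUES (?, ?, ?)")) {
                for (String line : csv.split("\n")) {
                    if (line.isBlank()) continue;          // skip empty lines
                    String[] cols = line.split(",");       // partner, rate, date (yyyy-MM-dd)
                    insert.setString(1, cols[0].trim());
                    insert.setBigDecimal(2, new BigDecimal(cols[1].trim()));
                    insert.setDate(3, Date.valueOf(cols[2].trim()));
                    insert.addBatch();
                }
                insert.executeBatch();
            }
        }
    }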

BI projects happen in two ways. One is that you build a completely new BI system, with brand-new reports and new data sources. That's the typical BI project.

What's even more important is the incremental, daily improvement of existing reports. Analysts sit there, they find some new data source, they have their report, and they say, "It would be really good if I could add this column of data to my report, maybe replace this data, or get this data in real time rather than just once a week." It's those kinds of improvements that web data services can also really help with.

Gardner: Jim Kobielus, it sounds like we've got two nice opportunities here. One is the investments that have already been made in BI internally, largely for structured data. Now, we have this need to look externally and to look at the newer formats internally around web content and browser-based content. We need to pull these together.

Kobielus: There are a lot of trends. One of them is, of course, self-service mashups by end users of their own reports, their own dashboards, and their own views of data from various sources, as well as their data warehouses, data marts, OLAP cubes and the like.

But, another one gets to what you're asking about, Dana, in terms of trends in BI. At Forrester, we see traditional BI as a basic analytics environment, with ad-hoc query, OLAP, and the like. That's traditional BI -- it's the core of pretty much every enterprise's environment.

Advanced analytics, building on that initial investment and getting to this notion of an incremental add-on environment is really where a lot of established BI users are going. Advanced analytics means building on those core reporting, querying, and those other features with such tools as data mining and text analytics, but also complex event processing (CEP) with a front-end interactive visualization layer that often enables mashups of their own views by the end users.

When we talk about advanced analytics, that gets to this notion of converging structured and unstructured information in a more unified way. Then, that all builds on your core BI investment -- smashing the silos between data mining and text mining that many organizations have implemented for good reasons. These are separate projects, probably separate users, separate sources, separate tools, and separate vendors.

We see a strong push in the industry towards smashing those silos and bringing them all together. A big driver of that trend is that users, the enterprises, are demanding unified access to market intelligence and customer intelligence that's bubbling up from this massive Web 2.0 infrastructure, social networks, blogs, Twitter and the like.

Relevant to ongoing activities

That's very monetizable and very useful content to them in determining customer sentiment, in determining a lot of things that are relevant to their ongoing sales, marketing, and customer service activities.

Gardner: So, we're not only trying to bring the best of traditional BI with this large pool of valuable information from web data services. We're also trying to extend the benefits of BI beyond just the people who can write a good SQL query, the proverbial folks in the white lab coats behind the glass windows. We're trying to bring those BI analytics out to a much larger class of people in the organization.

Kobielus: Exactly. SQL queries are the core of traditional BI and data warehousing in terms of the core access language. Increasingly, in the whole advanced analytics space, SQL is becoming just one of many access techniques.

One might, in some ways, describe the overall trend as toward more service-oriented architecture (SOA) oriented access of disparate sources through the same standard interfaces that are used everywhere else for SOA applications. In other words, XML, WSDL, SOAP, and much more.

So, SOA is coming to advanced analytics, or is already there. SOA, in the analytics environment, is enabled through a capability that many data federation vendors provide. It's called a "semantic virtualization layer." Basically, it's an on-demand, unified roll up of disparate sources.

It transforms them all to a common set of schemas and objects, which are then wrapped in SOA interfaces and presented to the developer as a unified API or service contract for accessing all this disparate data. SOA really is the new SQL for this new environment.
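
As a rough illustration of that idea, and not any particular vendor's implementation, a semantic virtualization layer amounts to adapters that map each source's native records onto one common schema, plus a federation class that rolls them up behind a single contract. All names here are invented for the example:

    import java.util.List;

    // A common schema every source is mapped to (purely illustrative).
    record Customer(String id, String name, String segment) {}

    // The single service contract the virtualization layer presents to developers.
    interface CustomerService {
        List<Customer> findBySegment(String segment);
    }

    // One adapter per disparate source, each translating native fields into the common schema.
    class CrmAdapter implements CustomerService {
        public List<Customer> findBySegment(String segment) {
            // In a real system this would query the CRM over SQL or SOAP.
            return List.of(new Customer("crm-17", "Acme Corp", segment));
        }
    }

    class WebSourceAdapter implements CustomerService {
        public List<Customer> findBySegment(String segment) {
            // In a real system this would call a web source wrapped as a REST service.
            return List.of(new Customer("web-42", "Globex Ltd", segment));
        }
    }

    // The federation layer rolls the sources up, on demand, behind the one contract.
    public class FederatedCustomerService implements CustomerService {
        private final List<CustomerService> sources = List.of(new CrmAdapter(), new WebSourceAdapter());

        public List<Customer> findBySegment(String segment) {
            return sources.stream().flatMap(s -> s.findBySegment(segment).stream()).toList();
        }

        public static void main(String[] args) {
            System.out.println(new FederatedCustomerService().findBySegment("enterprise"));
        }
    }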

Gardner: Stefan, what is holding back organizations from being able to bring more of this real-time, highly actionable information vis-à-vis web services? What's preventing them from bringing this into use with their BI and analytics activity?

Andreasen: First, let me comment on what Jim said, and then try to answer your question. Jim's comment about SOA coming to BI is really spot on.

The world is more diverse

Traditionally, for BI, we've been trying to gather all the data into one unified, centralized repository and access the data from there. But the world is getting more diverse, and the data is spread across more and different silos. What companies realize today is that we need to get service-level access to the data where it resides, rather than trying to assemble it all.

So, tomorrow's data stores for BI, and today's as well, are really a combination of accessing data in your central data repositories and accessing it where it resides. Let me explain that with an example.

One Fortune 500 financial services company spent three years trying to build a BI application that would access data from their business partners. The business partners are big banks spread all over the U.S. The effort failed, but they had to solve this problem, because it was a legal and regulatory necessity for them.

So, they had to do it with brute force. Basically, they had analysts logging into their business partners' web sites and business applications, and copying and pasting those data into Excel to deliver those reports.

Finally, we got in contact with them, and we solved that problem. Web data services can encapsulate or wrap the data silos that were residing with their business partners into services -- SOAP services, REST services, etc. -- and thereby get automated access to the data directly into the BI tool. So, the problem they tried to solve for three years could now be solved with data services, and is running really successfully in production today.

Kobielus: Dana, before we go to the next question, I want to extend what Stefan said, because it's very important for understanding this whole space. This new paradigm, where SOA is already here in advanced analytics, is enabled by mashups. I published a report recently called Mighty Mashups that talks about this trend.

You need two core things in your infrastructure to make this happen. One is data mashups. In the back end, in the infrastructure, you need to have orchestrated integration, transformations, consolidation, and joining among disparate data sets. Then, you expose those composite data objects as services through SOA.

Then, in the front end, you need to enable end users to have access to these composite data objects through a registry, or whatever you call it, that's integrated into the environments where the user actually does work, whether it's their browsers/portal, Excel, or Microsoft Office environment. So, it's the presentation mashup on the user front end, and data mashup -- a.k.a. composite data objects -- on the back end to make this vision a reality.

Gardner: So, what's been holding back this ability to use a variety of different data types, content types, and data services in relation to BI has been proprietary formats, high cost and complexity, laborious manual processes, perhaps even spreadsheets, and a little older way of presenting information. Is that fair, Stefan?

Andreasen: I think so, yes. This is also where web data services technology comes into play. Who knows best what data they want? It's the analysts, right? But who delivers the data? It's the IT department.

Tools are lacking

Today, the IT department often lacks the tools to deliver those custom feeds that the line of business is asking for. But, with web data services, you can actually deliver these feeds. The data the business is asking for is almost always data they already know, see, and work with in the business applications, with the business partners, and so on. They work with the data. They see it in their browsers, but they cannot get the custom feeds. With a web data services product, IT can deliver those custom feeds in a very short time.

Let me use an example here again. This is a real story. Suppose I am the CEO of one of the largest network equipment manufacturers in the world. I am running a really complex business, where I need to understand the sales figures and the distribution model. I possibly have hundreds of different systems and variables I need to look at to run my business.

Another fact is I am busy. I travel a lot. I'm often in the airport or somewhere I don't have access to my systems. When I finally get access, I have to open my laptop, get on the 'Net, and pull up my report.

What we did here was take our product, service-enable the relevant reports, build a BlackBerry front end to that, and deliver it in three hours, from start to end. So, suddenly, in a very agile fashion, the CEO could reach his target and look at his data anywhere he had wireless access.

Gardner: It must be very frustrating for these analysts, business managers, and business development people to be able to see content and data out on the web through their browser, but not be able to get it into context with their internal BI systems, and get those dashboards and views that allow a much fuller appreciation of what's really going on.

Andreasen: It's almost absurd. Think about it. I'm an analyst and I work with the data. I feel I own the data. I type the data in. Then, when I need it in my report, I cannot get it there. It's like owning the house, but not having the key to the house. So, breaking down this barrier and giving them the key to the house, or actually giving IT a way to deliver the key to the house, is critical for the agility of BI going forward.

Kobielus: I agree. Here's an important point I want to make as well. The key to making this all happen, making this mashup vision of reality in the final analysis, is expanding the flexibility of your data or source discovery capabilities within the infrastructure.

Most organizations that have a BI environment have one or more data warehouses aggregating and storing the data and they've got pre-configured connections and loading of data from specific sources into those data warehouses. Most users who are looking at reports in their BI environment are looking only at data that's pre-connected, pre-integrated, pre-processed by their IT department.

The user feels frustration, because they go on the Web and into Google and can see the whole universe of information that's out there. So, for a mashup vision to be reality, organizations have got to go the next step.

Much broader range

It's good to have these pre-configured connections through extract, transform and load (ETL) and the like into their data warehouse from various sources. But, there should also be ideally feeds in from various data aggregators. There are many commercial data aggregators out there who can provide discovery of a much broader range of data types -- financial, regulatory, and what not.

Also, within this ideal environment there should be user-driven source discovery through search, through pub-sub, and a variety of means. If all these source-discovery capabilities are provided in a unified environment with common tooling and interfaces, and are all feeding information and allowing users to dynamically update the information sets available to them in real-time, then that's the nirvana.

That means your analytic environment is continuously refreshed with information that's most relevant to end users and the decisions they are making now.

Gardner: So, we've identified the problem, and that's bringing the best of web services and web data into the best of what BI does, and then expanding the purview of that beyond the white-lab-coats crowd to the people who can take action on it. That's great. But, with the fire hose, we can't just start allowing access to these data services without what the IT department considers critical. That is keeping the cost down, because we're still in a recession and budgets are tight.

We also need to have governance. We need to have manageability. We need to make the IT people feel that they can be responsible in opening up this filtered fire hose. So how do we do that, Stefan? How do we move from static web data to enterprise-caliber web data services?

Andreasen: Thank you for mentioning that. Jim, to get back to you on mashups, that's really relevant. Let's just look at the realities in IT departments today. They're probably understaffed. They've probably got budget cuts, but they have more demand from lines of business, and they probably also have more systems they have to maintain. So, they're being pushed from all sides.

What's really necessary here is a new way of solving this problem. This is where Kapow and web data services come in, as a disruptive new way of solving a problem of delivering the data -- the real-time relevant data that the analyst needs.

The way it works is that, when you work with the data in a browser, you see it visually, you click on it, and you navigate tables and so on. The way our product works is that it allows you to instruct our system how to interact with a web application, just the same way as the line of business user.

This means that you access and work with the data in the world in which the end users see the data. It's all with no coding. It's all visual, all point and click. Any IT person can, with our product, turn data that you see in a browser into a real feed, a custom feed, virtually in minutes or in a few hours for something that would typically take days, weeks, or months -- or may even be impossible.

Hand in hand

So a mashup is really an agile business application, a situational application. How can you make situational BI without agile data, without situational data? They basically go hand in hand. For mashups to deliver on the promise, you really need a way to deliver the data feeds in a very agile fashion.

Gardner: But what about governance and security?

Andreasen: Web data services access the data in the way you do from a web browser. All data resides in a database somewhere -- inside your firewall, at a customer, at a partner, or somewhere. That database is very secure. There's no way to access the database, without going through tedious processes and procedures to open a hole in that firewall.

The beauty with web data services is that it's really accessing the data through the application front end, using credentials and encryptions that are already in place and approved. You're using the existing security mechanism to access the data, rather than opening up new security holes, with all the risk that that includes.
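
A small, hypothetical sketch of that access pattern: the request goes over HTTPS to the partner's existing web front end with the same account an analyst already uses in the browser, rather than through a new hole in the firewall. The host and credentials below are placeholders:

    import java.net.Authenticator;
    import java.net.PasswordAuthentication;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FrontDoorAccess {
        public static void main(String[] args) throws Exception {
            // Reuse the credentials already issued for the web application (placeholders here).
            HttpClient client = HttpClient.newBuilder()
                    .authenticator(new Authenticator() {
                        @Override
                        protected PasswordAuthentication getPasswordAuthentication() {
                            return new PasswordAuthentication("analyst", "existing-password".toCharArray());
                        }
                    })
                    .build();

            // The call goes through the partner's existing HTTPS front end, not its database port.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://partner.example.com/reports/daily"))
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status: " + response.statusCode());
        }
    }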

Gardner: Jim, from some of the reports that you've done recently, what are customers, the enterprise customers, telling you about what they need in terms of better access to web data services, but also mindful about the requirements of IT around security and governability and so forth?

Kobielus: Right, right. The core theme I'm hearing is that mashups -- user self-service development and maintenance of their own views of disparate data -- are very, very important, for lots of reasons. One, of course, is speeding delivery of analytics and allowing users to personalize it, and so forth. But mashups without IT control are essentially chaos. And mashups without governance are an invitation to chaos.

What does governance mean in this environment? Well, it means that users should be able to mashup and create their own reports and dashboards, but, from the perspective of the companies that employ them, they should only be able to mashup from company-sanctioned sources, such as data warehouses, data marts, and external sources.

They should only be able to mashup the data, tables, records, or fields that they have authorized access to. They should only be able to mashup within the bounds of particular templates, reports, and dashboards that are sanctioned by the company and maintained by IT. There should be ongoing monitoring of access, utilization, and refreshes.

Then, users should be able to share their mashups with other users to create ever more composite mashups, but they should only be able to share data analytics that the recipient has authorized access to.

Now, this sounds like fascism, but it really isn't, because in practice what goes on is that users are usually given a long leash in a mashup environment to be able to pull in external data, when need be, with IT being able to monitor the utilization or the access of that data.

Fundamentally, governance comes down to the fact that all the applications are stored within a metadata environment -- repositories, and so forth -- that are under management by IT. So, that's the final piece in the mashup governance equation.

Gardner: I think I'm hearing you say that you really should have an intermediary between all of that web data and your BI analytics and the people making the decisions, not only for those technical reasons, but also to vet the quality of the data.

It’s in IT’s interest

Kobielus: Exactly. This is in IT's interest, and they know that. IT wants to insource as much of the development and maintenance of reports and dashboards and the like as they can get away with, which means it's pushed down to the end user to do the maintenance themselves on their own views.

IT is more than happy to go toward mashup, if there is the ability for them to keep their eyes and ears open, to set the boundaries of the sandbox, and insource to end users.

Gardner: Stefan, I want to go back to you, if I could. We talked about how to bring this into IT, but we also need to bring into this the role of the developer, because we're not just talking about integration, we're also talking about presentation.

Does what Kapow brings to the table also help those developers with the task of exposing web data services within the context of applications, views, different kinds of presentation, dashboards, and whatnot? What's the role of the developer in this?

Andreasen: That's very important. We talked about this fire hose before. When I see a fire hose in front of me, I imagine the analyst can now open this fire hose and all the data in the world just splashes in their face, and that's really not the case. Web data services allow the developer inside the IT department to much more quickly develop and deliver those custom feeds or those custom web services that the analysts need in the BI tools.

Also, on governance, the reality is that the data that has value is data that comes from business partners, from government, or from sources where you have a business relationship, and therefore can govern it. But, for various reasons, you cannot rewrite those applications or access those SQL databases in a traditional way. Web data services are a way to access data from trusted sources, but to access it in a much more agile way.

Gardner: Those services are coming across in a standardized format that developers can work with using existing tools.

Andreasen: Yes, that's very important. Web data services deliver the data into your standard data warehouse or your standard SQL databases. Or, as I said earlier, they can wrap those applications into SOAP services, REST services, RSS feeds, and even .NET and Java APIs, so you get the API or the data access exactly the way you need it in your BI tool, in your data mining environment, etc.
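
On the serving side, the same idea can be sketched with nothing more than the JDK's built-in HTTP server: an already-extracted dataset exposed as a small JSON endpoint that a BI tool or data mining environment can poll. The path and payload are made up for the example, and a production feed would add authentication:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class WrappedDataEndpoint {
        public static void main(String[] args) throws Exception {
            // A dataset that has already been extracted from the web application (illustrative values).
            String json = "[{\"partner\":\"Acme\",\"rate\":4.25},{\"partner\":\"Globex\",\"rate\":3.90}]";

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/feeds/partner-rates", exchange -> {
                byte[] body = json.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            System.out.println("Feed available at http://localhost:8080/feeds/partner-rates");
        }
    }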

Gardner: We've established the need. We've looked at the value of increasing BI's purview. We've looked at the larger trends around SOA and bringing lots of different data types into an architecture that can then be leveraged for BI and analytics. We've looked at the need for extending this to business processes outside the organization, as well as data types inside. We've looked at the role of the developer.

Are there examples, Stefan, of people who are actually doing this, who have been early adopters, who have taken the step of recognizing an intermediary and the tool and platform set to manage web data services in the context of BI? And, if they've done that, what are the paybacks, what are the metrics of success?

Andreasen: One of our early adopters is Audi. They've been using our product for five years. What was important for them was that, traditionally, it could take three to six months for them to get access to some data. But, with the Kapow Web Data Server, they were able to access data and create these custom feeds much faster -- days rather than months.

What the business needs

They have been using it successfully for five years. They're growing with it, they're getting a lot of benefit from it, and they couldn't imagine running the IT department without web data services today, because it gives them a way to deliver the agile custom data feeds that the business needs.

Gardner: Jim Kobielus, looking to the future, it seems to me that there are going to be more types of data coming from external sources. Perhaps more of the internal data that companies have used in traditional applications -- BI and integration -- might find itself being housed in server farms, otherwise known as clouds, either on-premises, on some third-party grid or utility fabric, or some hybrid of the two.

When we factor in the movement and expected direction of cloud computing, how does that then bear down on the requirements for managed, governed, and IT-caliber, mission-critical caliber web data service tools?

Kobielus: It simplifies it and complicates it. It simplifies it to some degree, in that it enables this vision of self-service BI mashup, with automated source discovery, to come to fruition. You need a lot of compute power and a lot of data storage to do things like high-volume, real-time text analytics.

A lot of that is going to have to be outsourced to public clouds that are scalable. They can scale out petabytes' worth of data or massive server farms to do semantic analysis, transformations, and the like. So, the storage and the processing for much of this vision have to be outsourced to cloud providers. To some degree, that makes it possible to realize this vision on the back end, on the web data services and data mashup side.

It also complicates it, because now you're introducing more silos. Public clouds are essentially silos from each other. There is Amazon, there is Microsoft with SQL Data Services and Windows Azure, and then, of course, there are Google and a variety of others providing clouds that don't interoperate well, or at all, with each other. They don't necessarily interoperate out of the box with your existing premises data environment, if you're an enterprise.

So, the governance of all these disparate functions, the coordination of security, and the encryption and so forth across all these environments, as well as the coordination of the data archiving and auditing need to be worked out by each organization that goes this route with a disparate and motley assortment of internal and external platforms that are managing various functions within this analytic cloud.

In other words, it could complicate this whole equation considerably, unless you have one predominant public cloud partner that can do all the data integration, all the cleansing, all the transforms, all the warehousing in their cloud, and can provide you also with this SOA abstraction layer, the semantic virtualization layer, and can also ideally host your advanced analytics applications, like your data mining, in that environment.

It can do it all for you in a very streamlined way, with a common governance, security administration, and data modeling toolset. Remember, end users are a big part of this equation here. The end users can then pick up these cloud-based tools to mash up data within this unified cloud and mash it up in a way that makes sense to end users, not the professional black belt data modelers.

That vision cannot be realized right now with the commercial cloud offerings in the analytic market. I think it will take anywhere from two to five years for the cloud providers to go this route. It's not there yet.

Gardner: We're about out of time. I want to take the same question to Stefan about the cloud computing angle and the mixed sourcing for applications, datasets, and business processes. It seems to me this would be an opportunity for Kapow.

No master hub

Andreasen: Absolutely. What I don't see is one big vendor that solves all your data needs and becomes like the master hub for all information and data on the Web. History has shown that the way that companies compete with each other is to differentiate themselves.

If everybody was using the same provider and the same kind of data, they couldn't differentiate. This is really, I think, what companies realize today -- unless we do something different and better than our competitors, we are not going to win this game.

What's important with web data services is hosting the tools and the facilities to access the data, but allowing the customers to create, in a self-service fashion, the custom data feeds they need. Our product fits perfectly into that world as well. We already have many of our customers using our product in the cloud. We become a tool where they can create ad hoc, on-demand, or as-necessary data feeds and share them with anybody else who needs them.

Kobielus: I've got one more point. In this ecosystem that's emerging, there's a strong role for providers of tooling specifically focused on self-service mashup and also for what's often called on-demand analytical sandboxing, which could be used by end users to create their own analytic workspace, and pull information.

Those are the ones that can provide tooling that works in front of whatever the organization's preferred data management, data federation, data warehousing, or BI vendor might be. So there's plenty of opportunity for the likes of Kapow, and many others in this space, for complementary solutions that are integrated with any of the leading data federation and cloud analytic solutions that are out there.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been discussing the requirements around bringing web data services into BI, but doing so in a mission-critical fashion that's amenable to the IT department.

I want to thank our guests. We've been joined by Jim Kobielus, senior analyst at Forrester Research. Thanks, Jim.

Kobielus: Sure, no problem.

Gardner: We've also been joined by Stefan Andreasen. He's the co-founder and chief technology officer at Kapow Technologies. Thank you so much, Stefan.

Andreasen: Thank you everyone for a great discussion.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions, and you've been listening to a sponsored BriefingsDirect podcast. This is just part of a series of four podcasts on the subjects around web data services and BI.

We look forward to future discussions on text analytics, cloud computing, and the role of BI in the future. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast, one of a series on web data services, with Kapow Technologies, with a focus on information management for business intelligence. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, October 01, 2009

Cloud Computing by Industry: Novel Ways to Collaborate Via Extended Business Processes

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.

As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?

It's as if the past benefits of Moore's Law -- leveraging the ongoing density of circuits to improve performance while also cutting costs -- have now evolved to a cloud level, trying, in the context of business problems, to do more for far less.

Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.

We'll learn here about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.

Here to help explain the benefits of cloud computing and vertical business transformation, we welcome Mick Keyes, senior architect in the HP Chief Technology Office. Welcome, Mick.

Mick Keyes: Thank you, very much.

Gardner: We are also joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Hello, Rebecca.

Rebecca Lawson: Hello.

Gardner: And, we're also joined by Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. Welcome to the show, Chris.

Chris Coughlan: Thanks, very much.

Gardner: I'd like to start with Rebecca, if I could. Tell us a little bit about the cloud vision, as it is understood at HP. Where does this fit in, in terms of the business, the platform, and the tension between the technology and the business outcomes?

Overused term

Lawson: Sure, I'm happy to. Everyone knows that "cloud" is a word that tends to get hugely overused. Instead of talking specifically about cloud, at HP we try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.

Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the implicated integrations and such?"

That's where we're coming from, when we look at cloud services, finding new ways to solve problems. Most of those problems have to do with vast amounts of data that are traditionally very hard to access by the kinds of application architectures that we have seen over the last 20 years.

Gardner: So, we're talking about a managed exposure of information, knowledge, and things that people need to take proper actions on. I've also heard HP refer to what they are doing and how this works as an "ecosystem." Could you explain what you mean by that?

Lawson: As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place, or a dynamic, with technology-enabled cloud services that are accessible and sharable and that help collaboration and sharing across different constituents in that vertical market.

We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.

A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.

Gardner: So, for these members within the ecosystem, their systems alone cannot accomplish what a third party or cloud-based platform can in terms of cooperative, collaborative, coordinated, managed, and even governed business processes.

Lawson: That's right. In fact, I'll throw it over to Mick to talk about how this is really different and really how it serves the greater purpose of the manufacturing community. Mick?

Multiple entities

Keyes: A good example is the manufacturing industry, and indeed the whole linear type supply chain that is in use. If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere up to 15-20 different entities involved in a supply chain.

In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.

In the traditional way we looked at how that supply chain handles traceability, they would have the infamous -- as I would call it -- "one step up, one step down" exchange of data, which really meant that each entity in the supply chain exchanged information with the next one in line.

That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.

What we are saying to industry at the moment -- and this is our thesis here that we are actually developing -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What a cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and give greater visibility into the supply chain itself.

Food is one example, but you've got lots of other examples in different industries -- the pharmaceutical industry, of course. You've also got the aeronautical industry and the aerospace industry. It's any supply chain that's out there, Dana.

Gardner: Mick, you mentioned this hub and this platform. Is this just a blank canvas that these vertical industries can then come to and apply their needs or is there a helping hand, in addition to the strict technological fabric, that can apply some level of expertise and understanding into these verticals?

Keyes: If you look at the way we're defining the whole ecosystem, as Rebecca referred to around cloud computing, we have the cloud-optimized infrastructure, which HP has got a great pedigree in. Then, we're looking, from a platform point of view, at the next level. From this, we'll launch the different specific services.

In that platform, for example, we've got the components to cover data, analytics, software management, security, industry-specific information, and developer-type offerings as well. So, depending on what type of industry you're in, we're looking at this platform as almost a repeatable offering, and you can start to lay out individual or industry-specific services around it.

Gardner: The reason I asked is that there are a number of prominent cloud providers nowadays who do seem to provide mostly a blank canvas. It's very powerful. The cost benefits are there. It gives developers and architects something new to pursue, but there is not much in addition to the solution level there.

A little bit more

Keyes: When you offer or develop specific services and such for industry, you need a little bit more than being able to look at it from a technology point of view. Industry knowledge, we have found, is key, but also, when we talk to the businesses and each element of a supply chain -- and food is a good example, because it's global -- there are different cultural influences involved, such as the whole area of understanding governance and data, where it can and cannot be stored.

Technology is obviously a very important part of it, but how we look at producing services and who can consume the services is equally important. Also, we see this type of initiative as stimulating a lot of new innovation. When we use our platform to create certain pockets of data, for want of a better word, we are looking at how we can mashup different types of services.

Some companies will come with a good idea. There are other partners, excellent partners, who are developing very specific and good applications. We will use this hub and our business knowledge, as well, to look at the creation of new types of services and the mashup of different services.

It allows us also to talk to the business people in different parts of the supply chain and different industries to look at very fast, creative ways of offering new services for their industry.

Gardner: Chris Coughlan, tell us a little bit about your competency center, how you started, and perhaps illustrate with an example how this technological knowledge and appreciation of the business issues come together?

Coughlan: As a follow-on from what Mick said, we have infrastructure as a service (IaaS), we have platform as a service (PaaS), and we have software as a service (SaaS). And, in the industry, we were told that there was going to be everything as a service. But really nobody had started defining what that meant beyond SaaS.

There were a lot of health scares and food scares over the last year or so. We looked at that and said, "This is a very good opportunity to actually develop everything as a service."

We also came to the conclusion, which is very important, that there are two aspects of that. There has to be collaboration along all the various company supply chains, particularly if you want to recall something, or if you want to do track and trace. As well as that, there has to be standardization in what you are doing. So, that led to our relationship with GS1 and the development of the recall system.

Gardner: I spoke in my setup about both lowering cost and enabling new levels of productivity and innovation. Have you found that to be the case? Are you able to do both of those?

Chain of islands

Coughlan: Absolutely. If you think about it, the current recall systems in the food industry -- and Mick talked about them -- cover everything from "farm to fork," so to speak. Look at all the agencies. There's manufacturing, suppliers, retailers, and whatever. A piece of food can be caught anywhere within that supply chain, and each company and each unit in that supply chain is really behaving as an island in itself.

They might have their own systems, but then those systems are not linked. If there's a problem, you have to go from automated systems to manual systems, whatever. What we've done is we have linked all those systems up. We have agreed on a standard template from the GS1. This is the information that all those agents along the supply chain will share with each other, so that food can be recalled very quickly and very effectively.

If that's done, you can see that from the health and safety issue. You can see it from a contamination issue. You can see it from getting items off shelves and preventing items from being shipped. This can happen quite fast, as opposed to the system we have today.

Gardner: This is a payback that seems to have a very positive impact across that ecosystem, for the consumers, the suppliers, the creators, and then the brands, if they are involved.

Coughlan: Absolutely. First of all, as a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life and death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.

As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.

Keyes: Just to interject there. Those are very good points that Chris is making. We see a big appetite from different people in supply chains to get involved in this type of mechanism, because they look at it from a brand or profit-center point of view. As a company, you'll be able to get greater visibility into your process or into your brand efforts right through to the consumer.

In the older way supply chains worked, as Chris mentioned, it was linear -- one step up, one step down. The people at the lower end of the supply chain, for want of a better word, often weren't able to find out how their products were being used by consumers.

We have SaaS now, not just for any individual entity in the supply chain, but for anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain. So we really look at it from a positive view also, about how this is creating benefits from a business point of view.

Gardner: So, a critical business driver, of course, is the public-safety issue. But, in putting into place this template of cloud process, we perhaps gain a business intelligence (BI) value over time with greater visibility across these different variables in the supply chain itself.

Addressing food safety

Keyes: Absolutely. There are quite a lot of activities you see around the world at the moment around greater focus on food safety. In the U.S., for example, HR 2749, a bill that's gone to Congress, is really excellent in how it looks to address the whole area of food safety.

If you look at that, it's leaning towards the concept of greater integration in supply chains. Regulatory bodies, healthcare bodies, and sectors like that will very quickly be able to address any public safety issues that happen.

We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or get involved in information sharing to a certain degree. We see that as a cool component also that we can perhaps do some BI around and be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.

Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.

You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.

Gardner: Rebecca, do you have something you want to offer?

Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. For example, if I am a peanut packaging person, I take peanuts from lots of different growers and I package them up. I send some to the peanut butter companies and some to the candy manufacturing companies or whatever.

I already have data in house about what I am doing. All I have to do to participate in this traceability example or a recall example is once a day cut a report, stream the data up into the cloud, and I am done.

It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.
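
As a rough sketch of how light that participation can be, the daily contribution could be as simple as posting the report a packager already cuts to a hub endpoint. The endpoint, token, and file name below are hypothetical, not the actual GS1 or HP interface:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class DailyTraceabilityUpload {
        public static void main(String[] args) throws Exception {
            // The report already produced in house: one shipment per line, keyed with
            // GS1-style identifiers (GTIN for the product, GLN for the ship-to party, lot number).
            Path report = Path.of("shipments-2009-10-01.csv");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://recall-hub.example.com/api/shipments"))   // hypothetical hub
                    .header("Content-Type", "text/csv")
                    .header("Authorization", "Bearer REPLACE_WITH_HUB_TOKEN")
                    .POST(HttpRequest.BodyPublishers.ofFile(report))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Hub response: " + response.statusCode());
        }
    }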

Viewing the flow

So, for example, maybe there is a peanut outbreak, and I, as the peanut packaging person, can quickly go and kind of see what the flow was across the different participants of growers, retailers, consumers, and all that. The cloud technology allows us to do that, and that's why we designed it this way.

The platform that HP created in this whole ecosystem is geared towards harnessing data and information that's pretty much already there and being able to access it for key questions, which would have been nearly impossible to answer, say five years ago, when the technologies were just not around to do that.

It's a win-win-win for individual companies, which can now reduce their insurance exposure, because they've got their processes covered. They have the data. It's already shared. So, it's a major step forward for manufacturing. We think this kind of a model is not just for manufacturing. This just happens to be one good use case that we can all relate to as consumers, because everybody is afraid of a Salmonella outbreak. It affects all lives. But, it's applicable to other industries as well.

Gardner: Of course, a recent example would be the flu outbreak, as well. So, there are lots of different ways in which a common currency of shared data and information can be very critical and important.

I also want to look at the importance of that common currency, which, in this case, is standardized service calls and application programming interfaces (APIs), and what we have come to be familiar with as Web services is now enabling this cloud synergy across these ecosystems.

I wonder if anyone would like to take a stab at my premise that, in the past, we have looked for productivity from increased cycles in the silicon and on the hardware and in IT itself. But, is there a new possibility for a higher level of Moore's Law, so to speak, in applying these cloud approaches to productivity? Does anyone share my enthusiasm for that?

Lawson: Absolutely. In fact, I could care less how powerful a server is. What I care about are the problems that I am trying to solve. If I'm in the environmental world, if I'm government, or if I'm a financial services organization, I want to be able to creatively think about how I serve my customers.

These new technologies are allowing HP's customers to solve problems much differently than they did before, using a wider expanse of currency, as you said, which is information. Information is the currency of our era.

Structured vs. unstructured

One of the big shifts going on is that information in the past 5, 10, or 20 years has been largely held in very structured databases. That's a really good thing for certain kinds of data, but there is other data now that's just streaming into the Internet, streaming into the cloud, which is held in a more unstructured fashion.

We can now deal with that data. We can now run search and query across semistructured or unstructured data and get to some interesting results really quickly, as opposed to more traditional ways of holding certain kinds of data in a relational database. We don't think that it's going away. We just see that there is a whole new currency coming in through new ways to access information.

Coughlan: I'm a great believer in applying Moore's Law to a lot of things beyond technology -- to society, to productivity, as you said, and so on. It's the underlying technology behind Moore's Law that then drives the productivity, the change in society, and the rest.

But, you've heard of another law, Metcalfe's Law, about the power of the network. We are bringing in the power of collaboration. What you have then are two of these nonlinear laws, which are instituting change, reducing price, doubling capacity, and so on. You've even got a reinforcing effect there, which might even drive Moore's Law faster than Moore himself predicted.

Gardner: A part of this has to be, of course, cooperation and trust. What is it about the platform for manufacturing that HP has developed that enables that trust and that places this hub, this third-party, in a position where all the members of the ecosystem feel that they are protected?

We look to GS1 as the trusted advisor out there, with industry, with governments, around safety, around standards, and on traceability.



Coughlan: This is one of the reasons that we partnered with GS1 in this whole space. You're right, Dana. It would be something that industry wants to know immediately. Why would we trust an IT provider, for example, to be the trusted advisor to integrate all the different elements of the supply chain?

We're very aware of that. In our discussions, we found that GS1, the international standards body, is trusted by industry. This is their great strength. They are neutral. They are in 110 different countries. They have done a lot of work on getting uniform standards for how different systems can integrate, especially in this whole area of supply chain management.

We look to GS1 as the trusted advisor out there, with industry, with governments, around safety, around standards, and on traceability. They're not a solution provider, but they will go to best in class with their ideas.

They have asked the industry for ideas. They have gone to the industry and explained how, for example, recall should work and how traceability should work. So, we feel that partnering with somebody like GS1 is key to earning the industry's trust to apply these types of systems.

Gardner: Do you expect to see additional partnerships, and should standards bodies be thinking about moving toward partners in the cloud, so that they can extend their role as a trusted advisor, as a neutral third party, but be able to execute on that now at a higher level of abstraction?

Win-win situation

Keyes: Absolutely. This is a win-win for everybody here. There are lots of really good partners out there who have, for example, point solutions that are in industry at the moment. We feel there are a lot of benefits to these partners through using GS1 standards.

Let's say that most of them already do and are compliant, but they can also work with our traceability hubs to see whether they can help exchange information. In return, we'll be able to supply and publish information through their systems back to industry as well.

GS1 is also important in getting the industry together, not just the actual manufacturers and retailers, but also the technology people in the industry, so that there will be uniform standards. We all know from developing traditional, tightly coupled systems in manufacturing and the supply chain that you need an easier means of collaboration. GS1 has done an excellent job in the industry defining what these standards should look like.

Gardner: I know we've been focused on manufacturing, but, not to go too far off the beaten track, there's also this need for greater cooperation between public and private sectors across regulatory issues. Have we seen anything moving along those lines, a trusted partnership around a platform like the one HP has provided, where some sort of public agency might then reach out to these private ecosystems?

Keyes: Even if you don't want to dwell on the food area, often what you find is that governments bring out laws and regulations, and they say industry must apply these laws. Often, you get a bit of a standoff, where industry will immediately say, "Okay. This is government telling us what to do," and so on.

In our journey around the food industry, a lot of the time we talk directly to the industries themselves. Industry now also sees what the issues are, and they agree with what the governments and the regulatory bodies are trying to do.

Industry is now looking at this type of model to take a preemptive step and to show that they are also active in the whole area of food safety. It's in their interests to do it, but now I think they have a mechanism, which industry, government, and regulatory bodies can actually use.

For example, if you look at the recall project that we've been involved in, we're taking data and accessing data in industry and in retailers also, but we're looking at a service that we can publish for industry. We call it visibility type services, where, at a glance, they can look at where all elements of the recall might be and what industries are actually being affected.

We're very keen to share or offer services to different regulatory bodies, be it government or consumer bodies, or directly with consumers, and we have been pretty active in discussing this with them.

Gardner: Thank you, Mick. Chris, do you have any insights as well in terms of this public-private divide?

Variety of clouds

Coughlan: Mick has said most of it there and Rebecca spoke earlier on about the ecosystem. As things begin to develop, you will be able to see public clouds, private clouds, and hybrid clouds. Then, you'll have a cloud portal accessing those under various circumstances, to solve various problems, or to get various pieces of information.

I see third-party point solutions feeding into those clouds. That's one of the areas that we offer -- third-party solutions -- be it in the food industry or other industries. They feed into our cloud, and that information can be either private information or collaborative information, where they define where they are going to do the collaboration, or it could be public information.

So, it could mean a private cloud, where some of the information goes into the public cloud and other information sits in a hybrid type of cloud.

Gardner: Rebecca, it seems like we could go on for hours about all these wonderful use-case scenarios and potential innovation improvements on process and the crossing of divides. But, the ecosystem is not just in the supply chain.

It also needs, I suppose, to be pulled together in terms of the cloud infrastructure, and the players that need to come together in order to enable these higher level business benefits. It strikes me that there are not that many companies that can be in a position of pulling together the ecosystem on the delivery side of these services.

Lawson: That's true, and what's different about what we are doing is we're taking a top-down approach. Right now, a lot of the industry is talking about cloud, and a lot of folks are focused on things like IaaS, virtual machines as a service, and things like that.

But you can switch it around and say, "How can we apply technology in a new way and build out the platform to support the services that industries need?" Then, for those services, you build out the right kind of infrastructure and scale out an infrastructure base on which all of that can run very smoothly.

Working backward

Now, you have a really good organizing principle to say, "If we're going to solve this problem of traceability, food track and trace, and recall, how are we going to solve that problem?" Everything really drives from there, as opposed to saying, "What's the cheapest platform on which we can run some kind of food traceability?" That's just coming at it backward.

In fact, a good analogy to what we are doing with these vertical ecosystems is the well-known case of Salesforce.com and the Force.com platform that grew up around it.

Most folks realize that salesforce.com started with a sales-force automation product. Then, it broadened into a customer relationship management (CRM) product, and then, before you knew it, they had a platform on which they built the community of service or application providers, their App Exchange. That community is enabled by their underlying platform. That community serves a horizontal function for sales and marketing oriented or adjacent types of services.

If you pull that analogy out into an industry like manufacturing, transportation, or financial services, it's the same sort of thing. You want that platform of commonality, so different contingents can come and leverage the adjacencies to whatever it is that they are doing.

We really see that this ecosystem approach is the way to think about it, and vertical is the way to think about it, although, obviously, different verticals will blend together. We're working on similar projects in the transportation arena, where manufacturing can cross over quite quickly into public transportation and add lots of new development. So we are pretty excited about all these new opportunities.

Gardner: So, we actually can start thinking about pulling together ecosystems of ecosystems?

Keyes: Absolutely. We look at what we're doing at the moment around food and how that might affect the whole healthcare area as well. There are a lot of new innovations coming out in the biomedical area as well, of how we can expand things like food, pharmaceutical, or drugs to the whole health system. As you said, Dana, we see that as a very important area of collaboration between different ecosystems.

Lawson: One more point is that the ecosystem implies that it's not just about the technology. It's about the people. So, different aspects of the ecosystem are going to be human. They may be machine. They may be bits of code. There are conditions and tons of events. The ecosystem is a more holistic approach, in which you have the infrastructure, development and runtime environments, and technology-enabled services.

Gardner: If I'm a member of an ecosystem -- be it in the manufacturing, vertical, health, food recall, regulatory, or public sector -- and these concepts resonate with me, how do I get started? If I'm in a standards body of some sort, where do I go to say, "What's the partnership potential for me?"

Lawson: The first thing you can do is call HP and take a look at what we have done in our Galway Center of Expertise around traceability -- track and trace -- and we would be happy to show you that. You can take a look under the covers and see how applicable it is to your situation.

Gardner: Very good. We've been taking a look at how the new productivity levels can be exploited vis-à-vis cloud computing -- not just at the technological level, but at the process level of finding partnerships and standards and approaches that pull together ecosystems of business, potentially across business and the public sector.

Helping us to understand better the potential for cloud computing as a business tool, and how HP, and most recently GS1 Canada have pulled together a Food Recall Platform based on the HP Cloud Product Recall Platform, we have been joined by Mick Keyes. He is the senior architect in the HP Office of the Chief Technology Officer. Thank you, Mick.

Keyes: Thank you.

Gardner: We've also been joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Thanks, Rebecca.

Lawson: Thank you very much.

Gardner: And also, Chris Coughlan, director of HP's Track, Trace, and Cloud Competency Center. Thank you so much, Chris.

Coughlan: Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, September 30, 2009

Doing Nothing Can Be Costliest IT Course When Legacy Systems and Applications Are Involved

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation.

Listen to podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the high, and sometimes underappreciated, cost for many enterprises of doing nothing about aging, monolithic applications. Not making a choice about legacy mainframe and poorly used applications is, in effect, making a choice not to transform and modernize the applications and their supporting systems.

Not doing anything is a choice to embrace an ongoing cost structure that may well prevent significant new spending for IT innovations. It’s a choice to suspend applications on, perhaps, ossified platforms and make their reuse and integration difficult, complex, and costly.

Doing nothing is a choice that, in a recession, hurts companies in multiple ways, because successful transformation is the lifeblood of near and long-term productivity improvements.

Here to help us better understand the perils of continuing to do nothing about aging legacy and mainframe applications, we’re joined by four IT transformation experts from Hewlett-Packard (HP). Please join me in welcoming our guests. First, Brad Hipps, product marketer for Application Lifecycle Management (ALM) and Applications Portfolio Software at HP. Welcome, Brad.

Brad Hipps: Thank you.

Gardner: Also, John Pickett from Enterprise Storage and Server Marketing at HP. Hello, John.

John Pickett: Hi. Welcome.

Gardner: Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana.

Gardner: And, Steve Woods, application transformation analyst and distinguished software engineer at EDS, now called HP Enterprise Services. Good to have you with us, Steve.

Steve Woods: Thank you, Dana.

Gardner: Let me start off by going to Paul. The recession has had a number of effects on people, as well as budgets, but I wonder what effect, in particular, the tight cost structures have had on this notion of tolerating mainframe and legacy applications?

Cost hasn't changed

Evans: Dana, what we're seeing is that the cost of legacy systems and the cost of supporting the mainframe hasn't changed in 12 months. What has changed is the cash that companies have available to spend on IT; over time, that cash may have been frozen or reduced. That puts even more pressure on the IT department and the CIO in deciding how to spend that money, where to spend it, and how to ensure alignment between what the business wants to do and where the technology needs to go.

Given that we already knew only about 10 percent of an IT budget was being spent on innovation, the problem is that that portion gets squeezed further and further. Our concern is that there is a cost of doing nothing. People eventually end up spending their whole IT budgets on maintenance and upgrades and virtually nothing on innovation.

At a time when competitiveness is needed more than it was a year ago, there has to be a shift in the way we spend our IT dollars and where we spend our IT dollars. That means looking at the legacy software environments and the underpinning infrastructure. It’s absolutely a necessity.

Gardner: So, clearly, there is a shift in the economic impetus. I want to go to Steve Woods. As an analyst looking at these issues, what’s changed technically in terms of reducing something that may have been a hurdle to overcome for application transformation?

Woods: For years, the biggest hurdle was that most customers would say they didn't really have to make a decision, because the performance wasn't there. The reliability wasn't there. It is there now. There is really no excuse not to move because of performance or reliability issues.

What has also changed is the ability to look at a legacy application's source code. We have the tools now to look at the code and visualize it in ways that are very compelling. That's typically one of the biggest obstacles. If you look at a legacy application, the number of lines of code, and the number of people maintaining it, it's usually obvious that large portions of the application haven't really changed much. There's a lot of library code and that sort of thing.

That's really important. We've been straight with our customers that we have the ability to help them understand a large terrain of code that they might be afraid to move forward with. Maybe they simply don't understand it. Maybe the people who originally developed it have moved on and, because nobody really maintains it, they're afraid to go into those areas of the system.

Also, what has changed is the growth of architectural components, such as extract, transform, and load (ETL) tools, data integration tools, and reporting tools. When we look at a large body of, say, 10 million lines of COBOL and we find that three million lines of that code are doing reporting, or maybe two million are doing ETL work, we typically suggest they move that asymmetrically to a new platform that does not use handwritten code.

That’s really risk aversion -- doing it very incrementally with low intrusion, and that’s also where the best return on investment (ROI) picture can be portrayed. You can incrementally get your ROI, as you move the reports and the data transformation jobs over to the new platform. So, that’s really what’s changed. These tools have matured so that we have the performance and we also have the tools to help them understand their legacy systems today.
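
As a rough illustration of the kind of first-pass inventory being described, the sketch below walks a directory of COBOL sources and tallies lines into coarse buckets such as reporting and ETL-style data movement. The file extensions and keyword heuristics are illustrative assumptions; a real assessment would rely on proper parsing and much richer rules.

    # Hypothetical first-pass inventory of a COBOL portfolio: tally lines into
    # rough functional buckets (reporting, ETL-style data movement, other)
    # using simple keyword heuristics.
    import os
    from collections import Counter

    REPORTING_HINTS = ("REPORT SECTION", "INITIATE", "GENERATE", "WRITE")  # illustrative only
    ETL_HINTS = ("SORT", "MERGE", "UNSTRING", "MOVE CORRESPONDING")        # illustrative only

    def classify_line(line):
        upper = line.upper()
        if any(hint in upper for hint in REPORTING_HINTS):
            return "reporting"
        if any(hint in upper for hint in ETL_HINTS):
            return "etl"
        return "other"

    def inventory(root):
        counts = Counter()
        for dirpath, _, files in os.walk(root):
            for name in files:
                if name.lower().endswith((".cbl", ".cob", ".cpy")):
                    with open(os.path.join(dirpath, name), errors="ignore") as src:
                        for line in src:
                            counts[classify_line(line)] += 1
        return counts

    if __name__ == "__main__":
        print(inventory("./legacy-src"))

Even a crude tally like this gives a first view of how much of a portfolio might move to reporting or data-integration tooling rather than being rewritten by hand.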

Gardner: Now, one area where economics and technology come together quite well is the hardware. Let's go to John with regard to virtualization and reducing the cost of storage. How has that changed the penalty for doing nothing?

Functionality gap

Pickett: Typically, when we take a look at the high end of applications that are going to be moving over from a legacy system, many times they're sitting on a mainframe platform. One of the things that has changed over the last several years is the functionality gap between open systems and what the mainframe offered 5 or 10 years ago. That gap has not only been closed but, in some cases, open systems exceed what's available on the mainframe.

So, just from a functionality standpoint, there is certainly plenty of capability there. But, to hit on the cost of doing nothing and keeping what you currently have today, it's not only the high cost of the platform. As a matter of fact, one of our customers who moved from a high-end mainframe environment onto an Integrity Superdome calculated that, if you took their cost savings and applied them to playing golf at one of the premier golf destinations in the world, Pebble Beach, you could golf every day with three friends for 42 years, 10 months, and a couple of days.

It’s not only a matter of cost, but it’s also factoring in the power and cooling as well. Certainly, what we’ve seen is that the cost savings that can be applied on the infrastructure side are then applied back into modernizing the application.

Gardner: I suppose the true cost benefits wouldn’t be realized until after some sort of a transformation. Back to Paul Evans. Are there any indications from folks who have done this transformation as to how substantial their savings can be?

Evans: There are many documented cases that HP can provide, and, I think, other vendors can provide as well. In terms of looking at applications and the underpinning infrastructure, as John was talking about, there are so many documented cases that point people to the real cost savings to be made here.

There's also a flip side to this. Some research that McKinsey did earlier in the year took a sample of 100 companies as they went into the recession. They were brand leadership companies. Coming out of the recession, only 60 of those companies were still in a leadership position. Forty percent of those companies just dropped by the wayside. It doesn’t mean they went out of business. Some did. Some got acquired, but others just lost their brand leadership.

That is a huge price to pay. Now, not all of that has to do with application transformation, but we firmly believe that it is pivotal to improving services and revenue-generation opportunities, which, in tough times, need to be stronger than ever.

What we would say to organizations is, "Take a hard look at this, because doing nothing could be absolutely the wrong thing to do. Keeping a competitive differentiation that you continue to exploit, and continuing to provide customers with an improving level of service, is how you keep those customers through a tough time, which means they'll still be your customers when you come out of the recession."

Gardner: Let's go to Brad. I'm also curious, on a strategic level, about flexibility and agility. Are there prices to be paid that we should be considering in terms of lock-in, fragility, or applications that don't easily lend themselves to a wider process?

'Agility' an overused term

Hipps: This term "agility" is the right term to use, but it gets used so often that people tend to forget what it means. The reality of today’s modern organization -- and this is contrasted even from 5, certainly 10 years ago -- is that when we look at applications, they are everywhere. There has been an application explosion.

When I started in the applications business, we were working on a handful of applications that organizations had. That was the extent of the application in the business. It was one part of it, but it was not total. Now, in every modern enterprise, applications really are total -- big, small, medium size. They are all over the place.

When we start talking about application transformation and we assign that trend to agility, what we’re acknowledging is that for the business to make any change today in the way it does business, in any new market initiative, in any competitive threat it wants to respond to, there is going to be an application -- very likely "applications," plural, that are going to need to be either built or changed to support whatever that new initiative is.

The fact of the matter is that changing or creating the applications to support the business initiative becomes the long pole to realizing whatever it is that initiative is. If that’s the case, you begin to say, "Great. What are the things that I can do to shrink that time or shrink that pole that stands between me and getting this initiative realized in the market space?”

From an application transformation perspective, we then take that as a context for everything that’s motivating a business with regard to its application. The decisions that you're going to make to transform your applications should all be pointed at and informed by shrinking the amount of time that takes you to turn around and realize some business initiative.

So, in 500 words or less, that's what we’re seeking with agility. Following pretty closely behind that, you can begin to see why there is a promise in cloud. It saves me a lot of infrastructural headaches. It’s supposed to obviate a lot of the challenges that I have around just standing up the application and getting it ready, let alone having to build the application itself. So I think that is the view of transformation in terms of agility and why we’re seeing things like cloud. These other things really start to point the direction to greater agility.

Gardner: It sounds as if there is a penalty to be paid or a risk to be incurred by being locked into the past.

Hipps: That's right, and then you take the reverse of that. You say, "Fine. If I want to keep doing things as is, that means that every day or every month that goes by, I add another application, or I make my current application pool bigger, using older technologies that I know take me longer to make changes in."

In the most dramatic terms, it only gets worse the longer I wait. That pool of dated technology only gets bigger and bigger the more changes I have coming in and the more changes I'm trying to make. It's almost as though I've attached a ball and chain to my ankle, and I'm just letting the ball get bigger and bigger. There is a very real agility cost, even setting aside what your competition may be doing.

Gardner: So, the inevitability of transformation goes from a long horizon to a much nearer-term issue. Let's go back to Steve Woods of EDS. What are some misconceptions about starting on this journey? Is this really something that's going to highly disrupt an organization, or are there steps to do it incrementally? What might hold people back that shouldn't?

More than one path

Woods: Probably one of the biggest misconceptions is when somebody has a large legacy application written in a second-generation language such as COBOL or perhaps PL/1, and they look at the code and imagine a future that still has handwritten code. They imagine maybe it'll be in Java or C# or .Net, but they don't take the next step and say, "If I had to look at the system and rebuild it today, would I do it the same way?" That's what you are doing if you just imagine one path to modernization.

Some of the code in their business logic might find its way into classes in Java or .Net. What we prefer to do is a functional breakdown of what the code is actually doing, and then try to imagine what options we have going forward. Some of it will become handwritten code, and some of it will move to those other sorts of implementations.

So, we really like to look at what the code is doing and imagine other areas that we could possibly implement those changes in. If we do that, then we have a much better approach to moving them. The worse thing to do -- and a lot of customers have this impression -- is to automatically translate the code from COBOL into Java.

Java and C# are very efficient languages for generating a function point, which is a measure of functionality. Java takes about eight or ten lines of code. In COBOL, it takes about 100 lines.

Typically, when you translate automatically from COBOL to Java, you still get pretty much the same amount of code. In actuality, you're taking the maintenance headache and making it even larger by doing that automated translation. So, we prefer to take a much more thoughtful approach, look at what the options are, and put together an incremental modernization strategy.
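
A back-of-the-envelope calculation, using the ballpark figures just cited and an assumed 10-million-line portfolio, shows why line-for-line translation preserves the maintenance burden while a function-point-based rebuild shrinks it:

    # Rough illustration using the speaker's ballpark numbers, not measurements:
    # a function point takes roughly 100 lines of COBOL but only 8-10 lines of Java.
    COBOL_LOC = 10_000_000          # hypothetical legacy portfolio size
    LOC_PER_FP_COBOL = 100
    LOC_PER_FP_JAVA = 9             # midpoint of the 8-10 range

    function_points = COBOL_LOC / LOC_PER_FP_COBOL      # ~100,000 function points
    translated_loc = COBOL_LOC                          # 1:1 automated translation keeps the bulk
    rewritten_loc = function_points * LOC_PER_FP_JAVA   # ~900,000 lines if rebuilt idiomatically

    print(f"Automated translation: ~{translated_loc:,.0f} lines to maintain")
    print(f"Idiomatic rewrite:     ~{rewritten_loc:,.0f} lines to maintain")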

Gardner: Paul Evans, this really isn’t so much pulling the plug on the mainframe, which may give people some shivers. They might not know what to expect over a period of decades or what might happen when they pull the plug.

Evans: We don't profess that people unplug mainframes. If they want to, they may plug in an HP system in its place. We’d love them to. But, being very pragmatic, which is what we like to be, it's looking at what Steve was talking about. It’s looking at the code, at what you want to do from a business process standpoint, and looking at the underlying platform.

It's understanding what quality of service you need to deliver and then understanding the options available. Even in base technologies like microprocessors, the power that can be delivered these days means we can do all sorts of things at prices, speeds, sizes, power outputs, and CO2 emissions that we could only dream of a few years ago.

The days when there was a walled-off area in the data center that no other technology could match are long gone. Now, the emphasis has been on consolidation and virtualization. There is also a big focus on legacy modernization. CIOs and IT directors, or whatever they might be, do understand that there's an awful lot of money spent on maintaining, as Steve said, handwritten legacy code that today runs the organization and needs to continue to provide these business processes.

Bite-size chunks

There are far faster, cheaper, and better ways to do that, but it has to be something that is planned for. It has to be something that is executed flawlessly. There's a long-term view, but you take bite-sized chunks out of it along the way, so that you get the results you need. You can feed those good results back into the system and then you get an upward spiral of people seeing what is truly possible with today’s technologies.

Gardner: John Pickett, are there any other misconceptions or perhaps under-appreciated points of information from the enterprise storage and server perspective?

Pickett: Typically, when we see a legacy system, what we hear, in a marketing sense, is that high-end mainframes -- and I'll just use that as an example -- can be used for consolidation. What we find is that if you're going to be moving or modernizing applications onto an open-systems environment to take advantage of the full gamut of tools and open-systems applications that are out there, you're not going to be doing that in a legacy environment. We see that the more efficient way of going down that path is onto an open-standards server platform.

Also, some of the other misconceptions that we see, again in a marketing sense, are that a mainframe is very efficient. However, if you compare that to a high-end HP system, for example, and just take a look at the heat output -- which we know is very important -- there is more heat. The difference in heat between a mainframe and an Integrity Superdome, for example, is enough to power a two-burner gas grill, a Weber grill. So, there's some significant heat there.

On the energy side, we see that the Superdome consumes 42 percent less energy. So, it's a very efficient way of handling the operating-system environment when you do modernize these applications.

Gardner: Brad Hipps, when we talk about modernizing, we’re not just modernizing applications. It’s really modernizing the architecture. What benefits, perhaps underappreciated ones, come with that move?

Hipps: I tend to think of application transformation as, in most ways, breaking up and distributing that which was previously self-contained and closed.

Whether you're looking at moving from mainframe processing to distributed processing, from distributed processing to virtualization, at the application teams themselves, which are now some combination of in-house, near-shore, offshore, and outsourced, distributed from a single building to all around the world, or at the architectures themselves, which have gone from monolithic, fairly brittle things to services-driven things.

You can look at any one of those trends and begin to speak about benefits, whether it's leveraging a better global cost basis or, on the architectural side, where the fundamental thing we're trying to do is say, "Let's move away from a world in which everything is handcrafted."

Assembly-line model

Let's get much closer to the assembly-line model, where I have a series of preexisting, trustworthy components; I know where they are, I know what they do, and my work now becomes really a matter of assembling them. They can take any variety of shapes to meet my need, because of the components I have created.

We're getting back to this idea of lower cost and increased agility. We can only imagine how certain car manufacturers would be doing if they were handcrafting every car. We moved to the assembly line for a reason, and software has typically lagged what we see in other engineering disciplines. Here, we're finally going to catch up. We're finally going to recognize that we can take an assembly-line approach to the creation of applications as well, with all the intended benefits.

Gardner: And, when you standardize the architecture, instead of having to make sure there is a skillset located where the systems are, you can perhaps bring the systems to where the different skills are?

Hipps: That’s right. You can begin to divorce your resources from the asset that they are creating, and that’s another huge thing that we see. And, it's true, whether you're talking about a service or a component of an application or whether you're talking about a test asset. Whatever the case may be, we can envision a series of assets that make an application successful. Now, those can be distributed and geographically divorced from the owners.

Gardner: Where this has been a "nice to have" or "something on the back-burner" activity, we're starting to see a top priority emerge. I've heard of some new Forrester research showing that legacy transformation is becoming the number-one priority. Paul, can you offer some more insight on that?

Evans: That’s research that we're seeing as well, Dana, and I don’t know why. ... The point is that this may not be what organizations "want" to do.

They turn to the CIO and say, "If we give you $10 million, what is that you'd really like to do." What they're actually saying is this is what they know they've got to do. So, there is a difference between what they like and what they've got to do.

That goes back to the current economic situation we started with. The pressure it's bringing to bear on people is that the time is up for just continuing to spend their dollars on maintaining the applications, as Steve and Brad talked about, and the infrastructure that John talked about. They can't just continue to pour money into that.

There has to be a bright point. Someone has got to say, "Stop. This is crazy. There are better ways to do this." What the Forrester research is pointing out is that if you go around to a worldwide audience and talk to a thousand people in influential positions, they're now saying, "This is what we 'have' to do, not what we 'want' to do. We're going to do this, we're going to take time out, and we're going to do it properly. We're going to take cost out of what we are doing today, and it's not going to come back."

Flipping the ratio

Take all the things that Steve and Brad have talked about in terms of handwritten code. Once we have removed that handwritten code, code that is far bigger than it needs to be to get the job done, it's out and finished with, and then we can start looking at economics that are totally different going forward, where we can actually flip this ratio.

Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We're not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.

Hipps: I've got just one thing to add to that, in terms of the aura of inevitability that is coming with the transformation. When you look at IT over the last 30 years, you can see that, fairly consistently, whatever time frame you pick, somewhere in the neighborhood of every seven to nine years there has been an equivalent wave of modernization. The last major one we went through was the late '90s or early 2000s, with the combination of Y2K and Web 1.0. So, sure enough, here we are, right on time with the next wave.

What's interesting is that this now number-one priority hasn't reached the stage of inevitability. I look back and think about what organizations in 2003 were still saying: "No, I refuse the web. I refuse the network world. It's not going to happen. It's a passing fancy," and whatever the case may be. Inasmuch as there were organizations doing that, I suspect they're not around anymore, or they're much smaller than they were. I do think that's where we are now.

Cloud is reasonably new, but outsourcing is another component of transformation that has been around long enough that most people have been able to look it square in the eye and figure out, "You know what? There is real benefit here. Yes, there are some things I need to do on my side to realize that benefit. There is no such thing as a free lunch, but there is a real benefit here, and I am going to suffer, if not next year, then three years from now, if I don't start getting my act together now."

Gardner: John Pickett, are there any messages from the boosters of mainframes that perhaps are no longer factors or are even misleading?

Pickett: There are certainly a couple of those. In the past, the mainframe was thought to be the harbinger of RAS -- reliability, availability, and serviceability. Many of those features exist on open systems today. It’s not something that is dedicated just to the high-end of the mainframe environment. They are out there on systems that are open-system platforms, significantly cheaper. In many cases, the RAS of these systems far exceeds what we’ll see on the mainframe.

That’s just one piece. Other misconceptions and things that you typically saw historically have been on the mainframe side, such as being able to drive a business-based objective or to be able to prioritize resources for different applications or different groups of users. Well, that’s something that has existed for a number of years on the open system side -- things such as backup and recovery and being able to provide very high levels of disaster recovery.

Misleading misconception

The misconception that this is something that can only be done in a mainframe environment is not only misleading; by not making the move to an open-system platform, you also continue to drive IT budget unnecessarily into infrastructure, budget that could instead be applied to the application modernization we have been talking about here or to the skills and people resources within the data center.

Gardner: We seem to have a firm handle on the cost benefits over time. Certainly, we have a total cost picture, comparing older systems to the newer systems. Are there more qualitative, or what we might call "soft benefits," in terms of the competitiveness of an organization? Do we have any examples of that?

Evans: What we have to think about is the target audience out there. More and more people have access to technology. We have the generation now coming up that wants it now and wants it off the Web. They are used to using social networking tools that people have become accustomed to. So, it's one of the soft, squidgy areas as people go through this transformation.

I think that we can put hard dollars -- or pounds or euros -- against this for the moment: the inclusion of Web 2.0 or Enterprise 2.0 capabilities into applications. We have customers who are now trying that, some of it inside the firewall and some of it beyond. One, this can provide a much richer experience for the user. Secondly, you begin to address an audience that is used to utilizing these things in their day-to-day life anyway.

Why, when they step into the world of the enterprise, do they have to step back 50 years in terms of capability? You just can't imagine that certain things that people require are still being done in batch mode. The real-time enterprise is what people now expect and want.

So, as people go through this transformation, not only can they do the whole plethora of things we have talked about in terms of handwritten code, mainframes, structure, and service-oriented architecture (SOA), but they can also start taking steps toward getting these applications in line and embedding them within an Internet culture.

If they start to take on board some of the newer concepts around cloud and experiment with them, they have to understand that people aren't going to just make a big leap of faith. At the end of the day, it's enterprise apps. We make things, apply things, and count things -- and people have got to continue to do that. At the same time, they need to take pragmatic steps to introduce these newer technologies that really can help them not only retain their current customer base, but attract new customers as well.

Gardner: Paul, when organizations go through this transformation, modernize, and go to open systems, does that translate into some sort of a business benefit, in terms of making that business itself more agile, maybe in a mergers and acquisition sense? Would somebody resist buying a company because they've got a big mainframe as an albatross around its neck?

Fit for purpose

Evans: Definitely. Having your IT fit for purpose is part of the inherent health of the organization. For organizations whose IT is way behind where it could be today, it's definitely part of the health check.

To some degree, if you don't want to get taken over, merged, or acquired, maybe you just let your IT sag where it is today, with mainframes and legacy apps, and nobody will want you. But then, you're back to where we were earlier. You become one of those 40 percent of companies that disappear off the face of the planet. So, it's a sort of double-edged sword: make yourself attractive and you could get merged or acquired; don't, and you're going to go out of business. I still think I prefer the former to the latter.

Gardner: Let’s talk more specifically about what HP is bringing to the table. We’ve flushed out this issue quite a bit. Is there a long history at HP of modernization?

Evans: There are two things. There is what we have done internally, within the company. We've had to eat our own dog food, in the sense that there were companies that were merged and companies that were acquired -- HP, Compaq, Digital, EDS, and so on.

It's just not acceptable anymore to run these as totally separate IT organizations. You have to quickly understand how to get this to be an integrated enterprise. It's been well documented what we have done internally, in terms of taking a massive amount of cash out of our IT operations and yet, at the same time, innovating and providing a better service, while reducing our applications portfolio from something like 15,000 to 3,000.

So, all of these things were going on at the same time, and that has been achieved within HP. Now, you could argue that we don't have mainframes, so maybe it's easier. Maybe that's true but, at the same time, modernization has been growing, and now we're right up there at the forefront of what organizations need to do to make themselves cost-effective, agile, and flexible going forward.

Gardner: John Pickett, what about the issue around standards, neutrality, embracing heterogeneity, community and open source? Are these issues that HP has found some benefits from?

Pickett: Without a doubt. When you take a look at the history of what we've been able to do, migrating legacy applications onto an open system platform, we actually have a long history of that. We continue to not only see success, but we’re seeing acceleration in those areas.

A couple of drivers that we keep seeing are really making the case for customers, not least the significant cost savings that we talked about earlier. We're talking 50 percent to 70 percent total cost of ownership (TCO) savings moving from a legacy mainframe environment over to an HP environment.

Additional savings

In addition to that, you also have the power savings. Simply by moving, the amount of energy saved is enough to light 80 houses for one year. We've already talked about the heat, and the space savings too: the footprint is about a third of what you're going to see for a high-end mainframe environment, for a similar system from HP with similar capabilities.

Why that’s important is because if customers are running out of data-center room and they’re looking at increasing their compute capacity, but they don’t have room within their data center, it just makes sense to go with a more efficient, more densely packed power system, with less heat and energy than what you’ll see on a legacy environment.

Gardner: Brad Hipps, on this issue of being able to sell from a fairly neutral perspective, based on a solution's value, does that bring something to the table?

Hipps: We alluded earlier to the issue of lock-in. If we're going to, as we do, fly under the banner of bringing flexibility and agility to an organization, it's tough to wave that banner without being pretty open about who you're going to play with and where.

Organizations have a very fine eye for what this is going to mean for me not just six months from now, but two years from now, and what it’s going to mean to successors in line in the organization. They don’t want to be painted into a corner. That’s something that HP is very cognizant of, and has been very good about.

This may be a little bit overly optimistic, but you have to be able to check that box. If you’re going to make a credible argument to any enterprise IT organization, you have to show your openness and you have to check the box that says we’re not going to paint you into a corner.

Gardner: Steve Woods, for those folks who need to get going on this, where do you get started? We mentioned that iterative nature, but there must be perhaps low-hanging fruit, demonstrations of value that then set up a longer record of success.

Woods: Absolutely. What we find with our customers is that there are various levels of maturity in the process of understanding their legacy systems. Often, we find some of them are quite mature and have gone down the road quite a bit. We offer assessments based on single applications and also portfolios of applications. We have a modernization assessment and a portfolio assessment. We also offer a best-shore assessment to ensure that you are using the correct resources.

Often, we find that we walk in, and the customers just don't know anything about what their options are. They haven't done any sort of analysis thus far. In those cases, we offer what we're calling a Modernization Opportunity Workshop.

It's very quick, usually a 4-8 hour on-site engagement, and it takes about four weeks to deliver the entire package. We use some tools that I created at HP that look at the clone code within the application. It's very important to understand the patterns of the clone code and to have visualizations. We have visual intelligence tools that very quickly allow us to see inside the system, see the duplicate source code, and provide customers with high-level cost estimates.

We use a tool called COCOMO and we use Monte Carlo simulation. We’re able very quickly to give them a pretty high-level, 30-page report that indicates the size. Often, size is something that is completely misunderstood. We have been into customers who tell us they have four million lines of code, and we actually count the code as only 400,000 lines of code. So, it’s important to start with a stake in the ground and understand exactly where you’re at with the size.
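
As a minimal sketch of that COCOMO-plus-Monte-Carlo idea, the snippet below applies the published Basic COCOMO organic-mode formula (effort = 2.4 * KLOC^1.05 person-months) and samples over an uncertainty range on the counted code size. The size range and the choice of organic mode are illustrative assumptions; the actual HP tooling and parameters are not described here.

    # Minimal sketch of a COCOMO effort estimate with Monte Carlo over size uncertainty.
    # Coefficients are the published Basic COCOMO "organic" values; the size range
    # below is illustrative only.
    import random

    A, B = 2.4, 1.05                      # Basic COCOMO organic-mode coefficients

    def effort_person_months(kloc):
        return A * (kloc ** B)

    def simulate(kloc_low, kloc_high, trials=10_000):
        samples = sorted(effort_person_months(random.uniform(kloc_low, kloc_high))
                         for _ in range(trials))
        # Return the median and an 80 percent interval as a rough planning range.
        return samples[trials // 2], samples[int(0.1 * trials)], samples[int(0.9 * trials)]

    if __name__ == "__main__":
        # e.g., measured size turns out to be roughly 350-450 KLOC, not an assumed 4 MLOC
        median, p10, p90 = simulate(350, 450)
        print(f"Median effort ~{median:,.0f} person-months (80% range {p10:,.0f}-{p90:,.0f})")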

We also do a functional decomposition to support that understanding. That's all delivered with very little impact. We know the subject-matter experts are very busy, and we try to lessen the impact on them. That's one of the places we can start, when the customer has some uncertainty and isn't even sure where to begin.

Gardner: We’ve been discussing the high penalties that can come with inaction around applications and legacy systems. We’ve been talking about how that factors into the economy and the technological shifts around the open systems and other choices that offer a path to agility and multiple-sourcing options.

I want to thank our panelists today for our discussion about the high costs and risks inherent in doing nothing around legacy systems. We've been joined by Brad Hipps, product marketer for Application Lifecycle Management and Applications Portfolio Software at HP. Thank you, Brad.

Hipps: Thank you.

Gardner: John Pickett, Enterprise Storage and Server Marketing at HP. Thank you, John.

Pickett: Thank you, Dana.

Gardner: Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Thank you, Paul.

Evans: Thanks, Dana.

Gardner: And Steve Woods, applications transformation analyst and distinguished software engineer at EDS. Thank you, Steve.

Woods: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett Packard.

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.