Monday, October 05, 2009

HP Roadmap Dramatically Reduces Energy Consumption Across Data Centers

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on significantly reducing energy consumption across data centers. Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

In this discussion, we'll examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help us learn more about significantly reducing energy consumption across data centers, we are joined by two experts from HP. Please welcome John Bennett, worldwide director, Data Center Transformation Solutions at HP. Thanks for joining, John.

John Bennett: Delighted to be here with you today, Dana. Thanks.

Gardner: We are also joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Good to have you with us, Ian.

Ian Jagger: And, equally happy to be here, Dana.

Gardner: John Bennett, let's start with you, if you don't mind. Just upfront, are there certain mistakes that energy-minded planners often make, or are there perhaps some common misconceptions that trip up those who are beginning this energy journey?

Bennett: I don't know if there are things that I would characterize as missteps or misconceptions.

We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers make is that they have this laundry list and, without any further insight into what will matter most to them, they start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

Gardner: So, there needs to be some sort of rationalization for how you approach this -- not necessarily linear, or whatever comes to mind first, but something that adds strategic benefit.

Cherry picking quick wins

Bennett: I am not even sure I'd characterize it as strategic yet. It's just understanding the business value and cherry picking the quick wins and the highest return ones first.

Gardner: Let's go and do some cherry picking. What are some of the top, must-do items that won't vary very much from data center to data center? Are there certain universals that one needs to consider?

Bennett: We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

On the facility side, and we are probably better off asking Ian to go through this list, running a data center hotter is one of the most obvious ones. I saw a survey just the other day on the Web. It highlighted the fact that people are running their data centers too cold. You should sweat in a data center.

A lot of techniques, like hot and cold aisles and looking at how you provide power to the racks and the infrastructure, are all things that can be done, but the list is well understood.

Because he is more insightful and experienced in this than I am, I'll ask Ian to identify some of the top best practices from the facilities and the infrastructure side, as well.

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

But, you're right. There has been this well-published list of best practices, and managers, be they in IT or facilities, have a lot to implement from it. Starting with the easy ones first -- such as hot and cold aisles, blanking panels, and being tidy with respect to cabling by running it under the floor -- doesn't, as you alluded to, necessarily provide the best return on investment (ROI), simply because it's a best practice.

Areas of focus

When we undertake energy analysis for our customers, we tend to find the areas of focus would be around air management and environmental control -- very much to the point you mentioned about turning up the heat with respect to handling units -- and also recommendations around electrical systems and uninterruptable power supply (UPS).

Those are the areas of primary focus, and it can drill down from there on a case-by-case basis as to what works for each particular customer.

Gardner: Ian, what causes the variability from site to site? Clearly, there are some common things here that we have talked about, but what is it specifically that differentiates organizations, and they need to be mindful that they can't just follow a routine and expect to get the same results?

Jagger: Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

But there are instances where, for example, we could say to a customer, "Shut down some of your computer-room air conditioners (CRACs)," and we would identify which ones should be shut down and how many of them. That clearly would create some significant savings, and it doesn't cost anything to do. Clearly, the ROI is much higher, because there is no capital expenditure required to shut down CRACs. That would be one good example.

Another example is placing floor grilles correctly, which would be on anybody's best practice list, and can have a significant impact in the scheme of things. So case-by-case would be the answer, Dana.

Gardner: Given that we have some best practices and some variability from organization to organization, let's look at these four basic areas and then drill down into each one. John Bennett, virtualization. What are the big implications for this? Why is this so important when we think about the total energy picture?

Bennett: If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

High utilization

This is especially a factor for industry standard servers. Historically, whether it's mainframes, HP-UX systems, or HP Integrity NonStop systems, customers are very accustomed to running those at very high utilization rates -- 70, 80, 90 percent plus.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization, from an energy-efficiency point of view we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

This can be further improved, as I mentioned earlier, by taking a look at the applications portfolio and doing application modernization, which has two benefits from an energy point of view.

One of them is that it allows the new applications to run on a modern infrastructure environment, so it can participate in the shared environment. Secondly, it allows you to eliminate legacy systems, sometimes very old systems, where very old is anywhere from 5 to 10 years in age or more, and eliminate the power consumption that those systems require.

Those are the benefits of virtualization, and very clearly anyone dealing with either energy cost issues or energy constraint issues or with a green mandate needs to be looking very seriously at virtualization.

Gardner: What sorts of paybacks are typical with virtualization? Is this a rounding error, a significant change, or is there some significant variability in terms of how it pans out?

Bennett: No, it's significant. It's not a rounding error. We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.

These are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it's a profound impact in the infrastructure environment.

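To make the consolidation arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The wattages, server counts, and consolidation ratio are illustrative assumptions for this example, not HP-published figures:

```python
# Back-of-the-envelope estimate of energy savings from server
# consolidation and virtualization. All figures are illustrative
# assumptions, not measured or vendor-published values.

def annual_energy_kwh(server_count, avg_watts):
    """Energy drawn by a server fleet over a year, in kWh."""
    hours_per_year = 24 * 365
    return server_count * avg_watts * hours_per_year / 1000.0

# Legacy estate: 20 older servers, each lightly utilized.
legacy_kwh = annual_energy_kwh(server_count=20, avg_watts=400)

# After roughly 10:1 consolidation onto modern, virtualized hosts
# running at much higher utilization.
modern_kwh = annual_energy_kwh(server_count=2, avg_watts=500)

saved_kwh = legacy_kwh - modern_kwh
print(f"Legacy fleet:     {legacy_kwh:,.0f} kWh/year")
print(f"Virtualized:      {modern_kwh:,.0f} kWh/year")
print(f"Estimated saving: {saved_kwh:,.0f} kWh/year "
      f"({saved_kwh / legacy_kwh:.0%})")
```
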
Gardner: Correct me if I am wrong, John, but virtualization helps when we want to whittle down the number of servers while we increase utilization. Doesn't virtualization also help you to expand and scale out as your demands increase, but at a level commensurate with the demand, rather than in large chunks, which may have been the case without virtualization?

Rapid provisioning

Bennett: Oh, yes. I could talk for the rest of this podcast just about virtualization benefits, so don't let me get started. But, very clearly, we see benefits in areas like flexibility and agility, to use the marketing terms, but also the ability to provision resources very quickly. We see customers moving from operational models, where it would take them weeks or months to deploy a new business service, to where they are able to do it in hours.

We see them able to shift resources to where they are needed, when they are needed, in a much more dynamic fashion.

We see improvements in quality of service as a result of those things. We actually see availability and business-continuity benefits from these. So virtualization is -- in my mind, and I have said this before -- as fundamental a data center technology as servers, storage, and networking are.

Gardner: It seems that virtualization is the gift that keeps on giving. Not only do you get a significant reduction in energy cost when you replace older systems and bring in virtualization to increase utilization but, as you point out, over time your energy consumption would track demand, given this ability to provision so effectively and to get more out of existing systems.

Bennett: Yes, absolutely.

Gardner: Do you have any examples? Do you have a specific customer, or someone that HP has worked with, who has instituted virtualization and then has come back with an energy result?

Bennett: We have a number of examples. I'll just share one example here.

The First American Corporation, America's largest provider of business information, had the requirement of being able to better align their resources to business growth in a number of business services, and they were also looking to reduce energy costs -- two very simple focuses. They implemented a consolidation and virtualization solution built around HP BladeSystem.

They are projecting that, on an annual basis, they're saving $714,000 in energy costs in the data center, and an additional $12,000 a year in endpoint power consumption outside of the data center.

Gardner: So that spells ROI pretty swiftly?

Bennett: Oh, yes, absolutely.

Gardner: Ian Jagger, let's go to you now on this next major topic -- application modernization. I've also heard this referred to as "cash for clunkers." What do we mean by that?

Investment opportunity


Jagger: There is a parallel that can be drawn there, in the sense of trading in those clunkers for cash that can be invested in modernization projects.

John has done a great job talking about virtualization and its parallel, application modernization. I'd like to pull those two together in a certain way. If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

I mentioned the word “converged” earlier. Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility where you have specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and just retain specific PODs for those that do require the highest levels of availability.

Just by doing that -- converging the facility design with application modernization -- you take millions and millions of dollars out of data center construction costs and, of course, out of the ongoing operating costs derived from burning energy to cool the facility at the end of the day.

Gardner: It sounds that with these PODs that are somewhat functionally specific we are almost mapping a service-oriented architecture (SOA) to the data center facility. Is that a fair comparison?

Jagger: Yeah. It's a case of understanding the application catalog, mapping the availability and prioritization requirements, allowing for growth, and allowing for certain levels of redundancy, so that ultimately you can build a POD structure within your data center.

You don't need UPS, for example, for everything. You don't need 2N redundancy, or twice the capacity, for all applications. They are not all that critical, so why should we treat them as if they all were?

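As a toy illustration of the idea Ian describes, the sketch below (Python) groups an application catalog into PODs by availability tier and assigns each tier a different power and UPS design. The tiers, applications, and redundancy levels are invented placeholders:

```python
# Toy sketch of mapping an application catalog into data-center
# PODs by availability tier, per the idea described above. The
# tiers, applications, and redundancy levels are all invented
# placeholders for illustration.

from collections import defaultdict

CATALOG = [
    {"app": "payments",  "availability": "critical"},
    {"app": "crm",       "availability": "standard"},
    {"app": "dev-test",  "availability": "best-effort"},
    {"app": "reporting", "availability": "standard"},
]

# Only the critical POD gets fully redundant (2N) power and UPS;
# lower tiers get progressively lighter infrastructure.
POD_DESIGN = {
    "critical":    {"power": "2N",  "ups": True},
    "standard":    {"power": "N+1", "ups": True},
    "best-effort": {"power": "N",   "ups": False},
}

pods = defaultdict(list)
for entry in CATALOG:
    pods[entry["availability"]].append(entry["app"])

for tier, apps in pods.items():
    spec = POD_DESIGN[tier]
    print(f"POD '{tier}': apps={apps}, power={spec['power']}, "
          f"UPS={spec['ups']}")
```
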
Gardner: A big part of being energy wise is really just being smart about how you understand your requirements and then apply the resources -- not too much, not too little -- sort of the Goldilocks approach -- just right.

Talk to your utility

Jagger: One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action on reducing your energy not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Gardner: Perhaps to reverse engineer from the energy source itself and find the best ways to work with that.

Jagger: Right.

Gardner: John Bennett, is there anything that you would like to add to the topic of application modernization for energy conservation?

Bennett: I'd like to comment a bit about the point made earlier about thinking smarter. What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio, which Ian has spoken of, and the infrastructure from a server, storage, and networking perspective. It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

In considering them comprehensively, and in working with the facilities team as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings -- to the organization.

Gardner: Let's move on to our next major category -- data center infrastructure best practices. Again, this is related to these issues of virtualizing and finding the right modernization approaches. Are there ongoing ways in which business as usual in the data center does not work to our advantage when we consider energy? Let's start with you, Ian.

Jagger: As we talked about earlier in terms of best practices, it doesn't necessarily follow that a given best practice returns the best results. I think there has to be an openness on the part of the company itself about what actions it should take, with respect to driving down energy costs and ensuring solid ROI on any capital expenditure that's required to do that.

Just for example, I mentioned earlier that shutting off CRAC units would be one of the best practices, and turning the temperature up produces certain results.

Payback opportunity

I am thinking of one particular customer, where we suggested that they shut down three CRAC units. Now, that would give them a certain saving, but the cost of some of the work that would have to be done with that equaled the amount of savings for the first year. So, there is a one-year payback there, and of course the rest is all payback after that point.

Yet, with the same customer, we also advised that if they used chillers with variable-speed compressors, instead of constant-speed compressors, there would certainly be a capital requirement. In the case of this customer, it was about $300,000. But the return on that was $360,000 in one year.

That investment created a larger return on payback than simply shutting down the three CRAC units or indeed the correct placement of floor grilles within the data center.

That was a case not of best practice, but of something having a higher impact than best practice itself. It's not easy for customers to get into the detail of this. This is where expertise comes into it. We need to go beyond the typical list of best practices to areas of expertise, where that expertise can highlight specific areas of payback and ROI, and where the business or IT can actually justify the cost of doing the work.

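The payback comparison Ian walks through reduces to simple arithmetic. Here is a minimal sketch in Python, using the chiller figures quoted above; the CRAC saving amount is an invented placeholder, since the conversation only states that first-year work costs equaled first-year savings:

```python
# Simple payback comparison for the two measures discussed above.
# The chiller figures ($300,000 capital, $360,000/year return) come
# from the conversation; the CRAC saving below is an invented
# placeholder, since only "about one year" is stated.

def simple_payback_years(capital_cost, annual_saving):
    """Years until cumulative savings cover the up-front cost."""
    return capital_cost / annual_saving

chiller_payback = simple_payback_years(300_000, 360_000)
crac_payback = simple_payback_years(50_000, 50_000)  # illustrative

print(f"Variable-speed chiller retrofit: {chiller_payback:.2f} years")
print(f"CRAC shutdown work:              {crac_payback:.2f} years")
```
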
Gardner: John Bennett, when it comes to leveraging expertise in order to bring about these efficiencies and make the right choices on how to invest on this ongoing best practices continuum, how does HP enter into this?

What are some ways in which the expertise that you've developed as a company working with many customers over many years, come to bear on some of these new customers or new instances of requirements around energy?

Bennett: We can bring it to bear in a number of ways. For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking that we talked about earlier. We'll ask Ian perhaps to talk a little more about that service in a moment.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop, again, not necessarily to “do it for a customer”, but to work with the customer in defining what their personal roadmap would look like.

One element that is considered is the facilities and the data centers themselves. The outcome may very well be, "You need a data-center strategy project. You need to have an analysis done of the applications portfolio and business services, to understand how many data centers you have, where they should be, what kinds they should be, and what you should do with the data centers you have." Or, it may be that the data centers are not an issue for that particular customer.

Gardner: Another big area where cost plays into these operational budgets, the ongoing budgets, is labor. Is there a relationship between labor in IT operations and energy? Is there some way for these two very large line items within the IT budget, labor and energy, to play off of one another in some productive manner?

More correlative than causative

Bennett: Well, there is a strong relationship, especially on the infrastructure best practices that impact labor. I would treat it as correlative rather than causative, but as you ruthlessly simplify and standardize your environment, as you move to a common shared infrastructure, you actually can significantly reduce your management costs and begin the process of shifting your IT budget away from management and maintenance.

We see most customers spending 70 percent plus of their operational budget on management and maintenance; the opportunity is flipping that around, to where they spend 70 percent of their operational budget on business projects. So, there is a strong set of benefits that come on the people side, along with the energy side.

Now, for organizations that have green strategies in addition to having strategies for energy efficiency, one can use IT to help the organization be greener. Some very simple things are to make use of things like HP's Halo rooms for video conferencing and effective meetings without travel and to set up remote access with the corresponding security, so that people can work from home offices or work remotely. A lot of things can be done with green benefits as well as energy benefits.

Gardner: John, just briefly for our listeners, how do you distinguish green from energy conservation? What's the breakdown between them?

Bennett: Well, I am not sure how to characterize the breakdown, but energy is very typically focused either on reducing direct energy cost or reducing energy consumption.

The broader green benefits will tend to look at areas like sustainability, or having what some people refer to as a neutral carbon footprint. So, if you look at your supply chain backwards and out to your customers, you're not consuming as much of the earth's resources in producing your goods and services, and you are helping your people not consume resources needlessly in delivering the business services that they provide to their customers.

It's about recycling practices, using recycled goods, packaging efficiency, cutting out paper consumption, changing business processes, and using digitization. There are a lot of things one can do that are more than just "pure energy savings." It falls back often to energy, but the whole idea of sustainability is a little bit of a different concept.

Gardner: Ian, I have heard many times the issue around cable management come up in best practices as well. What's the relationship between energy and cable management in a complex data center environment?

Jagger: Cable management, as you say, is one of those best-practice areas. There are a couple of ways you can look at that. One is from the original plant design with respect to cable ducting and just being accurate with respect to the design of that.

Continuous operation

The second part is running an operation continuously. That operation is dynamic, and so it's never going to stand still. Poor practice starts to take over after a while, and what was once well-designed and perhaps tidy is no longer the case. The cables are going to run here and there, you move this and you move that, and so on. So, that best practice isn't sustained.

You can simply just move back in and just take a fresh look at that and say, "Am I doing what I need to be doing with respect to cabling?" It can have a significant impact, because cabling does interrupt the airflows and air pressures that are running underneath the raised floor.

It's simply a case of getting back to the best practice in terms of how it was originally designed with respect to cable management. There are products in there that we ourselves sell, not just from a design perspective, but racking products that enable that to happen.

Gardner: On the topic of good design, let's move to our fourth major area -- data center building and facility planning. This is for those folks who might not want to, but need to, build a whole new data center. Or, they may want to consolidate numerous data centers into a single facility, or think about moving or replacing one. A lot of different scenarios can lead to this.

How about starting with you, John Bennett? What do you need to consider when you are going to a whole new facility? I would think the first thing would be where to put it -- the location.

Bennett: Actually, before you get to choosing the location, the real first question is, "What type of facility do you need?" Ian talked earlier about the hybrid data center concept, but the first questions are how big it needs to be and what it has to be to meet and support the needs of the business. That's the first driver.

Then, you can get into questions of location. One of the interesting things about location is that there is no right answer, and there is no right answer because of qualitative aspects of the customer's decision making that come into play.

There are a lot of customers, for example, who have, and run, data centers downtown in cities like New York, Tokyo and London -- very expensive real estate, but it's important to the business to have their data centers near their corporate offices.

There are companies that run their data centers in remote locations. I know a major bank on the West Coast that runs their primary data centers in Iowa. You can have strategies for having regional data centers. I think that the Oracle data center strategy is to have data centers around the world, in three locations.

HP has six data centers of its own -- three pairs -- located in different parts of the United States, providing worldwide services.

Environmental benefits

You can choose to locate them at places that have environmental benefits, like geothermal benefits. We have a new data center that we are opening up in the UK, which is incredibly energy efficient -- perhaps Ian can talk briefly about that -- taking advantage of local winds. You can take advantages of natural resources from a power point of view.

Gardner: The common philosophy here is to be highly inclusive, bringing in as many aspects as possible that have an impact on the decision and on long-term efficiency. This is what needs to take place from the top down.

Bennett: There are a lot of factors at play. The priorities and weightings of those for individual customers will vary quite significantly. So all of those need to be taken into consideration.

If you are doing a new data center project, chances are this is something that is not just going to your CFO for approval, but probably to the board of directors. It's something that not only is going to have to have a business case in its own right, but have to meet the corporate hurdle rates and be viewed as an opportunity cost for the organization. These are very fundamental business decisions for many customers.

Gardner: Ian Jagger, when we look to these new facilities, factoring in a much lower energy footprint than may have been the case with older facilities might help make that decision, and might prompt that board to move sooner rather than later.

Jagger: Right. Going to the point of actually where to locate it, some companies do have preferences for a data center to be located adjacent to where they are actually conducting business. That doesn't necessarily follow for everyone.

But the play of climate on a data center and energy efficiency is truly significant. We have a model within our Energy Efficiency Analysis that will model for our customers the impact of where a data center could be based, based on climate zone and the relative impact of that.

The statistics are out there in terms of breaking up climate zones into eight regions -- One being the hottest and Eight the coldest -- and then applying humidity metrics on top of that as well. Just going from one to the other can double or even triple the power usage effectiveness (PUE) rating, which is the total energy coming into the data center divided by the energy actually used to power the IT equipment. Siting the data center can have an enormous impact on cost and efficiency.

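For readers who want the metric spelled out: PUE is total facility power over IT equipment power, so lower is better and 1.0 is the theoretical floor. A minimal sketch in Python, with illustrative loads chosen to echo the 2 versus 1.4 comparison Ian makes later:

```python
# Power usage effectiveness (PUE) = total facility power / IT power.
# A PUE of 1.0 would mean every watt entering the building reaches
# the IT equipment. The loads below are illustrative.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# The same 1,000 kW IT load sited in two climate zones: the cooler
# site needs far less mechanical cooling overhead.
hot_site = pue(total_facility_kw=2000, it_equipment_kw=1000)   # 2.0
cool_site = pue(total_facility_kw=1400, it_equipment_kw=1000)  # 1.4

print(f"Hot-climate site PUE:  {hot_site:.1f}")
print(f"Cool-climate site PUE: {cool_site:.1f}")
```
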
Gardner: I imagine that your thoughts earlier about the PODs, and the differentiation within the data center based on certain high-level requirements, could also now be brought to bear, along with cabling, when you are planning a new facility -- something that you might not have been able to retrofit into an older one.

Rates of return

Jagger: It's easier for sure to design that into a new facility than it is to retrofit it to an old one, but that doesn't exclude applying the principle to old ones. You would just get to a point where you have a diminishing rate of return in terms of the amount of work that you need to do within an older data center, but certainly you can apply that.

The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.

Then, you need a plan that shows those costs and savings and the priorities in terms of structure and infrastructure, have that work in a converged way with IT, and of course the payback on the investment that's required to build it in the first place.

Gardner: I wonder if there are any political implications around taxation, carbon footprint, and cap-and-trade types of legislation. Any thoughts about factoring location and new data centers in with some of those issues that also relate to energy?

Bennett: Certainly, there are. The UK, for example, already has regulations in place for new buildings that would impact a new data center design project. There is a Data Center Code of Conduct standard in the European Union. It's not regulation yet, but many people think that these will be common in countries around the world -- sooner rather than later.

Gardner: So, yet another indication that getting a full comprehensive perspective when considering these energy issues is very important.

Let's go back to examples. Do we have some instances where people have created entirely new data centers, done the due diligence, looked at these varieties of perspectives from an energy point of view, and what's been the result? Are there some metrics of success to look at?

Jagger: I think John spoke earlier about a data center we recently built in the UK. The specific site was on the Northeast coast of the UK. I know the area well.

Bennett: It sounds like you might, Ian.

Jagger: The highly chilled air coming off the sea has a significant part to play in the cooling efficiency of the data center, because we have simply taken that air and are using it to chill the data center. There are enormous efficiencies there.

We've designed data centers using geothermal activity. Iceland is a classic. Iceland sets itself up as, "Come to us. Bring your data center to us, because we can take advantage of the geothermal resources we have in place."

Examining all factors

To slightly argue against that, there are a number of data centers being sited in locations like Arizona, where you would consider the cost of cooling the data center to be much greater. Well, the humidity factor plays into that, because there is relatively low humidity there.

The other factor that comes into that is how you work with the utility company and what the utility rates are -- how much you are paying per kilowatt-hour for energy. Still other factors come into play, like general security with respect to the data center.

There are lots of instances where siting the data center is determined by the political considerations that you've talked about. It could be a matter of taking advantage of a natural resource. It could be a matter of where incentives are greater. There are many, many reasons. This would be part of any study, and the modeling that I talked about should take it all into account.

Gardner: So, clearly, there are many, many variables and a great deal of complexity. Having a global perspective and a great deal of experience would certainly prove very productive when moving into this.

Jagger: Just to give you a specific example, we recently ran an analysis for a company based in Arizona. They were interested in understanding what the peer comparison would be for other companies in a similar climate zone -- how efficient were they in comparison to peers that they could correctly compare themselves to?

You can look at energy efficiency, but part of that game is in understanding your relative efficiency compared to others. What is it that you consider efficient? A data center with a PUE of 2 may be incredibly efficient, compared to a data center with a PUE of 1.4, based on climate location. In other words, the one with a PUE of 2 is actually more efficient than the one with 1.4, because of the influence of climate. If they were peer to peer, it would reflect that.

Gardner: How does an organization begin? We've talked about new data centers, modernization, virtualization, and refining and tuning best practices. Any thoughts on how to get started and where some valuable resources might reside?

Do you have a plan?

Jagger: To me, the only question would be whether you're improving efficiency according to a plan. Do you know the business benefit and the ROI of each improvement that you would like and would consider? If you don't start at that point, you're going to get lost. So, what is the plan that you are looking to follow, and what is the business benefit that would follow from that plan?

Bennett: That plan derives from having a data center strategy, in the positive sense of the word, which is understanding the business strategy and its plans going forward. It's understanding how the business services provided by IT contribute to that business strategy and then aligning the data centers as one of many assets that come into play in delivering those business services.

We see a lot of customers who have either very aged data center strategies or don't have formal data center strategies, and, as a result, aren't able to maximize the value that they deliver to the organization.

Jagger: You may have noticed a theme throughout this podcast from John and me, one of convergence or synchronization between IT and the facilities. I think that's apparent.

Don't necessarily focus on IT as a starting point. At the end of the day, typically, the power consumed by an average data center is actually not going to the servers, but to cooling, fans, and lighting -- the non-IT-productive elements. Less than half would be going to the servers.

So, look at some of the other areas beyond IT itself. Those generally would be infrastructure areas.

You've also got to consider how you're going to measure this. How do you look at measuring your efficiency? Some level of energy automation, and of discovery and measurement of energy use, should be built in.

Gardner: So, that falls back into the realm of IT financial management.

Jagger: Right.

Gardner: We have been discussing ways in which you can begin realistically reducing energy consumption across data centers -- old data centers and new data centers -- and applying good practices, regardless of their age or location.

Helping us understand how to move toward more conservative use of energy, we have been joined by John Bennett, worldwide director for Data Center Transformation Solutions at HP. Thank you, John.

Bennett: My pleasure, Dana. Thank you.

Gardner: We've also been joined by Ian Jagger, worldwide marketing manager for Data Center Services at HP. Thank you, Ian.

Jagger: You are very welcome, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Transcript of a sponsored BriefingsDirect podcast on strategies for achieving IT energy efficiency. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Part 2 of 4: Web Data Services Provide Ease of Data Access and Distribution from Variety of Sources, Destinations

Transcript of a sponsored BriefingsDirect podcast, one of a series on web data services, with Kapow Technologies, with a focus on information management for business intelligence.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Kapow Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how to make the most of web data services for business intelligence (BI). As enterprises seek to gain better insights into their markets, processes, and business development opportunities, they face a daunting challenge -- how to identify, gather, cleanse, and manage all of the relevant data and content being generated across the Web.

In Part 1 of our series, we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications in recent years. As the recession forces the need to identify and evaluate new revenue sources, businesses need to capture such web data services for their BI to work better and more fully.

Enterprises need to know what's going on and what's being said about their markets. They need to share those web data service inferences quickly and easily across their internal users. The more relevant and useful the content that enters into BI tools, the more powerful the BI outcomes -- especially as we look outside the enterprise for fast-shifting trends and business opportunities.

In this podcast, Part 2 of the series with Kapow Technologies, we identify how BI and web data services come together, and explore such additional subjects as text analytics and cloud computing.

So, how do you get started, and how do you affordably bring web data services to BI and business consumers as intelligence and insights? Here to help us explain the benefits of web data services and BI is Jim Kobielus, senior analyst at Forrester Research.

Jim Kobielus: Hi, Dana. Hello, everybody.

Gardner: We're also joined by Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. Welcome, Stefan.

Stefan Andreasen: Thank you, Dana. I'm glad to be here.

Gardner: Jim, let's start with you. Let's take a look at what's going on in the wider BI field. Is it true that the more content you bring into BI the better, or are there trade-offs, and how do we manage those trade-offs?

The more the better

Kobielus: It's true that the more relevant content you bring into your analytic environment the better, in terms of having a single view or access in a unified fashion to all the information that might be relevant to any possible decision you might make within any business area. But, clearly, there are lots of caveats, "gotchas," and trade-offs there.

One of these is that it becomes very expensive to discover, to capture, and to do all the relevant transformation, cleansing, storage, and delivery of all of that content. Obviously, from the point of view of laying in bandwidth, buying servers, and implementing storage, it becomes very expensive, especially as you bring more unstructured information from your content management system (CMS) or various applications from desktops and from social networks.

So, the more information of various sorts you bring into your BI or analytic environment, the more expensive it becomes from a dollars-and-cents standpoint. It also becomes a real burden from the point of view of the end user, the consumer of this information. They are swamped. There's all manner of information.

If you don't implement your BI environment, your advanced analytic environment, or applications in a way that helps them to be more productive, they're just going to be swamped. They're not going to know what to do with it -- what's relevant or not relevant, what's the master reference, what's the golden record versus what's just pure noise.

So, there is that whole cost on productivity, if you don't bring together all these disparate sources in a unified way, and then package them up and deliver them in a way that feeds directly into decision processes throughout your organization, whether HR, finance, or the like.

Gardner: So, as we look outside the organization to gain insights into what market challenges organizations face and how they need to shift and track customer preferences, we need to be mindful that the fire hose can't just be turned on. We need to bring in some tools and technologies to help us get the right information and put it in a format that's consumable.

Kobielus: Yes, filter the fire hose. Filtering the fire hose is where this topic of web data services for BI comes in. Web data services describes that end-to-end analytic information pipelining process. It's really a fire hose that you filter at various points, so that when end users turn on their tap they're not blown away by a massive stream. Rather, it's a stream of liquid intelligence that is palatable and consumable.

Gardner: Stefan, from your perspective in working with customers, how wide and deep do they want to go when they look to web data services? What are we actually talking about in terms of the type of content?

Andreasen: Referring back to your original question, where you talk about whether we need more content, and whether that improves the analysis and results that analysts are getting, it's all about, as Jim also mentioned, the relevance and timeliness of the data.

There is a fire hose of data out there, but while some of that data is flowing easily, some of it might only be dripping, and some might not be accessible at all. Maybe I should explain the concept.

Think about it this way. The relevant data for your BI applications is located in various places. One is in your internal business applications. Another is your software-as-a-service (SaaS) business application, like Salesforce, etc. Others are at your business partners, your retailers, or your suppliers. Another one is at government. The last one is on the World Wide Web in those tens of millions of applications and data sources. There is very often some relevant information there.

Accessible via browser

Today, all of the data that I just described is more or less accessible in a web browser. Web data services allow you to access all these data sources, using the interface that the web browser is already using, and deliver the results in a real-time, relevant way into SQL databases, directly into BI tools, or even as service-enabled, encapsulated data. The benefit is that IT can now better serve the analysts' need for new data, which is almost always the case.

BI projects happen in two ways. One is that you build a completely new BI system, with brand-new reports and new data sources. That's the typical BI project.

What's even more important is the incremental, daily improvement of existing reports. Analysts sit there, they find some new data source, they have their report, and they say, "It would be really good if I could add this column of data to my report, maybe replace this data, or get this data in real-time rather than just once a week." It's those kinds of improvements that web data services can also really help with.

Gardner: Jim Kobielus, it sounds like we've got two nice opportunities here. One is the investments that have already been made in BI internally, largely for structured data. Now, we have this need to look externally and to look at the newer formats internally around web content and browser-based content. We need to pull these together.

Kobielus: There are a lot of trends. One of them is, of course, self-service mashups by end users of their own reports, their own dashboards, and their own views of data from various sources, as well as their data warehouses, data marts, OLAP cubes and the like.

But, another one gets to what you're asking about, Dana, in terms of trends in BI. At Forrester, we see traditional BI as a basic analytics environment, with ad-hoc query, OLAP, and the like. That's traditional BI -- it's the core of pretty much every enterprise's environment.

Advanced analytics, building on that initial investment and getting to this notion of an incremental add-on environment is really where a lot of established BI users are going. Advanced analytics means building on those core reporting, querying, and those other features with such tools as data mining and text analytics, but also complex event processing (CEP) with a front-end interactive visualization layer that often enables mashups of their own views by the end users.

When we talk about advanced analytics, that gets to this notion of converging structured and unstructured information in a more unified way. Then, that all builds on your core BI investment -- smashing the silos between data mining and text mining that many organizations have implemented for good reasons. These are separate projects, probably separate users, separate sources, separate tools, and separate vendors.

We see a strong push in the industry towards smashing those silos and bringing them all together. A big driver of that trend is that users, the enterprises, are demanding unified access to market intelligence and customer intelligence that's bubbling up from this massive Web 2.0 infrastructure, social networks, blogs, Twitter and the like.

Relevant to ongoing activities

That's very monetizable and very useful content to them in determining customer sentiment, in determining a lot of things that are relevant to their ongoing sales, marketing, and customer service activities.

Gardner: So, we're not only trying to bring together the best of traditional BI and this large pool of valuable information from web data services. We're also trying to extend the benefits of BI beyond just the people who can write a good SQL query -- the proverbial folks in the white lab coats behind the glass windows. We're trying to bring those BI analytics out to a much larger class of people in the organization.

Kobielus: Exactly. SQL queries are the core of traditional BI and data warehousing in terms of the core access language. Increasingly, in the whole advanced analytics space, SQL is becoming just one of many access techniques.

One might, in some ways, describe the overall trend as toward more service-oriented architecture (SOA), oriented access of disparate sources through the same standard interfaces that are used everywhere else for SOA applications. In other words, WS/XML, WSDL, SOAP, and much more.

So, SOA is coming to advanced analytics, or is already there. SOA, in the analytics environment, is enabled through a capability that many data federation vendors provide. It's called a "semantic virtualization layer." Basically, it's an on-demand, unified roll up of disparate sources.

It transforms them all to a common set of schemas and objects, which are then wrapped in SOA interfaces and presented to the developer as a unified API or service contract for accessing all this disparate data. SOA really is the new SQL for this new environment.

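To make the "semantic virtualization layer" idea tangible, here is a toy sketch in Python: two disparate sources are normalized on demand to one common schema behind a single function, which is what such a layer does at scale before wrapping the result in SOA interfaces. The source formats and field names are invented for illustration:

```python
# Toy illustration of a "semantic virtualization layer": disparate
# sources are normalized on demand to one common schema and served
# through a single interface. The source formats and field names
# here are invented for illustration; real products do this
# declaratively and wrap the result in SOA/web-service interfaces.

from typing import Dict, List

def from_warehouse(row: tuple) -> Dict:
    """Rows from a SQL warehouse: (customer_id, region, revenue)."""
    cid, region, revenue = row
    return {"customer": cid, "region": region, "revenue": revenue}

def from_web_feed(item: Dict) -> Dict:
    """Items from a partner's JSON feed, with different field names."""
    return {"customer": item["acct"],
            "region": item["geo"],
            "revenue": item["sales_usd"]}

def unified_view(warehouse_rows, feed_items) -> List[Dict]:
    """One common set of objects, regardless of where the data lives."""
    return ([from_warehouse(r) for r in warehouse_rows] +
            [from_web_feed(i) for i in feed_items])

view = unified_view(
    warehouse_rows=[("C-001", "EMEA", 120_000)],
    feed_items=[{"acct": "C-002", "geo": "AMER", "sales_usd": 95_000}],
)
print(view)
```
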
Gardner: Stefan, what is holding back organizations from being able to bring more of this real-time, highly actionable information vis-à-vis web services? What's preventing them from bringing this into use with their BI and analytics activity?

Andreasen: First, let me comment on what Jim said, and then try to answer your question. Jim's comment about SOA as common to BI is really spot on.

The world is more diverse

Traditionally, for BI, we've been trying to gather all the data into one unified, centralized repository and accessing the data from there. But the world is getting more diverse, and the data is spread across more and different silos. What companies realize today is that they need service-level access to the data where it resides, rather than trying to assemble it all.

So, tomorrow's data stores for BI, and today's as well -- and I'll give you an example -- are really a combination of accessing data in your central data repositories and accessing data where it resides. Let me explain that with an example.

One Fortune 500 financial services company spent three years trying to build a BI application that would access data from their business partners. The business partners are big banks spread all over the U.S. The effort failed, but they had to solve this problem, because it was a legal and regulatory necessity for them.

So, they had to do it with brute force. Basically, they had analysts logging into their business partners' web sites and business applications, and copying and pasting those data into Excel to deliver those reports.

Finally, we got in contact with them, and we solved that problem. Web data services can encapsulate, or wrap, the data silos residing with their business partners into services -- SOAP services, REST services, etc. -- and thereby provide automated access to the data directly in the BI tool. So, the problem they had tried to solve for three years could now be solved with data services, and it is running really successfully in production today.

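In the spirit of what Stefan describes -- turning a partner web site that analysts used to copy and paste from into an automated feed -- here is a generic, simplified sketch. It is not Kapow's actual API; it uses common open-source Python libraries (requests, beautifulsoup4, flask), and the partner URL and table layout are hypothetical:

```python
# Generic sketch of wrapping a web page as a REST feed, in the
# spirit of what is described above. This is NOT Kapow's API; it
# uses common open-source libraries (requests, beautifulsoup4,
# flask), and the partner URL and table layout are hypothetical.

import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

app = Flask(__name__)

PARTNER_URL = "https://partner.example.com/daily-positions"  # hypothetical

def scrape_positions():
    """Fetch the partner's report page and lift rows from its HTML table."""
    html = requests.get(PARTNER_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table#positions tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 2:
            rows.append({"account": cells[0], "balance": cells[1]})
    return rows

@app.route("/api/positions")
def positions():
    # What was a manual copy-and-paste job becomes an endpoint a
    # BI tool can poll on a schedule.
    return jsonify({"positions": scrape_positions()})

if __name__ == "__main__":
    app.run(port=8080)
```
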
Kobielus: Dana, before we go to the next question, I want to extend what Stefan said, because that's very important to understanding this whole space. This new paradigm, where SOA is already here in advanced analytics, is enabled by mashups. I published a report recently, called Mighty Mashups, that talks about this trend.

You need two core things in your infrastructure to make this happen. One is data mashups. In the back end, in the infrastructure, you need to have orchestrated integration, transformations, consolidation, and joining among disparate data sets. Then, you expose those composite data objects as services through SOA.

Then, in the front end, you need to enable end users to have access to these composite data objects through a registry, or whatever you call it, that's integrated into the environments where the user actually does work, whether it's their browsers/portal, Excel, or Microsoft Office environment. So, it's the presentation mashup on the user front end, and data mashup -- a.k.a. composite data objects -- on the back end to make this vision a reality.

Gardner: So, what's been holding back this ability to use a variety of different data types, content types, and data services in relation to BI has been proprietary formats, high cost and complexity, laborious manual processes, perhaps even spreadsheets, and a little older way of presenting information. Is that fair, Stefan?

Andreasen: I think so, yes. This is also where web data services technology comes into play. Who knows best what data they want? It's the analysts, right? But who delivers the data? It's the IT department.

Tools are lacking

Today, the IT department often lacks the tools to deliver the custom feeds that the line of business is asking for. But, with web data services, you can actually deliver these feeds. The data being asked for is almost always data the analysts already know, see, and work with in their business applications, with their business partners, etc. They work with the data. They see it in their browsers, but they cannot get the custom feeds. With a web data services product, IT can deliver those custom feeds in a very short time.

Let me use an example here again. This is a real story. Suppose I am the CEO of one of the largest network equipment manufacturers in the world. I am running a really complex business, where I need to understand the sales figures and the distribution model. I possibly have hundreds of different systems and variables I need to look at to run my business.

Another fact is that I am busy. I travel a lot. I'm often in an airport or somewhere else where I don't have access to my systems. When I finally get access, I have to open my laptop, get on the 'Net, and pull up my report.

What we did here was take our product, service-enable the relevant reports, build a BlackBerry front end to that, and deliver it in three hours, from start to finish. So, suddenly, in a very agile fashion, the CEO could reach his target and look at his data anywhere he had wireless access.

Gardner: It must be very frustrating for these analysts, business managers, and business development people to be able to see content and data out on the web through their browser, but not be able to get it into context with their internal BI systems, and get those dashboards and views that allow a much fuller appreciation of what's really going on.

Andreasen: It's almost absurd. Think about it. I'm an analyst and I work with the data. I feel I own the data. I type the data in. Then, when I need it in my report, I cannot get it there. It's like owning the house, but not having the key to the house. So, breaking down this barrier and giving them the key to the house, or actually giving IT a way to deliver the key to the house, is critical for the agility of BI going forward.

Kobielus: I agree. Here's an important point I want to make as well. The key to making this all happen, to making this mashup vision a reality in the final analysis, is expanding the flexibility of your data or source discovery capabilities within the infrastructure.

Most organizations that have a BI environment have one or more data warehouses aggregating and storing the data and they've got pre-configured connections and loading of data from specific sources into those data warehouses. Most users who are looking at reports in their BI environment are looking only at data that's pre-connected, pre-integrated, pre-processed by their IT department.

The user feels frustration, because they can go on the Web and into Google and see the whole universe of information that's out there. So, for the mashup vision to become reality, organizations have to go the next step.

Much broader range

It's good to have these pre-configured connections through extract, transform and load (ETL) and the like into their data warehouse from various sources. But, ideally, there should also be feeds in from various data aggregators. There are many commercial data aggregators out there who can provide discovery of a much broader range of data types -- financial, regulatory, and whatnot.

Also, within this ideal environment there should be user-driven source discovery through search, through pub-sub, and a variety of means. If all these source-discovery capabilities are provided in a unified environment with common tooling and interfaces, and are all feeding information and allowing users to dynamically update the information sets available to them in real-time, then that's the nirvana.

That means your analytic environment is continuously refreshed with information that's most relevant to end users and the decisions they are making now.

Gardner: So, we've identified the problem, and that's bringing the best of web services and web data into the best of what BI does, and then expanding the purview of that beyond the white-lab-coats crowd to the people who can take action on it. That's great. But, with that fire hose, we can't just open up access to these data services without what the IT department considers critical. One concern is keeping costs down, because we're still in a recession and budgets are tight.

We also need to have governance. We need to have manageability. We need to make the IT people feel that they can be responsible in opening up this filtered fire hose. So how do we do that, Stefan? How do we move from the purely static web to enterprise-caliber web data services?

Andreasen: Thank you for mentioning that. Jim, to get back to you on mashups, that's really relevant. Let's just look at the realities in IT departments today. They're probably understaffed. They've probably got budget cuts, but they have more demand from lines of business, and they probably also have more systems they have to maintain. So, they're being pushed from all sides.

What's really necessary here is a new way of solving this problem. This is where Kapow and web data services come in, as a disruptive new way of solving the problem of delivering the data -- the real-time, relevant data that the analyst needs.

The way it works is that, when you work with the data in a browser, you see it visually, you click on it, and you navigate tables and so on. The way our product works is that it allows you to instruct our system how to interact with a web application, just the same way as the line of business user.

This means that you access and work with the data in the same world in which the end users see it. It's all done with no coding. It's all visual, all point and click. With our product, any IT person can turn data you see in a browser into a real feed, a custom feed, in minutes or a few hours -- for something that would typically take days, weeks, or months, or might even be impossible otherwise.
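
Conceptually, what that visual, point-and-click instruction produces is something like the following hand-rolled approximation -- not Kapow's generated artifact, and with an invented URL -- which navigates to the page the business user sees, picks out the table, and republishes it as a structured feed:

```python
# A hand-rolled approximation of the generated "robot": fetch the page,
# extract the table the user sees, and emit it as a CSV feed. The URL and
# the assumption that the data sits in plain <td> cells are illustrative.
import csv
import sys
import urllib.request
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text of every <td> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self.row, self.in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.row = []
        elif tag == "td":
            self.in_cell = True
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.row:
            self.rows.append(self.row)
    def handle_data(self, data):
        if self.in_cell:
            self.row.append(data.strip())

html = urllib.request.urlopen("https://portal.example.com/report").read().decode()
parser = TableExtractor()
parser.feed(html)
csv.writer(sys.stdout).writerows(parser.rows)  # the "custom feed"
```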

Hand in hand

So a mashup is really an agile business application, a situational application. How can you make situational BI without agile data, without situational data? They basically go hand in hand. For mashups to deliver on the promise, you really need a way to deliver the data feeds in a very agile fashion.

Gardner: But what about governance and security?

Andreasen: Web data services access the data the way you do from a web browser. All data resides in a database somewhere -- inside your firewall, at a customer, at a partner, or elsewhere. That database is very secure. There's no way to access the database without going through tedious processes and procedures to open a hole in that firewall.

The beauty with web data services is that it's really accessing the data through the application front end, using credentials and encryptions that are already in place and approved. You're using the existing security mechanism to access the data, rather than opening up new security holes, with all the risk that that includes.
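
In code terms, "going through the application front end" looks roughly like the sketch below -- authenticating the way a browser user would, with credentials that already exist, rather than opening a new path to the database. The endpoints and form fields are illustrative assumptions:

```python
# Sketch: log in through the application's existing web form, over its
# existing HTTPS encryption, using credentials already issued and approved,
# then pull the same report a user would see in the browser.
import requests

session = requests.Session()

# Authenticate exactly as the browser user does (hypothetical endpoint/fields).
session.post(
    "https://partner.example.com/login",
    data={"username": "analyst1", "password": "********"},
    timeout=30,
)

# Once authenticated, fetch the report page through the front end.
report = session.get("https://partner.example.com/reports/positions", timeout=30)
print(report.text)
```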

Gardner: Jim, from some of the reports that you've done recently, what are customers, the enterprise customers, telling you about what they need in terms of better access to web data services, but also mindful about the requirements of IT around security and governability and so forth?

Kobielus: Right, right. The core theme I'm hearing is that mashups -- user self-service development and maintenance of views on disparate data -- are very, very important, for lots of reasons. One, of course, is speeding the delivery of analytics and allowing users to personalize it, and so forth. But, mashups without IT control are essentially chaos. And, mashups without governance are an invitation to chaos.

What does governance mean in this environment? Well, it means that users should be able to mashup and create their own reports and dashboards, but, from the perspective of the companies that employ them, they should only be able to mashup from company-sanctioned sources, such as data warehouses, data marts, and external sources.

They should be able to mashup only those data tables, records, or fields that they have authorized access to. They should only be able to mashup within the bounds of particular templates, reports, and dashboards that are sanctioned by the company and maintained by IT. And there should be ongoing monitoring of access, utilization, and refreshes.

Then, users should be able to share their mashups with other users to create ever more composite mashups, but they should only be able to share data analytics that the recipient has authorized access to.
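
A toy sketch of those governance checks -- sanctioned sources only, and only authorized fields -- might look like this, with the policy tables and field names invented for illustration:

```python
# Toy governance filter: admit only company-sanctioned sources, then
# project away any fields the user's role is not authorized to see.
SANCTIONED_SOURCES = {"sales_warehouse", "finance_mart", "gov_filings_feed"}
AUTHORIZED_FIELDS = {
    "analyst": {"region", "revenue", "units"},
    "manager": {"region", "revenue", "units", "margin"},
}

def governed_mashup(user_role, source, records):
    """Enforce source sanctioning, then field-level authorization."""
    if source not in SANCTIONED_SOURCES:
        raise PermissionError(f"source {source!r} is not company-sanctioned")
    allowed = AUTHORIZED_FIELDS[user_role]
    return [{k: v for k, v in row.items() if k in allowed} for row in records]

rows = [{"region": "EMEA", "revenue": 1.2e6, "margin": 0.31}]
print(governed_mashup("analyst", "sales_warehouse", rows))
# -> [{'region': 'EMEA', 'revenue': 1200000.0}]  (margin is filtered out)
```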

Now, this sounds like fascism, but it really isn't, because in practice what goes on is that users are usually given a long leash in a mashup environment to be able to pull in external data, when need be, with IT being able to monitor the utilization or the access of that data.

Fundamentally, governance comes down to the fact that all the applications are stored within a metadata environment -- repositories, and so forth -- that is under management by IT. So, that's the final piece in the mashup governance equation.

Gardner: I think I'm hearing you say that you really should have an intermediary between all of that web data and your BI analytics and the people making the decisions, not only for those technical reasons, but also to vet the quality of the data.

It’s in IT’s interest

Kobielus: Exactly. This is in IT's interest, and they know that. IT wants to insource as much of the development and maintenance of reports and dashboards and the like as they can get away with, which means it's pushed down to the end user to do the maintenance themselves on their own views.

IT is more than happy to go toward mashup, if there is the ability for them to keep their eyes and ears open, to set the boundaries of the sandbox, and insource to end users.

Gardner: Stefan, I want to go back to you, if I could. We talked about how to bring this into IT, but we also need to bring the role of the developer into this, because we're not just talking about integration; we're also talking about presentation.

Does what Kapow brings to the table also help developers with the task of exposing web data services within the context of applications, views, different kinds of presentation, dashboards, and whatnot? What's the role of the developer in this?

Andreasen: That's very important. We talked about this fire hose before. When I see a fire hose in front of me, I imagine the analyst opening it and all the data in the world just splashing in their face, and that's really not the case. Web data services allows the developer inside the IT department to much more quickly develop and deliver those custom feeds or custom web services that the analysts need in their BI tools.

Also, on governance, the reality is that the data that has value is data that comes from business partners, from government, or from sources where you have a business relationship, and therefore can govern it. But, for various reasons, you cannot rewrite those applications, and you cannot access those SQL databases in a traditional way. Web data services is a way to access data from trusted sources, but to access it in a much more agile way.

Gardner: Those services are coming across in a standardized format that developers can work with using existing tools.

Andreasen: Yes, that's very important. Web data services deliver the data into your standard data warehouse, into your standard SQL databases. Or, as I said earlier, it can wrap those applications into SOAP services, REST services, RSS feeds, and even .NET and Java APIs, so you get the API or the data access exactly the way you need it -- in your BI tool, in your data-mining environment, and so on.
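
On the delivery side, landing such a feed in a standard SQL database, where any BI tool can query it, is straightforward; here is a hedged sketch with an invented feed URL and schema, using SQLite as a stand-in for the warehouse:

```python
# Sketch: pull a custom REST feed and load it into a standard SQL table.
# The feed URL and the two-column schema are assumptions for illustration;
# sqlite3 stands in for whatever warehouse the BI tool actually queries.
import sqlite3
import requests

payload = requests.get("https://feeds.example.com/partner-balances", timeout=30).json()
rows = payload["accounts"]

db = sqlite3.connect("warehouse.db")
db.execute("CREATE TABLE IF NOT EXISTS partner_balances (account TEXT, balance REAL)")
db.executemany(
    "INSERT INTO partner_balances (account, balance) VALUES (?, ?)",
    [(r["account"], float(r["balance"])) for r in rows],
)
db.commit()
db.close()
```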

Gardner: We've established the need. We've looked at the value of increasing BI's purview. We've looked at the larger trends around SOA and bringing lots of different data types into an architecture that can then be leveraged for BI and analytics. We've looked at the need for extending this to business processes outside the organization, as well as data types inside. We've looked at the role of the developer.

Are there examples, Stefan, of people who are actually doing this, who have been early adopters, who have taken the step of recognizing an intermediary and the tool and platform set to manage web data services in the context of BI? And, if they've done that, what are the paybacks, what are the metrics of success?

Andreasen: One of our early adopters is Audi. They've been using our product for five years. What was important for them was that, traditionally, it could take three to six months for them to get access to some data. But, with the Kapow Web Data Server, they were able to access data and create these custom feeds in a much shorter fashion, days rather than months.

What the business needs

They have been using it successfully for five years. They are growing with it, they're getting a lot of benefit from it, and they couldn't imagine running the IT department without web data services today, because it gives them a way to deliver the agile, custom data feeds that the business needs.

Gardner: Jim Kobielus, looking to the future, it seems to me that there are going to be more types of data coming from external sources. Perhaps more of the internal data that companies have used in traditional applications -- BI and integration -- might find itself being housed in server farms, otherwise known as clouds, either on-premises, on some third-party grid or utility fabric, or some hybrid of the two.

When we factor in the movement and expected direction of cloud computing, how does that bear down on the requirements for managed, governed, IT-caliber, mission-critical web data service tools?

Kobielus: It both simplifies it and complicates it. It simplifies it to some degree, or enables this vision of self-service BI mashup, with automated source discovery, to come to fruition. You need a lot of compute power and a lot of data storage to do things like high-volume, real-time text analytics.

A lot of that is going to have to be outsourced to public clouds that are scalable. They can scale out to petabytes worth of data or to massive server farms to do semantic analysis, transformations, and the like. So, the storage and the processing for most of this vision have to be outsourced to cloud providers. To some degree, that makes it possible to realize this vision on the back end, on the web data services and data mashup side.

It also complicates it, because now you're introducing more silos. Public clouds are essentially silos from each other. There is Amazon, and there is Microsoft with Windows Azure and SQL Data Services. Then, of course, there is Google and a variety of others providing clouds that don't interoperate well, or at all, with each other. They don't necessarily interoperate out of the box with your existing on-premises data environment, if you're an enterprise.

So, the governance of all these disparate functions, the coordination of security, and the encryption and so forth across all these environments, as well as the coordination of the data archiving and auditing need to be worked out by each organization that goes this route with a disparate and motley assortment of internal and external platforms that are managing various functions within this analytic cloud.

In other words, it could complicate this whole equation considerably, unless you have one predominant public cloud partner that can do all the data integration, all the cleansing, all the transforms, all the warehousing in their cloud, and can provide you also with this SOA abstraction layer, the semantic virtualization layer, and can also ideally host your advanced analytics applications, like your data mining, in that environment.

It can do it all for you in a very streamlined way, with a common governance, security administration, and data modeling toolset. Remember, end users are a big part of this equation here. The end users can then pick up these cloud-based tools to mash up data within this unified cloud and mash it up in a way that makes sense to end users, not the professional black belt data modelers.

That vision cannot be realized right now with the commercial cloud offerings in the analytic market. I think it will take about two to five years for the cloud providers to go this route. It's not there yet.

Gardner: We're about out of time. I want to take the same question to Stefan about the cloud computing angle and the mixed sourcing for applications, datasets, and business processes. It seems to me this would be an opportunity for Kapow.

No master hub

Andreasen: Absolutely. What I don't see is one big vendor that solves all your data needs and becomes like the master hub for all information and data on the Web. History has shown that the way that companies compete with each other is to differentiate themselves.

If everybody were using the same provider and the same kind of data, they couldn't differentiate. This is really, I think, what companies realize today -- unless we do something different and better than our competitors, we are not going to win this game.

What's important with web data services is hosting the tools and the facilities to access the data, but allowing the customers to create, in a self-service fashion, the custom data feeds they need. Our product fits perfectly into that world as well. We already have many of our customers using our product in the cloud. We become a tool with which they can create ad hoc, on-demand, or as-needed data feeds and share them with anybody else who needs them.

Kobielus: I've got one more point. In this ecosystem that's emerging, there's a strong role for providers of tooling specifically focused on self-service mashup, and also for what's often called on-demand analytical sandboxing, which end users can use to create their own analytic workspaces and pull in information.

Those vendors can provide tooling that works in front of whatever the organization's preferred data management, data federation, data warehousing, or BI vendor might be. So there's plenty of opportunity for the likes of Kapow, and many others in this space, too, for complementary solutions that are integrated with any of the leading data federation and cloud analytic solutions that are out there.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been discussing the requirements around bringing web data services into BI, but doing so in a mission-critical fashion that's amenable to the IT department.

I want to thank our guests. We've been joined by Jim Kobielus, senior analyst at Forrester Research. Thanks, Jim.

Kobielus: Sure, no problem.

Gardner: We've also been joined by Stefan Andreasen. He's the co-founder and chief technology officer at Kapow Technologies. Thank you so much, Stefan.

Andreasen: Thank you everyone for a great discussion.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions, and you've been listening to a sponsored BriefingsDirect podcast. This is just part of a series of four podcasts on the subjects around web data services and BI.

We look forward to future discussions on text analytics, cloud computing, and the role of BI in the future. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast, one of a series on web data services, with Kapow Technologies, with a focus on information management for business intelligence. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, October 01, 2009

Cloud Computing by Industry: Novel Ways to Collaborate Via Extended Business Processes

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.

As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?

It's as if the past benefits of Moore's Law, of leveraging the ongoing density of circuits to improve performance while also cutting costs, have now evolved to a cloud level, trying, in the context of business problems, to do more for far less.

Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.

We'll learn here about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.

Here to help explain the benefits of cloud computing and vertical business transformation, we welcome Mick Keyes, senior architect in the HP Chief Technology Office. Welcome, Mick.

Mick Keyes: Thank you, very much.

Gardner: We are also joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Hello, Rebecca.

Rebecca Lawson: Hello.

Gardner: And, we're also joined by Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. Welcome to the show, Chris.

Chris Coughlan: Thanks, very much.

Gardner: I'd like to start with Rebecca, if I could. Tell us a little bit about the cloud vision, as it is understood at HP. Where does this fit in, in terms of the business, the platform, and the tension between the technology and the business outcomes?

Overused term

Lawson: Sure, I'm happy to. Everyone knows that "cloud" is a word that tends to get hugely overused. Instead of talking specifically about cloud, at HP we try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.

Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the implicated integrations and such?"

That's where we're coming from, when we look at cloud services, finding new ways to solve problems. Most of those problems have to do with vast amounts of data that are traditionally very hard to access by the kinds of application architectures that we have seen over the last 20 years.

Gardner: So, we're talking about a managed exposure of information, knowledge, and things that people need to take proper actions on. I've also heard HP refer to what they are doing and how this works as an "ecosystem." Could you explain what you mean by that?

Lawson: As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place, or a dynamic, that has technology-enabled services -- cloud services that are accessible and sharable and that help the collaboration and sharing across different constituents in that vertical market.

We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.

A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.

Gardner: So, for these members within the ecosystem, their systems alone cannot accomplish what a third-party, cloud-based platform can in terms of cooperation, collaboration, and coordinated, managed, and even governed business processes.

Lawson: That's right. In fact, I'll throw it over to Mick to talk about how this is really different and really how it serves the greater purpose of the manufacturing community. Mick?

Multiple entities

Keyes: A good example is the manufacturing industry, and indeed the whole linear type supply chain that is in use. If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere up to 15-20 different entities involved in a supply chain.

In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.

In the traditional way of looking at how that supply chain achieves traceability, you would have the infamous -- as I would call it -- "one step up, one step down" exchange of data, which really meant that each entity in the supply chain exchanged information with the next one in line.

That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.

What we are saying to industry at the moment -- and this is the thesis we are actually developing -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What the cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and give greater visibility into the supply chain itself.
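
As a toy illustration of the hub idea, suppose each entity reports its own "one step up, one step down" handoff events; the hub can then stitch them into end-to-end visibility for a given lot. The event shapes here are invented:

```python
# Toy hub: collect per-entity handoff events and order them into a full
# chain-of-custody trace for one lot. Field names are illustrative only.
from collections import defaultdict

events = [
    {"lot": "LOT-42", "entity": "farm", "handed_to": "processor", "day": 1},
    {"lot": "LOT-42", "entity": "processor", "handed_to": "wholesaler", "day": 3},
    {"lot": "LOT-42", "entity": "wholesaler", "handed_to": "retailer", "day": 5},
]

def trace(lot, all_events):
    """Order every hop for a lot so the whole chain is visible at a glance."""
    chain = defaultdict(list)
    for e in all_events:
        chain[e["lot"]].append(e)
    return sorted(chain[lot], key=lambda e: e["day"])

for hop in trace("LOT-42", events):
    print(f'day {hop["day"]}: {hop["entity"]} -> {hop["handed_to"]}')
```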

Food is one example, but you've got lots of other examples in different industries -- the pharmaceutical industry, of course. You've also got the aeronautical industry and the aerospace industry. It's any supply chain that's out there, Dana.

Gardner: Mick, you mentioned this hub and this platform. Is this just a blank canvas that these vertical industries can then come to and apply their needs or is there a helping hand, in addition to the strict technological fabric, that can apply some level of expertise and understanding into these verticals?

Keyes: If you look at the way we're defining the whole ecosystem, as Rebecca referred to around cloud computing, we have the cloud-optimized infrastructure, which HP has got a great pedigree in. Then, we're looking, from a platform point of view, at the next level. From this, we'll launch the different specific services.

In that platform, for example, we've got the components to cover data, analytics, software management, security, industry-specific type information, and developer type offerings as well. So, depending on what type of industry you're in, we're looking at this platform as being almost a repeatable type of offering, and you can start to lay out individual or specific industry services around this.

Gardner: The reason I asked is that there are a number of prominent cloud providers nowadays who do seem to provide mostly a blank canvas. It's very powerful. The cost benefits are there. It gives developers and architects something new to pursue, but there is not much in addition to the solution level there.

A little bit more

Keyes: When you offer or develop specific services and such for industry, you need a little bit more than being able to look at it from a technology point of view. Industry knowledge, we have found, is key, but also, when we talk to the businesses and each element of a supply chain -- and food is a good example, because it's global -- there are different cultural influences involved, such as the whole area of understanding governance and data, where it can and cannot be stored.

Technology is obviously a very important part of it, but how we look at producing services, and who can consume the services, is equally important. Also, we see this type of initiative as stimulating a lot of new innovation. When we use our platform to create certain pockets of data, for want of a better word, we are looking at how we can mashup different types of services.

Some companies will come with a good idea. There are other partners, excellent partners, who are developing very specific and good applications. We will use this hub and our business knowledge, as well, to look at the creation of new types of services and the mashup of different services.

It allows us also to talk to the business people in different parts of the supply chain and different industries to look at very fast, creative ways of offering new services for their industry.

Gardner: Chris Coughlan, tell us a little bit about your competency center, how you started, and perhaps illustrate with an example how this technological knowledge and appreciation of the business issues come together?

Coughlan: As a follow-on from what Mick said, we have infrastructure as a service (IaaS), we have platform as a service (PaaS), and we have software as a service (SaaS). And, what the industry told us was that there was going to be everything as a service. But, really, nobody had started defining what that meant beyond SaaS.

There were a lot of health scares and food scares over the last year or so. We looked at that and said, "This is a very good opportunity to actually develop everything as a service."

We also came to the conclusion, which is very important, that there are two aspects of that. There has to be collaboration along all the various company supply chains, particularly if you want to recall something, or if you want to do track and trace. As well as that, there has to be standardization in what you are doing. So, that led to our relationship with GS1 and the development of the recall system.

Gardner: I spoke in my setup about both lowering cost and enabling new levels of productivity and innovation. Have you found that to be the case? Are you able to do both of those?

Chain of islands

Coughlan: Absolutely. If you think about it, the current recall systems in the food industry -- and Mick talked about them -- target from "farm to fork," so to speak. Look at all the agencies. There are manufacturers, suppliers, retailers, and whatever. A piece of food can be caught anywhere within that supply chain, and each company and each unit in that supply chain is really behaving as an island unto itself.

They might have their own systems, but those systems are not linked. If there's a problem, you have to go from automated systems to manual systems, or whatever. What we've done is link all those systems up. We have agreed on a standard template from GS1. This is the information that all the agents along the supply chain will share with each other, so that food can be recalled very quickly and very effectively.
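
To illustrate what an agreed template buys you -- this record layout is invented, not the actual GS1 specification -- once every agent publishes recall notices in one shape, matching affected product across systems becomes a simple lookup:

```python
# Sketch of a standardized recall notice and the lookup it enables.
# The field names and the GTIN/lot values here are illustrative only.
import json

recall_notice = {
    "notice_id": "RCL-2009-0042",
    "gtin": "00012345678905",          # GS1-style product identifier
    "lots": ["LOT-42", "LOT-43"],
    "reason": "possible Salmonella contamination",
    "issued_by": "processor-co",
    "action": "remove from shelves and stop shipment",
}

def affected(inventory, notice):
    """Flag inventory lines whose product and lot match the recall notice."""
    return [
        line for line in inventory
        if line["gtin"] == notice["gtin"] and line["lot"] in notice["lots"]
    ]

inventory = [{"gtin": "00012345678905", "lot": "LOT-42", "store": "Store 7"}]
print(json.dumps(affected(inventory, recall_notice), indent=2))
```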

If that's done, you can see the benefit from a health-and-safety standpoint. You can see it from a contamination standpoint. You can see it in getting items off shelves and preventing items from being shipped. This can happen quite fast, as opposed to the system we have today.

Gardner: This is a payback that seems to have a very positive impact across that ecosystem, for the consumers, the suppliers, the creators, and then the brands, if they are involved.

Coughlan: Absolutely. First of all, as a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life and death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.

As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.

Keyes: Just to interject there. Those are very good points that Chris is making. We see a big appetite from different people in supply chains to get involved in this type of mechanism, because they look at it from a brand or profit-center point of view. As a company, you'll be able to get greater visibility into your process or into your brand efforts right through to the consumer.

In the older way supply chains worked, as Chris mentioned, it was linear -- one step up, one step down. The people at the lower end of the supply chain, for want of a better word, often weren't able to find out how the products were being used by consumers.

We can offer SaaS now, not just to any individual entity in the supply chain, but to anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain. So we really look at it from a positive view also, about how this is creating benefits from a business point of view.

Gardner: So, a critical business driver, of course, is the public-safety issue. But, in putting into place this template of cloud process, we perhaps gain a business intelligence (BI) value over time with greater visibility across these different variables in the supply chain itself.

Addressing food safety

Keyes: Absolutely. There are quite a lot of activities you see around the world at the moment around greater focus on food safety. In the U.S., for example, HR 2749, a bill that's gone to Congress, is really excellent in how it looks to address the whole area of food safety.

If you look at that, it's leaning towards the concept of greater integration in supply chains. Regulatory bodies, healthcare bodies, and sectors like that will very quickly be able to address any public safety issues that happen.

We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or get involved in information sharing to a certain degree. We see that as a cool component also that we can perhaps do some BI around and be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.

Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.

You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.

Gardner: Rebecca, do you have something you want to offer?

Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. For example, if I am a peanut packaging person, I take peanuts from lots of different growers and I package them up. I send some to the peanut butter companies and some to the candy manufacturing companies or whatever.

I already have data in-house about what I am doing. All I have to do to participate in this traceability or recall example is, once a day, cut a report and stream the data up into the cloud, and I am done.
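
That daily step might amount to no more than a scheduled script like this sketch, with an invented hub endpoint and CSV layout:

```python
# Sketch of the "once a day, cut a report and stream it up" step:
# export what the packager already tracks and post it to the hub.
import csv
import io
import requests

shipments = [
    {"date": "2009-10-01", "lot": "LOT-42", "to": "peanut-butter-co", "qty": 900},
    {"date": "2009-10-01", "lot": "LOT-43", "to": "candy-co", "qty": 450},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "lot", "to", "qty"])
writer.writeheader()
writer.writerows(shipments)

# One scheduled call a day is the whole integration burden.
requests.post(
    "https://recall-hub.example.com/upload/peanut-packager",  # hypothetical
    data=buf.getvalue(),
    headers={"Content-Type": "text/csv"},
    timeout=60,
)
```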

It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.

Viewing the flow

So, for example, maybe there is a peanut outbreak, and I, as the peanut packaging person, can quickly go and kind of see what the flow was across the different participants of growers, retailers, consumers, and all that. The cloud technology allows us to do that, and that's why we designed it this way.

The platform that HP created in this whole ecosystem is geared towards harnessing data and information that's pretty much already there and being able to access it for key questions, which would have been nearly impossible to answer, say five years ago, when the technologies were just not around to do that.

It's a win-win-win for individual companies, which can now reduce their insurance exposure, because they've got their processes covered. They have the data. It's already shared. So, it's a major step forward for manufacturing. We think this kind of a model is not just for manufacturing. This just happens to be one good use case that we can all relate to as consumers, because everybody is afraid of a Salmonella outbreak. It affects all lives. But, it's applicable to other industries as well.

Gardner: Of course, a recent example would be the flu outbreak, as well. So, there are lots of different ways in which a common currency of shared data and information can be very critical and important.

I also want to look at the importance of that common currency, which, in this case, is standardized service calls and application programming interfaces (APIs). What we have come to be familiar with as web services is now enabling this cloud synergy across these ecosystems.

I wonder if anyone would like to take a stab at my premise that, in the past, we have looked for productivity from increased cycles in the silicon and on the hardware and in IT itself. But, is there a new possibility for a higher level of Moore's Law, so to speak, in applying these cloud approaches to productivity? Does anyone share my enthusiasm for that?

Lawson: Absolutely. In fact, I could care less how powerful a server is. What I care about are the problems that I am trying to solve. If I'm in the environmental world, if I'm government, or if I'm a financial services organization, I want to be able to creatively think about how I serve my customers.

These new technologies are allowing HP's customers to solve problems much differently than they did before, using a wider expanse of currency, as you said, which is information. Information is the currency of our era.

Structured vs. unstructured

One of the big shifts going on is that information in the past 5, 10, or 20 years has been largely held in very structured databases. That's a really good thing for certain kinds of data, but there is other data now that's just streaming into the Internet, streaming into the cloud, which is held in a more unstructured fashion.

We can now deal with that data. We can now run search and query across semistructured or unstructured data and get to some interesting results really quickly, as opposed to more traditional ways of holding certain kinds of data in a relational database. We don't think that it's going away. We just see that there is a whole new currency coming in through new ways to access information.

Coughlan: I'm a great believer in applying Moore's Law to a lot of things beyond technology -- to society, to productivity, as you said, and whatever. It's the underlying technology that originally defines Moore's Law, which actually then drives the productivity, the change in society, etc.

But, you've heard of another law, Metcalfe's Law, which talks about the power of the network. We are bringing in the power of collaboration. What you have then are two of these nonlinear laws, which are instituting change, reducing price, doubling capacity, and so on. You've even got a reinforcing effect there, which might push Moore's Law even faster than Moore himself predicted.

Gardner: A part of this has to be, of course, cooperation and trust. What is it about the platform for manufacturing that HP has developed that enables that trust and that places this hub, this third-party, in a position where all the members of the ecosystem feel that they are protected?

Coughlan: This is one of the reasons that we partnered with GS1 in this whole space. You're right, Dana. It would be something that industry wants to know immediately. Why would we trust an IT provider, for example, to be the trusted advisor to integrate all the different elements of the supply chain?

We're pretty much aware of that. GS1, the international standards body, is trusted by industry. This is their great strength. They are neutral. They are in 110 different countries. They have done a lot of work on getting uniform standards for how different systems can integrate, especially in this whole area of supply chain management.

We look to GS1 as the trusted advisor out there, with industry, with governments, around safety, around standards, and on traceability. They're not a solution provider, but they will go to best in class with their ideas.

They have asked the industry for ideas. They have gone to the industry and explained the process, for example, of how recall, as an example, should work and how traceability should work. So, we feel that to partner with somebody like GS1 is key to getting trust in the industry to apply these types of systems.

Gardner: Do you expect to see additional partnerships, and should standards bodies be thinking about moving towards partners in the cloud, so that they can extend their role as a trusted advisor, as a neutral third-party, but be able to execute on that now at a higher abstraction?

Win-win situation

Keyes: Absolutely. This is a win-win for everybody here. There are lots of really good partners out there who have, for example, point solutions that are in industry at the moment. We feel there are a lot of benefits to these partners through using GS1 standards.

Let's say that most of them do use GS1 standards at the moment and are all compliant. They can work with our traceability hubs to try and see whether they can help exchange information. In return, we'll be able to supply information and publish information through their systems back to industry as well.

GS1 is also important in getting the industry together -- not just the actual manufacturers or the retailers, but also the technology people in the industry -- so there will be uniform standards. We all know from developing traditional, tightly coupled systems in manufacturing and the supply chain that you need an easier means of collaboration. GS1 has done an excellent job in the industry of defining what these standards should look like.

Gardner: I know we've been focused on manufacturing, but, not to go too far off the beaten track, there's also this need for greater cooperation between the public and private sectors across regulatory issues. Have we seen anything moving along those lines -- a trusted partnership around a manufacturing platform like the one HP has provided, where some sort of public agency might then reach out to these private ecosystems?

Keyes: Not to dwell on the food area, but often what you find is that governments bring out laws and regulations, and they say industry must apply these laws. Often, you get a bit of a standoff, where industry would immediately say, "Okay. This is government telling us what to do, etc."

In our journey around this food industry effort, a lot of the time we talk directly to the industries themselves. Industry now also sees what the issues are, and they agree with what the governments and the regulatory bodies are trying to do.

Industry is now looking at this type of model to take a preemptive step and to show that they are also active in the whole area of food safety. It's in their interests to do it, but now I think they have a mechanism, which industry, government, and regulatory bodies can actually use.

For example, if you look at the recall project that we've been involved in, we're taking data and accessing data in industry, and in retailers also, but we're looking at a service that we can publish for industry. We call these visibility-type services, where, at a glance, they can look at where all elements of the recall might be and what industries are actually being affected.

We're very keen to share or offer services to different regulatory bodies -- be it government, or directly with consumers and consumer bodies as well -- and we have been pretty active in discussing this with them.

Gardner: Thank you, Mick. Chris, do you have any insights as well in terms of this public-private divide?

Variety of clouds

Coughlan: Mick has said most of it there and Rebecca spoke earlier on about the ecosystem. As things begin to develop, you will be able to see public clouds, private clouds, and hybrid clouds. Then, you'll have a cloud portal accessing those under various circumstances, to solve various problems, or to get various pieces of information.

I see third-party point solutions feeding into those clouds. That's one of the areas that we offer -- third-party solutions -- be it in the food industry or other industries. They feed into our cloud, and that information can be either private information or collaborative information, where they define where they are going to do the collaboration, or it could be public information.

So, some information would stay in the private cloud, some of the information could go into the public cloud, and other information could sit in a hybrid type of cloud.

Gardner: Rebecca, it seems like we could go on for hours about all these wonderful use-case scenarios and potential innovation improvements on process and the crossing of divides. But, the ecosystem is not just in the supply chain.

It also needs, I suppose, to be pulled together in terms of the cloud infrastructure, and the players that need to come together in order to enable these higher level business benefits. It strikes me that there are not that many companies that can be in a position of pulling together the ecosystem on the delivery side of these services.

Lawson: That's true, and what's different about what we are doing is we're taking a top-down approach. Right now, a lot of the industry is talking about cloud, and a lot of folks are focused on things like IaaS, virtual machines as a service, and things like that.

But you can switch it around and say, "How can we apply technology in a new way and build out the platform to support the services that industries need?" Then, for those services, you build out the right kind of infrastructure and scale out an infrastructure base on which all of that can run very smoothly.

Working backward

Now, you have a really good organizing principle to say, "If we're going to solve this problem of traceability, food track and trace, and recall, how are we going to solve that problem?" Everything really drives from there, as opposed to saying, "What's the cheapest platform on which we can run some kind of food traceability?" That's just coming at it backward.

In fact, a good analogy to what we are doing with these vertical ecosystems is the well-known use case of Salesforce.com and the Force.com platform that grew up around it.

Most folks realize that Salesforce.com started with a sales-force automation product. Then, it broadened into a customer relationship management (CRM) product, and then, before you knew it, they had a platform on which they built a community of service and application providers, their AppExchange. That community is enabled by their underlying platform, and it serves a horizontal function for sales- and marketing-oriented or adjacent types of services.

If you pull that analogy out into an industry like manufacturing, transportation, or financial services, it's the same sort of thing. You want that platform of commonality, so different contingents can come and leverage the adjacencies to whatever it is that they are doing.

We really see that this ecosystem approach is the way to think about it, and vertical is the way to think about it, although, obviously, different verticals will blend together. We're working on similar projects in the transportation arena, where manufacturing can cross over quite quickly into public transportation and add lots of new development. So we are pretty excited about all these new opportunities.

Gardner: So, we actually can start thinking about pulling together ecosystems of ecosystems?

Keyes: Absolutely. We look at what we're doing at the moment around food and how that might affect the whole healthcare area as well. There are a lot of new innovations coming out in the biomedical area as well, of how we can expand things like food, pharmaceutical, or drugs to the whole health system. As you said, Dana, we see that as a very important area of collaboration between different ecosystems.

Lawson: One more point is that the ecosystem implies that it's not just about the technology. It's about the people. So, different aspects of the ecosystem are going to be human. They may be machine. They may be bits of code. There are conditions and tons of events. The ecosystem is a more holistic approach, in which you have the infrastructure, development and runtime environments, and technology-enabled services.

Gardner: If I'm a member of an ecosystem -- be it in the manufacturing, vertical, health, food recall, regulatory, or public sector -- and these concepts resonate with me, how do I get started? If I'm in a standards body of some sort, where do I go to say, "What's the partnership potential for me?"

Lawson: The first thing you can do is call HP and take a look at what we have done in our Galway Center of Expertise around traceability -- track and trace -- and we would be happy to show you that. You can take a look under the covers and see how applicable it is to your situation.

Gardner: Very good. We've been taking a look at how the new productivity levels can be exploited vis-à-vis cloud computing -- not just at the technological level, but at the process level of finding partnerships and standards and approaches that pull together ecosystems of business, potentially across business and the public sector.

Helping us to understand better the potential for cloud computing as a business tool, and how HP, and most recently GS1 Canada have pulled together a Food Recall Platform based on the HP Cloud Product Recall Platform, we have been joined by Mick Keyes. He is the senior architect in the HP Office of the Chief Technology Officer. Thank you, Mick.

Keyes: Thank you.

Gardner: We've also been joined by Rebecca Lawson, director of Worldwide Cloud Marketing at HP. Thanks, Rebecca.

Lawson: Thank you very much.

Gardner: And also, Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. Thank you so much, Chris.

Coughlan: Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a sponsored BriefingsDirect podcast examining how cloud computing methods promote innovative sharing and collaboration for industry-specific process efficiencies. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.