Monday, October 25, 2010

FuseSource Gains New Autonomy to Focus on OSS Infrastructure Model, Apache Community Innovation, Cloud Opportunities

Transcript of a sponsored podcast discussion on the status and direction of FuseSource, which is being given its own corporate identity today by Progress Software.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: FuseSource.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the rapid growth, increased relevance, and new market direction for major open source middleware and integration software under the Apache license.

We'll learn how the FUSE family of software is now under the FuseSource name and has gained new autonomy as its own corporate identity. We'll also look at where FuseSource projects are headed in the near future. [NOTE: Larry Alston also recently joined FuseSource as president.]

Part of the IONA Technologies acquisition by Progress Software in 2008, FuseSource has now become its own company -- owned by Progress, but more autonomous -- to aggressively pursue its open source business model and to leverage the strengths of the community development process.

Even as the IT mega vendors consolidate more elements of IT infrastructure and, in some cases, buy up open-source projects and companies, open source for enterprises and service providers alike has never been more popular or successful. Virtualization, cloud computing, mobile computing, and services orientation are all supporting more interest in, and increased mainstream use of, open-source infrastructure.

Please join me in welcoming our guests. We're here now to discuss how FuseSource is evolving to meet the need for open source infrastructure with Debbie Moynihan, Director of Marketing for FuseSource. Welcome to the show, Debbie.

Debbie Moynihan: Hi, Dana. Thank you. It's great to be here.

Gardner: We're also here with Rob Davies, Director of Engineering for FuseSource. Welcome to the show, Rob.

Rob Davies: Hi, Dana. Good to speak to you today.

Gardner: Debbie, tell me about some of the trends. As I said, we're seeing some of the most aggressive use of open source in IT infrastructure. We're seeing great success in terms of total cost, efficiency, and agility. Why is that happening now, and where do you see the demand trends heading over the next several years?

Cost reduction

Moynihan: As we all know, over the past couple of years, there has been a lot of focus on cost reduction, and that resulted in a lot of people looking at open source who maybe wouldn’t have looked at open source in the past.

The other thing that's really happened with open source is that some of the early adopters -- we have had customers for many years -- started out with a single project and have now standardized on FuseSource products across their entire organizations. So there are many more proof points of large global organizations rolling out open source in mission-critical production environments. Those two factors have driven a lot of people to think about open source and start adopting it over the past couple of years.

Then, the whole cloud trend came along. When you think about scaling in the cloud, open source is perfect for that. You don't have to think about the licensing cost as you scale up. So, there are a lot of trends that have been happening, and they have been really helpful. We're very happy about them helping push open source into the mainstream.

From a FuseSource perspective, we've been seeing over 100 percent growth each year in our business, and that’s part of the reason for some of the things we're going to talk about today.

Gardner: How about the popularity of the Apache license? We see controversy, in some cases, a lack of clarity and understanding about where some other licenses are going, but Apache seems to be pretty solid and pretty accepted.

Moynihan: We really like the Apache license. There's a lot of confusion around open source licensing. There are many different licenses. There is a lot of fine print. A lot of people don’t want to think about it, and a lot of legal departments get concerned about the gray areas. The Apache license is very easy to understand and it's very permissive in what you can do with software that’s licensed under the Apache license.

Essentially, you can make any modifications you want to the software, and you don't necessarily have to contribute back to the community. It's nice if you can contribute back, but from a business perspective, if you want to use any of the components, it's what's considered a non-viral license. So, you're pretty free to do what you want, as long as you give credit to those who wrote the initial code.

Gardner: Rob, we've seen a lot of popularity for open source in operating systems -- server operating systems, in particular -- but why has the use of open source for infrastructure, say for integration and middleware, become so popular? Why do you think that’s going to continue with such things as cloud?

Davies: There has been a trend over the last few years, and Debbie alluded to this, of companies looking to open source and kicking the tires. In fact, I recently spoke to a large customer of ours in the telco space. They had a remit that any open source that came in wouldn't go into mission-critical situations until they had kicked the tires for a good while -- at least a couple of years.

Because there has been this push for more open source projects following open standards, people are now more willing to have a go at using open source software.

Snowball effect

We've been around in this space for a while, but the early adopters who were just trying it out in distinct groups are now rolling it out into broader production. Because of that, there is this snowball effect. People see that larger organizations are actually using open source for their infrastructure and their integration. That gives them more confidence to do the same.

In fact, if you look at the numbers of some of our larger customers, they are using Apache ServiceMix and Apache ActiveMQ to support many thousands of business transactions, and this is business-critical stuff. That alone is enough to give people more confidence that open source is the right way to go.
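
[NOTE: For readers who want a feel for what that messaging layer looks like in code, here is a minimal JMS producer sketch against an ActiveMQ broker. The broker URL, queue name, and payload are illustrative assumptions, not details from any customer deployment.]

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderPublisher {
    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are illustrative placeholders.
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            // A transacted session commits or rolls back the send as a
            // unit, which is what business-critical traffic calls for.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("orders.incoming");
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("<order id=\"1234\"/>");
            producer.send(message);
            session.commit(); // the message is now durable on the broker
        } finally {
            connection.close();
        }
    }
}
```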

Gardner: Debbie, tell us a little bit about the FuseSource move toward more autonomy. This clearly is an opportunity, but it’s a different opportunity than a purely commercial license and software model. Tell us what’s going on with Progress Software and FuseSource.

Moynihan: We're really excited as a team. Progress is launching a new company called FuseSource that will be completely focused on the open source business model. The FuseSource team has been an independent business unit since IONA was acquired by Progress Software. We have been fairly independent within the company, but separated out as our own company, we'll be able to be completely independent in terms of how we do our marketing, sales, support, services, and engineering.

When you're part of a large organization, there are certain processes that everyone is supposed to follow. Within Progress, we are doing things slightly differently (or very differently, depending on the area), because the needs of the open source market are different. So, being our own company, we'll have that independence to do everything that makes sense for open-source users, and I'm pretty excited about that.

Gardner: So, here we are in the middle of October, and this is pretty much now a done deal. Tell me about the history of FuseSource and what led up to this movement.

Moynihan: Rob, who is on the call, can maybe talk about the early days. He was actually a founder of a startup company, and that was really the genesis of what is now FuseSource. So Rob, why don't you start out, and I can chime in if needed.

Davies: The notion of having open source infrastructure started with a group of developers who were founders of open source projects. Infrastructure had worked as a commercial, license-based product business before. We -- the other individuals are James Strachan, Hiram Chirino, and Guillaume Nodet -- realized that the best way to deliver infrastructure as open source was to develop it at Apache.

We decided that open source is the best thing to do, because it opens up the software for engineers to look at, use, and enhance. We felt like that was a very good way to grow a community around the projects we wanted to do.

We started a company called LogicBlaze, which was acquired three years ago by IONA. At that time, we decided to sell to IONA because we wanted to piggyback on their expertise in doing large infrastructure rollouts. With IONA, the FUSE brand and the FUSE product line then really came to the forefront.

Get the message out

Debbie Moynihan, who was the director of open source at IONA, was working on another project at the time called Celtix, which morphed into Apache CXF. We decided to collaborate on this effort to get this message out about using really good infrastructure based on Apache open source projects and get that into the marketplace.

Then, when IONA was acquired by Progress, Progress liked the idea -- or liked the fact that it's disruptive. They invested in the group: we added more employees, more salespeople, more people in marketing, and so on. We have been involved in that for the last two years.

But it has gotten to a point where we realized that, to operate in the most effective way, it has to be outside of Progress to a degree, because the go-to-market strategy and what we deliver to customers are so different from the rest of what Progress is doing with its one-product solutions.

Moynihan: Also, from a business perspective, Progress' go-to-market is, as Rob said, offering solutions at the business level, whereas open source has traditionally been looked at by developers and project managers more from a technical perspective and more from an open source advocate perspective.

That’s growing over time, as we have talked about earlier. Open source is becoming more and more mainstream, but our approaches to marketing and sales are different in the FuseSource team and are much more community oriented and grassroots than the way that corporate marketing is done at Progress Software.

Gardner: Let's face it, the business models are quite different. The way in which you develop revenue is more through support and maintenance, and not through upfront license costs and implementations. Maybe you could explain why keeping the business models separate makes more sense.

Moynihan: Absolutely. From a practical perspective, the business model is very different. In traditional enterprise software sales, there is a license fee which is typically a large upfront license cost relative to the entire cost over the lifetime of that software. Then, you have your annual maintenance charges and your services, training, and things like that.

From an open source perspective, there is typically no upfront license cost. Our model is that there is no license cost; it's a subscription support model with a monthly fee, and the way it is accounted for and the way it works for the customer are very different. That's one of the reasons we split out our business. The way that we work with customers and the way they consume the software are very different. It's a month-to-month subscription support charge, but no license charge.

Gardner: It’s interesting to me that Progress with FuseSource recognizes that there is that little bit of apples and oranges going on, and perhaps keeping them separate is in the best interest of the users and the community. But, we're seeing the opposite in other companies, where people are looking to fold open source projects and products into a larger family or stable of commercial products.

Do you think that we are going to see that trail off in the market? I guess the question is: what about these mega vendors and the direction of how an open source model and a commercial model should or shouldn’t overlap or exist together?

Very difficult

Moynihan: There are a lot of opinions out there on whether or not open source can be successful in a hybrid model within a single mega vendor. My view is that it's very difficult, especially because the business model is different. If you're a company out there selling a large portfolio of products, where only a small part is open source, you have a team of people trying to sell, market, and grow business around that portfolio. They're going to focus on the licensed products.

They're going to have a tendency to focus on those products that are going to drive revenue in the short-term, from a business perspective. It has nothing to do with whose model is better.

I'm very happy that Progress has decided to separate out FuseSource. We already had our own sales team, but now we can be completely focused on working with our customers to help them adopt open source, and when it makes sense, they can work with us to get support and to get training.

It's a very consultative partnering model. In the early days of an engagement, we really like to provide everything someone needs to get going at no cost. You can come to FuseSource.com and get a lot of documentation, and you can get a lot of training webinars for free. We have weekly webinars that show you how to get going with our products, and that's nothing you would see with traditional commercially licensed software.

Gardner: Debbie, tell me about what a customer should expect. If you're a user of FuseSource and if you're in the community, how will this move towards autonomy actually impact you? Will you perhaps not even notice too much?

Moynihan: From a customer perspective, this change will have a small but significant impact. We are continuing to do everything that we have been doing, but as I mentioned earlier, we will be able to have even more independence in the way that we do things. So it will all be beneficial to customers.

From an administrative perspective, our email addresses will change to FuseSource.com, and invoices will say FuseSource instead of Progress Software, for example. But in terms of who they're going to be working with -- their account managers, who is developing the software, who is providing the services and the support -- it's going to be the same people they have been working with.

We have also launched a new community site at FuseSource.com, which we're pretty excited about. We were planning to do that and we've been working on that for several months. That just provides some additional usability and ability to find things on the site.

Overall, it will be really good for our customers. We've talked with them, and they're pretty excited about it. We're all excited about it.

Gardner: Let's get back to looking at the overall market for infrastructure, open source infrastructure in particular. Rob, tell me a little bit about what's going on in the market?

We're seeing a lot of interest in clouds, private clouds, and hybrid clouds. We're certainly also seeing a great deal of emphasis on reducing costs, particularly from the service providers, who are going down to minute margins in some cases. They really need to make sure that they're doing this in the most cost-effective manner. Then I have to imagine that if the service providers are able to provide IT-as-a-service at a low cost, the IT enterprises themselves will have to follow suit.

Help me understand the new economics of IT and how open source infrastructure fits into that.

Disruptive in the market

Moynihan: From a market perspective, at a high level, open source is really disruptive in that it's affecting how people buy software. Generally, we've seen a lot of changes over the past 5 to 10 years anyway, where license costs have been coming down with more and more discounting, and people are taking notice.

Historically, software vendors looked at license revenue as the premium part of the business to focus on. More and more they're realizing that a lot of value really does come from the services side. Why? Because that’s where you partner with your customer. That’s where you get to know them. That’s where you help them select the right solutions.

In the open source community, that's how it works. People come to the community and work with the developers directly. It eliminates a lot of the cost involved in large, complex software organizations, where you might have to wait to schedule time with the product manager, who then would have to spend time with the engineers understanding what's happening with the products, so that he could relay it to the account team, which would then meet with the customer.

Open source just breaks down a lot of barriers and eliminates a lot of the costs involved in getting the best software to the users. Why? Because people are talking directly to the developers in the community. The developers are getting the feedback directly.

While we do have some level of product management for open source, a lot of it is based around packaging, delivery, licensing, and these types of things, because our engineers are hearing directly from customers on a moment-by-moment basis. They're seeing the feedback in the community, getting out there, and partnering with our customers. So, from an economic perspective, the model is different.

Just from the overall "how it works" perspective, the buy-in for the customer is very different. It's very attractive in times like these, because upfront you don't have the capital expenditure costs. You can get going. You can go to an open source community site, download the software, and try it out.

We've actually seen people get to proof of concept before they have even spoken with us. We've seen people build our stuff into a product as an application provider, as an OEM, and then come to us. That tells you how easy it is for people to consume and use open source without having to spend a lot trying to select or figure it out before they can even try it. You can try it before you buy it, and when you go to buy, you pay as you go.

That’s also the reason people like cloud. You pay as you go. You scale as you go. And you don’t have that upfront capital expenditure cost. For new projects, it can be really hard to get money right now. All these benefits are why we're seeing so much growth in FuseSource.

Gardner: Are there some salient examples that demonstrate what you've been talking about? I'll throw this out to either one of you. Some of your customers might be good examples of how this can work, both from an economic, technical, and innovation freedom perspective as well.

Moynihan: I'll mention a couple of examples. They are similar, and they reflect something we are seeing more and more. One is Sabre Holdings, which delivers a lot of applications for various airlines and has a lot of partners, travel agencies, and airlines. The other is the Federal Aviation Administration (FAA). Those are two of our customers.

In both cases, they started looking at open source at the project level, but eventually came to standardize on open source for their common integration infrastructure, and to recommend it -- not just within their own organizations, but to their partners as well.

Integration is easy

That's the really nice thing about open source. Integration within your own company is easy. You can have any crazy interface and you'll figure out how to do it. But when you partner, you can't tell your partner how to build their interfaces. You can, however, have a common integration platform and say, "Can you transform your stuff so it can connect to this platform?"

With open source, they don't have to have a license for that. So, it's quite nice. They can get going, try it out, and see how it works without requiring their partners to pay any cost. From an economic perspective, they could try it out, get going, look at some proofs of concept, test it out, and then roll it out as a standardized infrastructure internally for some major projects. Then they can work with partners to roll it out further.
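
[NOTE: The FUSE family includes Apache Camel, and a route like the following sketches the pattern Debbie describes: each partner maps its own format onto the shared platform. The endpoint URIs and the XSLT stylesheet name are hypothetical, not taken from the Sabre or FAA deployments.]

```java
import org.apache.camel.builder.RouteBuilder;

public class PartnerIntegrationRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // A partner posts orders in its own format; we transform them
        // into the organization's canonical schema and hand them to the
        // shared messaging backbone. All names here are made up.
        from("jetty:http://0.0.0.0:8081/partner/orders")
            .to("xslt:partner-to-canonical.xsl")    // map partner XML to the common schema
            .to("activemq:queue:canonical.orders"); // drop it onto the shared platform
    }
}
```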

Gardner: To your point, Rob, we've heard a call for more standards in the market around cloud, such as common operating environments and standards for interoperability. In lieu of having those structured standards develop rapidly, we have the open source fallback position. We can't always know what the commercial underpinnings are for services across an ecosystem of cloud consumers or providers, but having a common open-source infrastructure base might very well serve that purpose. Is that what we're finding technically?

Davies: That’s really on the money, Dana. There is this trend as well. When you look at cloud, there are different issues you have to overcome. There is the issue about deploying into the cloud. How do you do that? If you're using a public cloud, there are different mechanisms for deploying stuff. And there are open source projects already in existence to make that easier to do.

This is something we have found internally as well. We deploy a lot of internal software when we are doing our large-scale testing. We make choices about which particular vendors we're going to use, so we have to abstract the way we are doing things. We did that as an open source project, which we have been using internally.

When you get to the point of deploying, the question is how you actually interface with these things. There is always going to be this continuing trend toward standards for integration. How are you going to integrate? Are you going to use SOAP? Are you going to use RESTful services? Would you like to use messaging, for example, to interface with the integration infrastructure?

You have to have choice. You can’t really dictate to use it this way or the other way. You've got to have a whole menu of different options for connecting. This is what we try to provide in our software.

We always try to be agnostic about the technology, including how you connect to the infrastructure that we provide. We also try to be as open as we can about the different ways of hooking these disparate systems together. That's the only way you can really be successful in providing something like integration as a service in a cloud-like environment. You have to be completely open.
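
[NOTE: As a hedged sketch of that "menu of options," here is how a Camel-based integration layer might accept the same payload over messaging, HTTP, and a file drop, funneling everything into one shared pipeline. Every endpoint URI below is an invented example.]

```java
import org.apache.camel.builder.RouteBuilder;

public class MultiProtocolRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Three doors into the same processing pipeline: clients pick
        // whichever connectivity option suits them.
        from("activemq:queue:inbound").to("direct:process");            // messaging clients
        from("jetty:http://0.0.0.0:8080/inbound").to("direct:process"); // HTTP/REST-style clients
        from("file:/var/spool/inbound").to("direct:process");           // batch or file-drop partners

        // The shared pipeline itself, agnostic to how the message arrived.
        from("direct:process")
            .log("Received: ${body}")
            .to("activemq:queue:validated"); // onward to the rest of the infrastructure
    }
}
```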

Gardner: It sounds as if we've been able to capture the best of both worlds, with FuseSource being based on mature Apache software projects with the model around the FuseSource support, which is several years old and very well demonstrated in the market. But now that you are autonomous, you're also getting the benefits of being a startup, of being innovative, being able to move, being fleet, being able to be agile.

Debbie, is that a fair characterization? By going autonomous with FuseSource, you're getting the best of a mature, established mission-critical enterprise supplier, but also, you're able to move quickly in a rather dramatically changing market.

Best of both worlds

Moynihan: Definitely. We're really excited about it. Being backed by Progress Software gives customers the assurance of a large organization behind us. But having FuseSource as a standalone company, as you said, gives us that independence around decision making and really being like a startup.

Sometimes we get ideas, we want to make them happen, and we can make them happen the same day or the next day. We'll be able to move as quickly as we want. And we'll be able to have our own processes in any functional area where we need them, to best meet the needs of open source users.

Gardner: Rob, from a technical perspective, how do you view this best-of-both-worlds benefit?

Davies: From a technical perspective, it's really good for us. The shackles are off, and there's a sudden reinvigoration as we move forward. We've got a lot of really good ideas that we want to push out and roll out over the coming year, particularly enhancements to the products we already have, but also moves into new areas.

There's big excitement, like you would expect with a startup. It feels like a startup mentality. People are very passionate about what they're doing inside FuseSource.

It's even more so, now that we have become autonomous of Progress. Not that working inside Progress was a bad thing, but we were constrained by some of the rigors and procedures that you have to go through when you are part of a larger organization. Because those shackles have been taken away, we can actually start innovating more in the direction we really want to drive our software to. It's really good.

Gardner: Well, great. How can people learn more about FuseSource? You said earlier, Debbie, that the website has been refreshed. Are there some URLs or directions that you would point people to in order to learn more?

Moynihan: Yes, I would point people to FuseSource.com. They can always contact us directly as well. Rob and I would be happy to speak with anyone who has questions. You can send an email to info@fusesource.com, and we would love to talk with anyone who has any questions or wants to hear more about it. FuseSource.com is the place to get information on the web. We have a Twitter account, twitter.com/fusenews, that you can follow as well.

Gardner: I want to thank you both. We have been discussing how a newly autonomous FuseSource is evolving to meet the need for open source infrastructure in a rapidly changing marketplace -- and, of course, in an environment where cost and risk are very much top of mind.

So, thanks again to Debbie Moynihan, Director of Marketing for FuseSource. Thanks, Debbie.

Moynihan: Thank you, Dana.

Gardner: And also, Rob Davies, Director of Engineering for FuseSource. Appreciate your joining us, Rob.

Davies: No problem. Good to speak to you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: FuseSource.

Transcript of a sponsored podcast discussion on the status and direction of FuseSource, which is being given its own corporate identity by Progress Software. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Tuesday, September 28, 2010

Automated Governance: The Lynchpin for Success or Failure of Cloud Computing

Transcript of a sponsored podcast on cloud computing and the necessity for automated and pervasive governance across a services lifecycle.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Thanks for joining this sponsored podcast discussion on why governance is so important in the budding era of cloud computing. As cloud-delivered services become the coin of the productivity realm, how those services are managed as they are developed, deployed, and used -- across a services lifecycle -- increasingly determines their true value.

Management and governance are the arbiters of success or failure when we look across a services ecosystem and the full lifecycle of those applications. And yet, governance is still too often fractured, poorly extended across the development-and-deployment continuum, and often not able to satisfy the new complexity inherent in cloud models.

One key testbed for defining the role and requirements for cloud governance is in applications development, which due to the popularity of platform as a service (PaaS) is already largely a services ecosystem.

Oftentimes, development teams are scattered globally, contractors come and go, and testing is provided as a service -- all while the chasm between development and deployment shrinks and iterations of deployments hasten.

Here we’ll discuss the needs and potential for solutions around governance in the cloud era using the development and deployment environment as a bellwether for future service environments.

Here to help us explain why visibility across services creation and deployment is essential -- and how governance can be effectively baked into complex ecosystems -- we're joined by Jeff Papows, President and CEO of WebLayers and the author of Glitch: The Hidden Impact of Faulty Software. Welcome back to BriefingsDirect, Jeff.

Jeff Papows: Dana, thanks for having me on again.

Gardner: And, we're also here with John McDonald, CEO of CloudOne Corp. Welcome to the show, John.

John McDonald: Dana, hi. Thanks.

Gardner: Let's start off, as is often the case with these cloud discussions, by defining "cloud" for our purposes and what we're going to focus on today, and I think that has a lot to do with PaaS. So, let's start with you, John McDonald. Tell us what you think of when people mention cloud, particularly in development and deployment strategies and this notion of PaaS.

The role of confusion

McDonald: There is a ton of confusion about this right now, and to be honest with you, for a lot of companies this confusion serves what they are trying to do.

To try to clarify it for everybody, cloud computing is really quite simple to understand. It’s all about getting access to hardware on-demand. This is hardware that I might use for any number of purposes -- for storing data, providing tools, and hosting an application.

There are a lot of companies out there that have done hosting in the past, application hosting or whatever, that are now morphing into cloud-computing companies. Some of them are actually even using cloud-computing technologies to do it; others have just renamed themselves.

Cloud, from a technology perspective, is more about some very sophisticated tools that are used to virtualize the workloads and the data and move them live from one bank of servers to another and from one whole data center to another, without the user really being aware of it. But, fundamentally, cloud computing is about getting access to a data center that’s my data center on-demand.

It’s frequently confused with another concept called software as a service (SaaS). SaaS is about getting access to software on-demand. So as cloud is to hardware, SaaS is to software. Frequently, these concepts are used together, so that when you do that you have an environment that scales up and down dynamically as your needs change up and down.

Sometimes that's labeled platform as a service (PaaS). In other words, I'm providing an entire platform or a workbench of tools on-demand. When the two concepts come together, it's sometimes called infrastructure as a service (IaaS), when what I am providing is more of an infrastructural set of tools.

Fundamentally, the easiest way to remember it is that cloud is to hardware as SaaS is to software. Basically, for CloudOne, we're providing IBM Rational development tools through both cloud computing and SaaS. Right now, we're the only people doing that. So, it's unique and frankly pretty fun.

Gardner: Jeff Papows, why do you think application development has become such a great demonstration of what cloud computing can do? Why is there such a good fit between cloud, as far as John just defined it, and application development?

Papows: John’s explanation was both accurate and important, because there's a habitual capacity in our industry, as both of you have recognized, for people to get confused or hung up on vocabulary and on the most recent flavor of acronym headaches.

If you think about a lot of what John said, and a lot about what's going on in cloud computing, it's not a particularly new thing. It's what we used to think of as hosting or outsourcing. Then, you saw vertical instantiations of it around particular competencies like payroll. Companies like ADP were basically clouds with distinctive vertical expertise in processing payroll and doing tax reporting.

Mobile world

What's happening now is that the world is becoming more mobile, and 20 percent of our IT capacity is focused on new application development every year, as opposed to maintaining what we have.

We have to get more creative and more distributed about the talent that contributes to those critical application development projects. That's why you begin to see, as John started to describe, a razor-and-razor-blades taxonomy, where it's one thing to virtualize the hardware environment and some of the baseline topology and infrastructure, but then you begin to add layers of functionality.

Rational Team Concert (RTC) is one good case in point, as John pointed out, but design-time governance is the next logical thing in that continuum, so that all of the inherent risk mitigation associated with governance in an IT context can be applied to application development in a hybrid model that's both geographically and organizationally distributed.

Gardner: John McDonald, you mentioned the fact that cloud fits in where workloads are unpredictable. With application development that’s certainly the case. It’s not just the constant hum of production, but really more fits and starts. Tell me, from your perspective, why cloud works so well to support application development across its continuum right up and into deployment?

McDonald: Yeah, that is the case. There's a myth that development is something that we ought to be tooling up for, like providing power to a building or water service. In reality, that’s not how it works at all.

There are people who come and go with different roles throughout the development process. The front-end business analysts play a big role in gathering requirements. Then, quite often, architects take over and design the application software or whatever we are building from those requirements. Then, the people doing the coding, developers, take over. That rolls into testing and that rolls into deployment. And, as this lifecycle moves through, these roles wax and wane.

But the traditional model of getting development tools doesn't really work that way at all. You usually buy all of the tools that you will ever need up front, usually with a large purchase, put them on servers, and let them sit there until the people who are going to use them log in and use them. But while they are sitting there, taking up space and capital expense budget and not being used, that's waste.

This cloud model allows you to spin up and spin down the appropriate amount of software and hardware to support the realities of the software development lifecycle. The money that you save by doing that is the reason you can open any trade magazine and the first seven pages are all going to be about cloud.

It's allowing customers of CloudOne and IBM Rational to use that money in new, creative, interesting ways to provide tools they couldn't afford before, to start pilots of different, more sophisticated technologies that they wouldn't have been able to gather the resources to do before. So, it's not only a cost-savings statement, it's also ease of use, ease of start-up, and an ability to get more for your dollar from the development process. That's a pretty cool thing all the way around.

Gardner: So the good news is that agility -- that flexibility and adaptability toward a workflow of some sort across a development process. The bad news is that these things can spin out of control when there is not a common thread or fabric around them -- especially if you're sourcing across hybrid cloud models, multiple clouds, or multiple sources for the platform, tools, or testing.

Back to you, Jeff Papows. What do we do in terms of defining the problem set? What's the problem that governance is going to help solve?

Economic perturbation

Papows: John describes some of the economic realities, as well as the pragmatic realities of agile development, which I agree is not linear. It's a set of perturbations that, as John said, wax and wane depending on where you are in a particular development cycle and in which organizations the skills are being amassed. That's as it should be; it's nature's law, and in any event you're not going to change it.

When you try to add some linear structure and predictability to those hybrid models, as you both have been discussing, the constant that can provide some order and some efficiency is not purely technology-based. It's not just the virtualization, the added machine capacity, or even the middleware to include companies like WebLayers or tools like Rational. It's the process that goes along with it. One of the really important things about design-time governance is the review process.

In a highly distributed, hybrid, agile application-development model -- where you may have business analysts in Akron, Ohio, architects in Connecticut, coders in Singapore, and outsourced QA in India -- the one constant taxonomy is the ability to submit and review and deal with some logical order and structure to the workflow that makes that collaborative continuum more predictable and more logical, irrespective of all of the moving parts, both digital and human, and the fabric that we're talking about here.

Governance is a big part of the technology toolset that institutionalizes that review process and adds that order to what otherwise can quickly become a bit chaotic, depending on where you are in the perturbations that John describes.

McDonald: This is a really good point that you're making, Jeff. The challenge of tools in the old days was that they were largely created during a time when all the people on the development project were sitting on the same floor with each other, in a bunch of cubes and offices.

As the challenges of development have caused companies to look at outsourcing and off-shoring -- or, even more simply, the merger of my bank and your bank leaves us with groups of developers in two different cities, or we bought a packaged application and the best skill to help us integrate it is actually at a third-party partner in a completely different city or country -- those tools have shown their weaknesses, even in just getting your hands on them.

How do I punch a hole through the firewall to give you a way to check in your code? The cloud allows us to create a dedicated new data center that sits on the Internet and is accessible to all, wherever they are, in whatever time zone they are working, and whatever relationship they have to my company.

That frees things up to be collaborative across company boundaries. But with that freedom comes a great challenge in unifying a process across all of those different people, and getting a collaborative engine to work across all those people.

Papows: That's a great point, John. I was with the CIO of a major New York bank about two weeks ago. Like so many CIOs in the financial services sector post-2008, they are in the midst of cramming together two very large, complex, inherently different back-office systems. Then, on a magical date, somehow these are supposed to intersect without the digital version of Pearl Harbor. That's not a reasonable request, but these are not reasonable times.

Complexity curve

Without the ability to create these ad hoc environments, not just organizationally or geographically, but perhaps separate production from testing and development -- and without the ability to automate a good part of the tooling associated with reviewing these massive, mountains of legacy code bases before you magically intersect these things and put them together -- there's not a prayer that carbon-based, biped life forms are going to pull that off without a far more automated approach to that kind of a problem. It’s reached a point in the complexity curve, where you just can’t throw enough bodies at it.

McDonald: That's right. It's almost a requirement, to keep the wheels on the bus, to have some ability to manage the process, comply with regulations, and capture the information about how decisions were made in such distributed ways that they are traceable and reviewable. It's really not possible to achieve such a distributed development environment without that governance guidance.

Gardner: One of the interesting things that I have noticed in talking about cloud for the past several years is the realization, fairly early on, that the owner of the application or service -- and not the provider -- is the one who is inherently and ultimately responsible for the governance. The owner is also responsible to the end users in terms of their performance expectations. So, given that reality, who is responsible for governance, and where should it begin and end? Where does it intersect with ownership within these ecosystems?

Papows: When I say "governance," I'm not talking about it in the Sarbanes-Oxley corporate governance context. I am talking about it specifically as it relates to IT. That is a function of the C-level executives, meaning it's a partnership between the CIO and the CEO. This is not something that happens at the level of the architect, the programmer, or the digital professional in the trenches. There is an aspect of this, Dana, where we have to wake up and get environmentally much more honest with one another.

We're dealing with some challenges for the first time that require out-of-the-box thinking. I talk about this in "Glitch." We have reached a point where there are a trillion connected devices on the Internet, as of February of this year. There are a billion embedded transistors for every human being on the planet.

For the first time, we're seeing a drought in available computer scientists graduating from colleges and universities. The other side of the dot-com implosion was that the vocation became somewhat less attractive to people.

Moreover, 70 percent of the transaction-processing systems that we're dependent upon in the world economy today run on things like mainframes, and they are written in languages like COBOL. Although there are some very valiant efforts being made by IBM in about 600 universities, we're going to see more of that human capital retire, reach the end of their time with us, and die off in terms of the workforce. Yet all of that inherent complexity is, at the same time, being exacerbated by all of these mergers and acquisitions.

Put all of those things together and, if it weren’t for companies like CloudOne that are creating these ecosystems and distributed environments that allow people to deal with the 20 percent of that new application development in unique and new ways vis-à-vis the cloud, for the first time in the history of our industry, as computer scientists, we're on the verge of tremendous challenges. That’s why I say it’s a partnership between C-level executives, because these are not traditional times.

Gardner: John McDonald, where do you see the notion of baking in governance taking place? Clearly, the incentive, the direction, and the vision need to come from on high. But how do you embed governance into a development workflow, for example?

Everything has to disappear

McDonald: My view is that it absolutely has to be so deeply embedded that everything you are doing disappears. Here's what I mean by that. Developers view themselves quite often as artists. They may not articulate it that way, but they often see themselves as artists, and their palette is code.

As such, they immediately rankle at any notion that, as artists, they should be governed. Yet, as we've already established, that guidance for them around processes, methods, regulations, and so on is absolutely critical for success in any size organization, and especially in a distributed development environment. So, how do you deal with that issue?

Well, you embed it into their entire environment from the very first stage. In most companies, this is deciding what projects to undertake, which in a lot of companies is mainly an over-glorified email argument.

It goes right on through to the requirements gathering around the projects we have decided to undertake, to the project plans put around those projects, to the architecture, the design, the coding, the testing, the build, and the deployment.

It all has to be embedded at every step of the way, gently nudging, and sometimes shuttling all these players back into line, when it comes to ensuring that the result of their effort is compliant with whatever it needs to be compliant with.

In short, Dana, you've got to make it a part of, and embed it into, every stage of the development process, so that it largely disappears and becomes such a natural extension of the tools that no one along the way realizes they are being governed.

Papows: John is exactly right, Dana. It's got to be automated. You're not going to do something as ubiquitous as John is describing as a manually intensive, non-electronic process. It will fundamentally break down.

Everybody intellectually buys into governance, but nobody individually wants to be governed. Unless you automate it, unless you provide the right stack of tools and codify the best practices and libraries that can be reusable, it simply won’t happen. People are people, and without the automation to make it natural, unnatural things get applied some percentage of the time, and governance can’t work that way.
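
[NOTE: To make the idea of codified, reusable policies concrete, here is a small hypothetical sketch of a design-time rule applied automatically at check-in. This is not WebLayers' actual API; every name below is invented for illustration.]

```java
import java.util.ArrayList;
import java.util.List;

/** A hypothetical design-time governance rule; not a real WebLayers API. */
interface PolicyRule {
    /** Returns a violation message, or null if the artifact complies. */
    String check(String artifactName, String artifactSource);
}

public class GovernanceGate {
    private final List<PolicyRule> rules = new ArrayList<PolicyRule>();

    public GovernanceGate() {
        // A codified best practice: every service contract must declare a version.
        rules.add(new PolicyRule() {
            public String check(String name, String source) {
                if (name.endsWith(".wsdl") && !source.contains("version")) {
                    return name + ": service contract is missing a version attribute";
                }
                return null;
            }
        });
    }

    /** Runs automatically at check-in, so no one has to remember to review. */
    public List<String> review(String artifactName, String artifactSource) {
        List<String> violations = new ArrayList<String>();
        for (PolicyRule rule : rules) {
            String v = rule.check(artifactName, artifactSource);
            if (v != null) {
                violations.add(v);
            }
        }
        return violations;
    }
}
```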

Gardner: Let’s look at an example vis-à-vis CloudOne. John, tell me a little bit about how you do this. Now that we’ve made a determination to this is the right approach, I’m assuming you use WebLayers to do this. Tell me a little bit about CloudOne as an example of how this can work.

McDonald: When we first began this company, all those many months ago, we knew that this was going to be incredibly important.

WebLayers was the very first partner that we reached out to, to say, "Can you go down this journey with us, as we begin developing these workbenches, these integrated toolsets, and delivering them through the cloud on-demand?" We already knew that embedding governance in every layer was something we had to be able to do out of the gate.

The team at WebLayers was phenomenal in responding to that request. We were able to take several base instances of various Rational tools, embed WebLayers technology into them, and, based on how the cloud works, archive them and put them up in our library, where they can be pulled down off the shelf, cloned, and instantiated for the various customers coming into our pipeline who want to experience this technology and what we are doing.

So, right from the start, Dana, we put that into what we are doing, so that when customers experience CloudOne’s technology either in pilot or in production they never know that it’s not theirs. CloudOne Team Concert is a better Rational Team Concert, because it has WebLayers embedded into it, than simply buying Team Concert and doing it on your own.

Embedded automation

At this point, we have approaching a hundred customers who have, in one shape or form, used or touched some WebLayers technology in the course of a pilot. We see a very healthy group of customers, as we go into the fall of this year, who we believe are going to become customers of that technology simply because they have been able to experience that embedded, automated, almost disappearing-into-the-background kind of governance guidance that Jeff has been talking about. So, it's been a great journey so far, and I can only see it getting better.

Gardner: I know it’s hard to quantify results when you are preventing something bad from happening, but are there any metrics of success? Can you point to the embedded governance and say that got us to "blank" or paid off in some manner or another?

Papows: Unfortunately, the best examples tend to come from the places where governance is absent. You've read about, heard about, or experienced firsthand the disasters that can happen in production environments, where you have some market-facing application, where service is lost, and where there is even brand damage or economic consequences.

We've seen ad hoc development. One example, from a year ago, is a major European investment banking firm where the CEO did what CEOs frequently do and demanded that every manager complete an online workflow for the annual review process.

This particular CEO went further and said that any manager who hadn't completed this for all of his or her constituents by year-end wouldn't be eligible for the year-end bonus. Somebody very quickly cobbled together some HR workflow, unbeknownst to anyone.

There was a sense of urgency. The workflow relied on a single database thread that was part of a production system and was not reinforced. When everybody piled on at once to meet the demands this particular executive was articulating, it brought down the trading floor.

In the four hours that that system was lost -- as Murphy's law would frequently have it -- there was about an 11 percent market accretion. The cost to that particular institution, in the difference in trading value for the hours they were out of business, was about $24 million.

There are instances like that, which become almost water-cooler legend, where you can quantify fantastic ROI in reverse. There is a new concept -- and John is probably starting to get exposed to this -- called "technical debt" -- I think one of your blogs touched on this earlier.

We're beginning to quantify the opportunity cost and the human cost, in terms of man-days of IT time, of things that are not governed or not adhered to, so that as you catalogue the number of programs, files, WSDLs, objects, and other artifacts that don't meet the acid test, you get a sense of the number of days, and as a consequence the dollars, of technical debt that you are amassing.
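
[NOTE: A back-of-the-envelope version of that technical-debt arithmetic, with every figure invented purely for illustration:]

```java
public class TechnicalDebtEstimate {
    public static void main(String[] args) {
        // All figures below are invented placeholders.
        int nonCompliantArtifacts = 450;   // programs, files, WSDLs, objects failing policy checks
        double daysToRemediateEach = 0.5;  // average engineer-days to fix one artifact
        double costPerEngineerDay = 800.0; // fully loaded day rate, in dollars

        double debtDays = nonCompliantArtifacts * daysToRemediateEach;
        double debtDollars = debtDays * costPerEngineerDay;

        System.out.printf("Technical debt: %.0f engineer-days (~$%,.0f)%n",
                debtDays, debtDollars);
    }
}
```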

There was a great article -- I can’t remember who published it -- that said that it’s seven times more expensive to fix an application service after it’s deployed than it is in design. God knows whether that’s got any decimal-point accuracy, but it’s certainly directionally correct. We are going to provide some dashboard reporting in some objects in our management dashboard series.

As we look toward the end of the year, that will give you some widgets and dials, and we'll begin to quantify the cost of the things we find that don't adhere to the libraries that people like John are building into their infrastructure. While a lot of things in information technology over the last couple of decades have been largely subjective, we're going to get to the point where we start to quantify these things fairly precisely.

Gardner: What about you, John McDonald? Do you have any sense of the paybacks, the metrics of success when governance is done properly in your neck of the woods?

Signboards of success

McDonald: I have to agree with Jeff. The biggest signboards of success here are when things go badly. The avoidance of things going badly is unfortunately very difficult to measure. That is something that everyone who attempts to do a cloud-delivered development environment and does the right thing by embedding in it the right governance guidance should know coming out of the gate. The best thing that’s going to happen is you are not going to have a catastrophe.

That said, one of the neat things about having a common workbench, with the kinds of reporting and metrics it can measure -- meaning IBM Jazz, along with the WebLayers technology -- is that I can get a very detailed view of what's going on in my software factory at every turn of the crank, and where things are coming off the rails a little bit.

I equate this in some ways to a car production factory, where there are many moving parts, lots of robot arms, and people lifting plate glass into place and screwing in bolts and that sort of thing. Everything may look great in my factory, but at the end of the factory, I consistently see the door handle is off by three inches. I can’t release those cars to my dealership network with bad door handles, so I know that I've got a problem, but I can very quickly see where the problem is. That’s how most companies right now deal with governance issues. They wait until the very end of it, as it’s about ready to be shipped to the dealer, and then they notice the door handle is off.

It may be great to go back and know where to fix the door handle, but wouldn't it be nice to know, before that car went down the rest of the line, that we had a problem with the machine in the door-handle section? That's what these kinds of metrics, measurement, and responsiveness allow you to do -- fix the door handle before it gets any farther down the line, so you never get to the catastrophe where the engine falls out of the bottom.

You don’t even get to the small issues where the door handle is off. You nip them in the bud. Doing that live every day with the visibility into the reports and the metrics around governance is really the magic here, so that you never have that issue of a catastrophe, where you have to hold up and say, "Well, we’ll do better next time."

Gardner: Let's take a look to the future. Clearly, you'll find few people to argue with the fact that software is becoming more important to more companies. And, the cloud is becoming a new way -- at least new in terms of how people conceive of it -- of acquiring software and delivering services.

So this is all going to get worse. We're going to have more companies that see a strategic imperative around software and development, and more opportunity for ecosystems and services. Then, of course, we've got the explosion of data and mobile devices. Let me go first to you, Jeff.

I suppose I already know the answer, but is this important to do now? It's just going to get worse, but doesn't this also cut across and beyond where we go with development and into so many other areas of business? It seems, as I said in the setup, that you guys are the bellwether of how this can become more prevalent across more aspects of business in general.

Papows: You're right. Here is the reality, and it's interesting sometimes. There's an age-old expression that you're so close to the forest you can't see the trees. Well, I think in the IT business we're sometimes so deeply embedded in the bark that we can't see anything.

We've been developing, expanding, deploying, and reinventing on a massive scale so rapidly for the last 30 years that we've reached a breaking point. As I said earlier, between the complexity curves, the lack of elasticity in human capital, and the explosion in the number of mobile computing devices and their propensity for accessing all of this backend infrastructure and these applications, something fundamentally has to change. It's a problem on a scale that can't be overcome by simply throwing more bodies at it.

Creative solutions

Secondly, in the current economy, very few CIOs have elastic budgets. We have to do as an industry what we've done from the very beginning, which is to automate, innovate, and find creative solutions to combat the convergence of all of those digital elements into what would otherwise be a perfect storm.

That, in fact, is where companies like CloudOne are able to expand and improve the productivity equation for companies in certain segments of the market. That's where automation, whether it's Rational, WebLayers, or another piece of technology, has to be part of the recipe for getting off this limb before we saw it off behind us.

The IT business has become such a critical part of our economy. Put the word "glitch" in your Google Alerts and see how many times a day you find out about customers locked out of ATM networks, manufacturing flaws, technology disasters in the Gulf, nuclear power plants in Houston, or people being killed by over-radiation because of software bugs in medical equipment. It's reaching epidemic proportions, and the proof-point is that you now see it in the daily broadcast news cycles.

So SaaS, cloud computing, automated governance, forms of artificial intelligence, Rational tooling, consistent workbench methodologies -- all of these are the instruments of getting ourselves out of the corner that we have otherwise painted ourselves into.

I don't want to seem like an alarmist or try to paint too big a storm cloud on the horizon, but this is simply not something that's going to happen or be resolved in a business-as-usual fashion.

Gardner: Okay, so the stakes are high, and they're getting higher. Back to you for the final word, John McDonald. What do you recommend for people who need to get started, or who are thinking of making governance part and parcel of their activities?

McDonald: That's one of the coolest things of all about this whole model, in my mind. There is simply no barrier for anyone to give this a try. In the old model, if you wanted to give the technology a try, you had better start with your calculator. And you had better get the names and addresses of your board of directors, because in many cases you were eventually going to them for the capital approval just to get a pilot project started with some of these very sophisticated tools.

This is just not the case anymore. With the CloudOne environment, you can sign on this afternoon with a web-based form and get an instance of, let's say, Team Concert set up for you with WebLayers technology embedded in it, in about 20 minutes from when you push "submit," and it's absolutely free for the first model. From there, you grow only as you need to, user by user. It's really quite simple to give this concept a try.

If you have any inclination at all to see what Jeff and I are talking about, give it a whirl, because it's very simple.

Gardner: Okay, we'll have to leave it there. We've been discussing the need for, and the potential of, governance solutions in the cloud era, using managed development and deployment environments as a bellwether for future cloud, services, and IT use.

I want to thank our guests. We've been talking with Jeff Papows, President and CEO of WebLayers, as well as the author of Glitch: The Hidden Impact of Faulty Software. Thanks so much, Jeff.

Papows: Thank you, Dana, and thank you, John.

Gardner: Yes, we've also been joined by John McDonald, the CEO of CloudOne Corp.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.

Transcript of a sponsored podcast on cloud computing and the necessity for automated and pervasive governance across a services lifecycle. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Wednesday, September 22, 2010

Data Center Transformation Includes More Than New Systems, There's Also Secure Data Removal, Recycling, Server Disposal

Transcript of a sponsored podcast discussion on how proper and secure handling of legacy equipment and data is an essential part of data center modernization planning and cost reduction.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion on an often-overlooked aspect of data center transformation (DCT), and that is what to do with the older assets inside of data centers as newer systems come online.

DCT is a journey, and modernizing is an essential part of that process. But, at the same time, sunsetting older systems must come with data protection in mind, and even with an eye toward monetizing those older systems, or at least recycling them properly.

Properly disposing of data and other IT assets is an often overlooked and under-appreciated element of the total data center transformation journey, but it's one that can cause great disruption and increased costs if not managed well.

Compliance and recycling issues, as well as data security concerns and proper software disposition, should therefore be top of mind. We'll take a look at how HP manages productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of new and older systems during a DCT process.

Indeed, many IT organizations are largely unaware of the security and privacy risks in the systems they need to find a new home for, and those systems can often end up delivered into the wrong hands. So the retirement of older assets should be thought through early in the DCT process.

With us now to explain how best to take care of older systems, reducing risk while also providing a financial return, are Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services. Welcome to the show.

Helen, let me start with you. As I mentioned, we've got a whole lifecycle to think about with DCT, but what’s driving the market right now? Where are the enterprises involved with DCT going, and how can we start thinking about the total picture in terms of how to do this well?

A total solution

Helen Tang: That's a great question, Dana. HP started marketing DCT as a total solution that spans hardware, software, and services throughout the entire life cycle back in 2008, when we launched it. Since then, we've had about 1,000 customers take this journey, with very successful results.

I would say 2010 is a very interesting year for DCT. Number one, because of the economic cycle: we are -- fingers crossed -- slowly coming out of this recession, and we're definitely seeing that IT organizations have a little bit more money to spend.

This time around, they don't want to repeat past mistakes, in terms of buying just piles of disconnected stuff. Instead, they want a bigger strategy, one that modernizes their assets and ties them into a strategic growth-enablement asset for the entire business.

So, they've turned to vendors like HP and said, "What do we do differently this time and how do we go about it in a much more strategic fashion?" To that end, we brought together the entire HP portfolio, combining hardware, software, and services, to deliver something that helps customers in the most successful fashion.

When you look at the entire life cycle, some customers are starting with consolidation. Some already did a lot of virtualization and want to move to more holistic automation. So, throughout that entire process, as you mentioned earlier, there's a lot to think about when you look at hardware and software assets that are probably aged and won't really meet today's demands for supporting modern applications.

How do you dispose of those assets? Most people don't really think about it, nor do they understand all of the risks involved. That's why we brought Jim here to talk more about this.

Gardner: Helen, as I understand it, it's often a 10- or 20-year cycle when you completely redo or transform a data center. So there probably aren't people around who remember the last time their organization disposed of a data center. This is a fairly new activity, something you might need to look to outside help for.

Tang: Absolutely. Of course, there are different pieces to the DCT picture. When you say 10 or 20 years, that's generally the lifecycle of the facilities. Within that, hardware usually lasts between 5 and 8 years. But the problem is that today there are new things coming about that everybody is really excited about, such as virtualization and private cloud. Even experienced IT professionals, who have been in the business for maybe 10 or 20 years, don't quite have the skills and understanding to grasp all this.

At HP, we have a unique vantage point as the number-one technology company. We're about 300,000 strong in terms of headcount, and about half of those are consultants with expertise across all these data center technologies. As I mentioned earlier, we've helped over 1,000 customers, so we have a lot of practical, hands-on experience that can be leveraged to help our customers.

Gardner: Jim O'Grady, it sounds as if the risk here is something that a lot of these organizations don’t appreciate. What are the levels of risk that are typical, if you wait until the last minute and don’t think through the proper disposal of your existing or older assets?

Brand at stake

Jim O'Grady: We're not trying to overstate the risk. But it may be that you are simply putting your company's brand at stake through improper environmental-recycling compliance, or exposing your clients', customers', or patients' data to a security breach. This is definitely one of those areas where you don't want to find out what went wrong by reading about it in a newspaper.

We see that a lot of companies try to manage this themselves, and they don't have the internal expertise to do it. Often, it's done in a very disconnected way across the company. Because it's disconnected and done in many different ways, it leads to more risk than people think. If you know how to do it correctly in one part of your enterprise but are doing it differently in another part, you have a discrepancy that's hard to explain, because you're not learning from yourself and you're not using best practices within the organization.

Also, a lot of our clients choose to outsource this work to a partner. They need to keep in mind that they are sharing risk with whomever they partner. So, they have to be very cautious and extremely picky about whom they select as a partner.

You have to feel very comfortable that your partner's brand is as respected as your own, especially if you are defending what happened to your board of directors or, worse yet, in a legal proceeding. If you don't kick the tires and find out that the partner consists of a man, a dog, and a pickup truck, you may have a hard time defending why you selected that partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. Use the fewest number of vendors that can handle your business scale and geographic-coverage requirements. This is an industry where low barriers to entry just don't match up to the high level of customer accountability required to properly manage your end-of-use asset disposition.

Gardner: So while we can have very significant risk, on the other hand we also know that there are proper, well-established, well-understood ways of doing this correctly. Even though the risk might not be understood, doing this right is possible.

Tell me some of the basic steps for doing this properly, so that you don't run into these risks -- where you've thought it through, or found people who have established best practices and have them well under control.

O'Grady: One of the best practices we recommend to our customers is to have a well-established plan and budget up-front, sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.

We also suggest that they have a well-thought-out plan for destroying or clearing data prior to asset decommissioning, and/or prior to the asset leaving the physical premises of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on-site, as well as off-site.

Vendor qualification

Also, develop a very strong vendor audit, qualification, and ongoing inspection process. Visit the vendor prior to selection, and know where your waste stream is going to end up. Whatever the vendor does with the waste stream, it's your waste stream. You are part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what the vendor does with it.

So, you need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Gardner: What about environmental issues? These can vary pretty widely. Maybe you don’t know about these risks, but as you say, you're going to be responsible for them. Tell me about the environmental and even recycling aspects of this equation.

O'Grady: You're right. That's one of the most common areas where our clients are caught unaware -- of the complexity of data security, of the e-waste legislation requirements that are out there, and especially of the pace at which they change.

Legislation resides at the state, local, national, and regional levels, and it all differs. Some of it conflicts, and some of it is in line with the rest. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to hold to the highest standard and pick someone who knows, and has experience in, meeting these legislative requirements.

Gardner: Now, part of the process that you're addressing is not just what to do with these assets, but how to make the transition seamless -- that is to say, that your service level agreements (SLAs) are met and that the internal users or external customers of your organization don't know that this transition is going on.

So, in addition to what we've talked about in terms of some of these assets and environmental and security issues, how can you also guarantee that all the lights stay on and the trains keep running on time?

O'Grady: HP Financial Services (HPFS) has a lot of asset management capabilities to bring to bear to help customers through their DCT. A lot of it is finance-based capability and services, and a lot of it is non-finance based.

Let me explain. From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand what their asset management strategy is, and help them to personalize the financial asset ownership model that makes sense for them.

For example, perhaps a client has a need to monetize some of their existing IT asset base. Let's just say there are existing data center assets somewhere. We have the asset-management expertise to structure a buyout for those targeted data-center assets and lease those same assets back to the client with a range of flexible terms, potentially unlocking some hidden capital for them to exploit elsewhere, perhaps funds for additional data center capacity.

Managing assets

That's just one of many examples of how we help our clients manage their assets in a more financially leveraged way. In addition, customers often have a requirement to support legacy gear during the DCT journey. HPFS can help customers with pre-owned legacy HP product.

We're able to provide highly customized, authentic pre-owned legacy HP product solutions, sometimes going back 20 years or more. We're seeing a big uptick in the need to support legacy product, especially in DCT. The need for temporary equipment to scale out legacy-locked data center hardware capacity is one we increasingly see from our clients.

Clients also need to ensure that their product is legally licensed and that they don't encounter intellectual property rights infringements. Lastly, they want to trust that the vendor has the right technical skills to deal with legacy configuration and compatibility issues.

Our short-term rental program covers new and legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test a software application for compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

Finally, customers should consider how they retire and recover value for their entire end-of-use IT equipment, whether it's a PDA or supercomputer, HP or non-HP product.

In summary, most data center transformations and consolidations typically end with a lot of excess or end-of-use product. We can help educate customers on the hidden risks of dispositioning that end-of-use equipment into the secondary market. This is a strength of HPFS.

Gardner: Jim, you mentioned global a few times. Help me understand a little more about the global benefits HP brings to the table in terms of managing multiple markets, multiple regulatory scenarios, and some of these other secondary-market issues.

O'Grady: From what I see in the market, there is a tremendous amount of global complexity that customers are trying to overcome, especially when they attempt data center consolidation and transformation throughout their enterprise, across different geographies and country borders.

You're talking about a variety of emerging regulatory practices and directives, especially in the EU, that restrict how you move used and non-working product across borders. And there are a variety of data-security practices and environmental waste laws that you need to be aware of.

Gardner: Let’s look at some examples. It’s one thing to understand this at a high level, but seeing it in practice is a lot more edifying and educational. So, I guess HP is a good place to start.

What did you learn when you had to take many different data centers from a lot of merger and acquisition activity in many different markets? Tell me a little about the story of HP’s data centers, when it comes to properly sun-setting and managing all of those assets.

Creating three data centers

O'Grady: First, let me explain what we were up against in terms of the complexity of consolidating HP's data centers. There were about 85 worldwide data centers located in 29 different countries, and we were consolidating down to six data centers within three U.S. geographic zones. Essentially, we were creating three prime data centers and three backup data centers.

HP decommissioned over 21,000 assets over a one-year period. In addition, there were another 40,000 data-center-related assets. The majority of the data center servers were being ported to new technologies, so this left a tremendous amount of IT product to be decommissioned, retired, and recovered for value.

HPFS was asked to come in and take control of the entire global reverse logistics, the off-site data security plan, as well as the asset re-marketing and recycling process.

The first thing we had to do was establish a global program office with dedicated support. This is all but required for every large global asset-recovery project, so there can be a single point of coordination and focus to make all the trains run on time, so to speak, and to manage and reconcile a single financial and asset-reporting process. This is critical.

HP wanted one place, one report, with reconciled asset-level detail demonstrating that every single asset that came back was audited, wiped of data, and recycled in an environmentally compliant way.

We also needed to set up a global reverse-logistics strategy, which proved to be extremely challenging as well. HP had SWAT teams deployed for time-critical de-installs in some of the smaller remote locations. They needed us to position secured trucks within a one-hour window in places where there was no local storage available to hold the data center equipment as it came out of the data center.

We also had to act as a backup for data eradication. HP’s policy was to wipe the data on-site, but they realized that that’s not always a perfect process, and so they wanted us to again wipe all of the equipment as it came back into our process.

Last, but not least, recovering value for the older end-of-use assets was one of our highlighted strengths. We found 90 percent of the decommissioned data center products to be re-marketable, and that's not unusual in the market. We were able to hand back a net recovery, versus a bill for our services. They were quite pleased with that result, and it must have worked out more than okay, because I'm still around to describe what we did.

Gardner: I can see now why this is under Financial Services. This really is about auditing -- making sure that every asset is managed uniquely and fully, and that nothing falls through the cracks. But, as you point out, there's the whole resale side, with regulation and recycling issues to be managed as well.

Tell me a little more about this. Obviously, HP was doing this for its own good. Are there some other examples we can look to where the bill has been much lower because of the full exercise of your financial secondary-market activities?

A key strength

O'Grady: Sure. That’s where we think our strength is. If you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, and some level of refurbishment is done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

Gardner: While the IT folks don’t necessarily always want to think about the financial implications, they're certainly more likely to get the okay to move ahead with their technical transformations, to get the new tools and systems that they want, if they can better appreciate that there is a way to recover costs safely from existing systems. I think it's probably an important lesson for IT people that they might not have thought of.

Helen, do you have any thoughts about that, about the culture between a financial side and the technical side in DCT?

Tang: We're seeing some interesting shifts right now. So, you're right. Classically, there was this conception that the CFO’s office didn’t understand the needs of IT, and IT didn’t understand how best to save money, and also optimize for those tricky CAPEX and OPEX issues.

But, in the last five years, we've been seeing a shift, and these two organizations are working together more closely. A lot of it is because they have to, with the recession and a lot of the limitations. But we're starting to see a hybrid IT role called the IT controller, who typically reports to the CIO but also has a dotted line to the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology and the financial aspects.

Gardner: Thanks to Jim, we've heard quite a bit about HP's story. Are there some other users out there who have gone through this that we can look to for more understanding of how this should apply to future DCTs? Back to you, Jim.

O'Grady: Sure. There was a case that involved an internationally known food services company that had a data center consolidation and move requirement. This company had 12,000 locations in 35 countries. Their basic need was to economically migrate to a new facility, where there was room for expansion and consolidation.

Their existing server environment was only a couple of years old, and it wasn't economical for them to replace those servers at that point in time. So, the customer asked us to recreate their old data center environment at the new location, with the exact legacy spectrum of the original data center.

Legacy solution

We were more than happy to provide them with a highly customized HP legacy server solution that was identical to their existing equipment, and we did it on a short-term rental basis. Once the new cutover data center was fully operational, we simply brought back all of the equipment in the original data center.

We did it as a trade for the legacy servers that were rented and installed at the new data center site. Essentially, it was an asset swap. We rented equipment to them, we brought back their original data center, and we called it a day. It was an extremely pleasant and easy transition for the customer.

We also helped the customer manage the license transfers from their previously owned servers to the ones that we provided to them.

Gardner: These are things that organizations on their own probably don’t have the visibility and understanding to pursue. Is it fair to say, Jim, that a lot of companies are just leaving money on the table, so to speak, if they try to do this themselves, if they look for some of those local secondary market folks, and are maybe not as creative as they could be in some of these financial approaches to the best outcome for these assets?

O'Grady: I think they do. They typically disconnect some of this activity and don't put it into a holistic view of the DCT effort. Typically, what we find with companies trying to recover value for product is that they give it to their facilities people or the local business units. These folks love to put it on eBay and advertise for the best price. But that's not always the best way to recover value for your data center equipment.

We're now seeing it migrate into the procurement arm. These folks typically put it out for bid and select the highest bid from a lot of the open-market brokers. That's a better strategy for recovering value, but not the best.

Your best bet is to work with a disposition provider that has a very strong re-marketing reach into the global markets, and especially a strong, demonstrable recovery process.

Gardner: I suppose there is a scale issue, too. An organization like HP can absorb a football field full of laptops, whereas not every other entrant in the market, at least the local players, can absorb that sort of scale. So tell me a little bit about the size issues, both scaling up, in terms of the largest types of data centers and IT issues, and also perhaps scaling down, to where specialization or even highly vertical systems are involved.

O'Grady: That's a good point. Especially in the large DCTs, you're getting back a lot of enterprise equipment, and it's easy to re-market it into the secondary market and recover value. You could put it all on the market tomorrow and recover very minimal value, but we have a different process.

We have skilled product traders within our product families who know how to hold product and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and the recovery rates from all the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product, in small lot sizes, to maximize the recovery value for the client.

Gardner: Tell me how you get started on this, Jim. As we said, this is sort of a black box for a lot of people. The IT people don't understand the financial implications, and the financial folks might not understand what's involved when a data center is going to be transformed and what's coming down the avenue that they'll then need to disposition.

So, how do we merge these cultures? How do they get started, and who do you target this information at? Is there a Chief Disposition Officer? I tend to doubt it. Who should be involved? Who should be in charge?

C-level engagement

O'Grady: We recommend that a C-level executive be in charge, whether it's the CIO, the CFO, or the security officer. Someone at a very high level should be engaged. Engaging us is very simple: your best bet is to contact your HP account manager. They'll know how to get in contact with us to bring our services to bear.

You can also go to HP.com and navigate to HPFS; that's really simple. From there, it's an easy process to engage us. If you're looking for pre-owned or rental equipment, we'll provide a dedicated rep, or you can access this product through most authorized HP enterprise resellers. They know how to access legacy product from us as well.

Asset recovery is a much more complex engagement. We start with a consultation with the client on a whole range of topics, to educate them on what the used-IT disposition market is all about, and especially on things to watch out for. These include weighing price versus risk when selecting a vendor, and what to think about to ensure you comply with the emerging legislation and directives you need to deal with, especially environmental waste-stream management, data security, and cross-border movement of non-working IT products.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and put in place a good data-security plan.

We help them understand whether data security should be done on-site or off-site, and whether it's worth the cost to do both. We also help them understand the complexities of data-wiping enterprise product, versus just a plain PC.

Most of the local vendors and providers out there are skilled in wiping data from PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand -- and it's the real hidden complexity -- is how to set up an effective reverse-logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Gardner: Helen, it sounds as if there's a whole lot going on with this cleansing and regrouping of assets to get the most financial return. To me, this is a real educational issue for the DCT process, one that needs to be considered early on and not as an afterthought.

Tang: That’s absolutely true, which is why we reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we bring together your financial side, your operations people, and your CIO -- all the key stakeholders in the same room -- and walk through these common issues that you may or may not have thought about. You can walk out of that room with consensus, a shared vision, and a roadmap customized for your success.

Gardner: Back to you one last time, Jim. Tell us a little more about how people should envision this. If we're in the process, what's the right frame of mind -- the philosophy, if you will -- for the disposition element of any DCT?

Well-established plan

O'Grady: My advice is to have a well-established plan and budget, and to think about it way up front in the process. This is the one area that most of our clients fail to address. What happens is that, at the disposition point, they've accumulated a lot of assets and haven't budgeted for how to disposition those assets. They get caught, so to speak.

Customers should also educate themselves about the market complexities involved in dispositioning their own products, from the data-security standpoint as well as environmental legislation.

As you try to recover value in the secondary market, you own the result of that transaction. So, you could be putting your company brand at risk, if you're not complying with the morass of legislative directives and regulations that you find out there at a global and local level.

Gardner: We've been hearing about how to make the most productive transitions with data center assets and transformation activity, from the vantage points of security and environmental impact, recycling, and resale. I suppose the idea here is to get your older assets out with low risk, while also getting the highest financial return from them.

I want to thank our panelists for helping us sort this out. We've been here with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services. Thanks so much, Jim.

O'Grady: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast discussion on how proper and secure handling of legacy equipment and data is an essential part of data center modernization planning and cost reduction. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
