
Wednesday, February 03, 2010

CERN’s Evolution to Cloud Computing Portends Revolution in Extreme IT Productivity?

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on some likely directions for cloud computing based on the exploration of expected cloud benefits at a cutting edge global IT organization.

We are going to explore the thinking on how cloud computing, both the private and public varieties, might be useful at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today, Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Welcome, Tony.

Tony Cass: Pleased to meet you.

Gardner: We’re also here with Steve Conway, Vice President in the High Performance Computing Group at IDC. Welcome, Steve.

Steve Conway: Thanks. Welcome to everyone.

Gardner: And, we're also here with Randy Clark, Chief Marketing Officer at Platform Computing. Welcome Randy.

Randy Clark: Thank you. Glad to be here.

Gardner: Over the last several years, we've seen cloud computing become quite popular as a concept. It remains largely confined to experimentation, but this notion of private cloud computing is being scoped out by many large and influential enterprises as well as large early adopters like CERN.

Let me go to you Steve Conway. What's the difference between private and public cloud and how far away are any tangible benefits of cloud computing from your perspective?

Already here

Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

Gardner: So, this notion of having a cohesive approach to computing and blending what you do on premises with these other providers isn't just pie in the sky. This is really something people are serious about.

Conway: Well, CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Gardner: Randy Clark, from your perspective, how many customers of Platform Computing would you consider to be seriously evaluating what we now refer to as public or private cloud?

Clark: We have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily a third and possibly more.

Gardner: Tony Cass, let's go to you at CERN. Tell us first a little bit about CERN for those of our readers who don’t know that much or aren't that familiar. Tell us about the organization and what it does, and then we can start to discuss your perceptions about cloud.

Cass: We're a laboratory that exists to enable, initially Europe’s and now the world’s, physicists to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? They're really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

Gardner: So, to look at the IT problem and unpack it a little bit. You're dealing with such enormous amounts of data. You’ve been in the distribution of these workloads for quite some time. Maybe you could explain a little bit the evolution of how you've distributed and managed such extreme workload?

No central management

Cass: If you look at the past, in the 1990’s, we had people collaborating, but there was no central management. Everybody was based at different institutes and people had to submit the workloads, the analysis, or the Monte Carlo simulations of the experiments they needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

A lot of the grid academics have focused on understanding or exploring what could be done with the grid as an idea. What we've been focusing on is making it work -- pushing the envelope not in terms of the technology, but in terms of the scale, to make sure that it works for the users. We connect the sites and run tens of thousands of jobs a day across them, and we've gradually worked through a number of exercises to distribute the data at gigabytes a second.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

Gardner: As I understand it, the interest you have in cloud isn’t strictly a matter of ripping and replacing, but augmenting what you're already doing vis-a-vis these grid models.

Cass: Exactly. The grid solves the problem in which we have data distributed around the world and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a batch slot actually becomes available at site B first while my job is waiting at site A. Somebody else who comes along later gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull down my job to this site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.

These skeleton jobs also install software, because they don’t really trust the sites to have installed the software correctly. So, there's a lot of inefficiency there. This is symptomatic of a more general problem. Batch workers are good at sharing resources that are relatively static, but not when the demand for resource types changes dynamically.
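The skeleton-job scheme Tony describes is what grid practitioners generally call a pilot-job, or late-binding, pattern: placeholder jobs are submitted everywhere, and real work is pulled down only where a slot actually opens. Here is a minimal sketch of the idea, with hypothetical site names and an in-process queue standing in for the experiment's own scheduler -- not CERN's actual middleware:

```python
import queue
import threading

# Central queue of real physics jobs, held by the experiment's
# own scheduler rather than handed to any one site in advance.
work_queue = queue.Queue()
for i in range(8):
    work_queue.put(f"analysis-job-{i}")

def pilot(site_name, results):
    """A 'skeleton' job submitted to every site. Whichever site's
    batch slot frees up first pulls the next real job, so work is
    bound to a site only at the moment a slot actually opens."""
    while True:
        try:
            job = work_queue.get_nowait()
        except queue.Empty:
            return  # no work left; the pilot simply exits
        results.append((site_name, job))
        work_queue.task_done()

results = []
threads = [threading.Thread(target=pilot, args=(s, results))
           for s in ("site-A", "site-B", "site-C")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every job ran, each claimed by whichever site was free first.
assert len(results) == 8
```

Because binding happens at the last moment, no job sits queued at a busy site while a slot stands idle elsewhere, which is the inefficiency of handing jobs to a single site's scheduler up front.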

So, we’re looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to work in virtualization and cloud with the grid, which knows where the data is.

Gardner: Steve Conway, you’ve been tracking HPC for some time at IDC. Maybe you have some perceptions on how CERN is a leading adopter of IT over the years, the types of problems they're solving now, or the types of problems other organizations will be facing in the future. Could you tell us about this management issue and do you think that this is going to become a major requirement for cloud computing?

World technology leader

Conway: Starting with CERN, their scientists have earned multiple Nobel prizes over the years for their work in particle physics. As you said before, CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second, when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale or the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, as Tony said, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute about a $16 billion market.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.

Gardner: How important is this issue that Tony brought up about being able to manage in a dynamic environment and not just more predictable static batch loads?

Conway: It’s the single biggest challenge we see, not only for cloud computing; it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Gardner: Randy Clark, as you look at the marketplace and see organizations like CERN changing their requirements, what, in your thinking, is the most important missing part from what you would do in management with HPC and now cloud? What makes cloud different, from a management perspective?

Dynamic resources

Clark: It’s what Tony said, which is having the resources be dynamic not static. Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

Gardner: Let’s go back to Tony Cass. Tell me what you’re doing with cloud in terms of exploration. I know you’re not in a position to validate, or you haven’t put in place, any large-scale implementation or solutions that would lead the market. But, I’m very curious about what the requirements are. What are the problems that you're trying to solve that you think cloud computing specifically can be useful in?

Cass: The specific problem that we have is to deliver the most physics we can within the fixed budget and the fixed amount of resources. These are limited either by money or by data-center cooling and generally are much less than the experiment wants. The key aim is to deliver the most cycles we can and the most efficient computing we can to the physicists.

I said earlier that we're looking at virtualization to do this. We’ve been exploring how to make sure that the jobs can work in a virtual environment and that we can instantiate virtual machines (VMs), as necessary, according to the different experiments that are submitting workloads at one time to integrate the instantiation of VMs with the batch system.

Once we got that working, we figured that the real problem was managing the number of VMs. We have something like 4,000 boxes, but if you have a VM per core, plus a few spare, then it can easily get to 60,000, 70,000, or 80,000 VMs. Managing these is the problem that we're trying to explore now, moving away from “can we do it” to “can we do it on a huge scale?”
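The scaling question Tony raises -- how many VMs of each experiment's image a fixed pool of hardware should run as demand shifts -- can be illustrated with a toy allocation function. The box count, cores per box, and experiment names below are illustrative assumptions, not CERN's actual configuration:

```python
# A toy illustration of the reprovisioning decision: given a fixed
# pool of physical cores and fluctuating per-experiment demand,
# decide how many one-core VMs of each experiment's image to run.
TOTAL_CORES = 4000 * 16  # e.g. 4,000 boxes at 16 cores each

def plan_vms(queued_jobs):
    """Allocate VMs to experiments in proportion to the number of
    jobs each has queued, using largest-remainder rounding so the
    allocations always sum to the pool size."""
    total = sum(queued_jobs.values())
    if total == 0:
        return {name: 0 for name in queued_jobs}
    shares = {name: TOTAL_CORES * n / total
              for name, n in queued_jobs.items()}
    alloc = {name: int(s) for name, s in shares.items()}
    leftover = TOTAL_CORES - sum(alloc.values())
    # hand the remaining cores to the largest fractional remainders
    for name in sorted(shares, key=lambda k: shares[k] - alloc[k],
                       reverse=True)[:leftover]:
        alloc[name] += 1
    return alloc

demand = {"atlas": 30000, "cms": 20000, "alice": 10000}
plan = plan_vms(demand)
assert sum(plan.values()) == TOTAL_CORES
```

A real system would also have to account for image distribution, spare capacity, and the cost of instantiating and tearing down VMs, which is where the management problem at 60,000-plus VMs comes from.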

Gardner: Are you yet at the point where you want to be able to manage the VMs that you have under your own control, and perhaps start to deploy virtualized environments and workloads in someone else’s cloud and manage them as complementary?

Cass: There are two aspects to that. The resources in our community are at other sites, and all of the sites are very independent. They are also academic environments. So, they are exploring things in their own way as well. At the moment, we're looking at how you can reliably send a virtual image that's generated at one place to another site.

Amazon does this, but there are tight constraints in the way they manage that cluster, because they built it thinking about this. Universities maybe didn’t build their own cluster in a way that separates that out from some of the other computing they're doing. So, there are security and trust implications there that we are looking at. That will be a thing to collaborate on long-term.

More cost effective

Certainly, if we configure things in our own way, when we look at a cloud environment, perhaps it will be more cost-effective for us to purchase only the equipment we need for the average workload and then buy resources from Amazon or other providers. But, there are interesting things you have to explore around the fact that the data is not at Amazon, even if they have the cycles.

There are so many things that we’re thinking about. The one we’re focusing on at the moment is effectively managing the resources that we have here at CERN.

Gardner: Steve Conway, it sounds as if CERN has, with its partner network, a series of what we might call private-cloud implementations, and they're trying to get them to behave in concert at what we might call a public cloud level. That exercise could, as with the World Wide Web, create some de-facto standards and approaches that might, in fact, help what we call hybrid cloud computing move forward. Does that fairly summarize where we are?

Conway: That’s right. There are going to have to be more rigorous open standards for the clouds. What Tony was talking about at CERN is something that we see elsewhere. People are turning to public clouds today -- "turning to" just meaning exploring at this point -- as a way to handle overflow and surge workloads.

The Internet itself is a pretty high-latency network, if you think of it that way. People are looking to send out the portions of the workload that don't have a lot of communication dependencies, particularly inter-processor communication dependencies, because the latency doesn't support those.

But, we're seeing some smaller and medium-size businesses looking to public clouds as a way to avoid having to purchase their own internal resources, clusters for example, and also as a way of avoiding having to hire experts who know how to operate them. For example, engineering services firms don't have those experts in house today.

Gardner: Back to you, Tony Cass. I know this is still a bit hypothetical, but if there were standards in place, and you were able to go to a third-party cloud provider for some of these spikes or occasional dynamically generated workloads that exceed your current on-premises capabilities, would this be a financial boon to you, where you could protect your pricing and decide the right supply-and-demand fit when it comes to these extreme computing problems?

Cass: It would certainly be a boon. The possibility is being demonstrated by experiments that are actually based at Brookhaven to do simulations that are CPU-intensive, where they don't need much data transfer or data access. They have been able to run simulations cost-effectively with EC2.

Although their cycles, compared to some of the things we're doing, are more expensive, if we don't have to buy all of the resources, we could certainly save money. Another aspect is beyond money in some sense. If you need to get something fixed for a conference, and you are desperately trying to decide whether or not you’ve discovered the Higgs, then it's not a case of “money's no object,” but you can get the resources from a cloud much more quickly than you can install capacity at CERN. So both aspects are definitely of interest.

Gardner: Randy Clark, this makes a great deal of sense from the perspective of a large research organization. But, we're not just talking about specific workloads. We're talking about workloads that will be common across many other vertical industries or computing environments. Can you name a few, or mention some from your experience, where we should expect the same sorts of economic benefits to play out?

Different use cases

Clark: What we're seeing is across industries. Financial services is certainly taking a leadership role. There's a lot going on in the semiconductor or electronic industry. Business intelligence (BI) is across industries and government. So, across industries, we see different use cases.

To your point, these use cases are enterprise applications to run the business, and we're seeing that in Java applications, test and development environments, and traditional HPC environments.

That's something driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally and, to your point, in enterprise and HPC applications.

Gardner: Steve Conway, thinking about these large datasets, Randy brought up BI, and that, of course, means warehousing, data analytics, and advanced analytics. A lot of organizations are creating datasets at a scale never anticipated, never mind seen before, things from sensors, mobile devices, network computing, or social networking.

How do we bring together these compute resources, the raw power, with these large data sets? I think this is an issue that CERN might also be a bellwether on, in somehow managing these large data sets and the compute power, bringing them architecturally into alignment.

Conway: BI is one of those markets that, in its attributes, straddles the world of HPC and enterprise computing, just as financial services does, in the sense that they have workloads that don't have a whole lot of communications dependencies. They don't, for the most part, need networks with very low latency.

You see organizations like the University of Phoenix, which has 280,000 online students, that have already made this evolution -- in this case, with Platform helping them out -- from clusters to grid computing today. Now, they're looking toward cloud computing as a way to take them further.

You also see that not just on the private-sector side. One of the other active customers that's really looking in that same direction is the Centers for Disease Control (CDC), which has moved from clusters to grid computing.

What you're seeing here is people who have already stepped through the earlier stages of this evolution. They've gone from clusters to grid computing for the most part and now are contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.

Gardner: Tell us about some of the solutions that you now need to bring to market or are bringing to market around management and other issues? Where have you found that the rubber hits the road, in terms of where people can take this in real time? What's the current state of the art? Rather than talking about hypothetical, what's now possible, when it comes to moving from cluster and grid to the revolution of cloud?

Interaction of technologies

Clark: What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic. It's the ability to add virtualization on the resource side, and then, on the service side, to make it Internet accessible, have a service catalog, and move from providing IT support to delivering IT as a true competitive service.

The state of the art is that you can get the best of Amazon -- ease of use, cost, accessibility -- with the configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Gardner: Randy, what are some typical ways that you're seeing people getting started, when they want to make these leaps from evolutionary progression to revolutionary paybacks? Where do they start making that sort of catalytic difference?

Clark: The evolution is the technology, as Steve said. The revolution is in the architectural approach to getting to that new spot.

Taking a step back, we see customers thinking architecturally about how they want to have that management layer. What is that management layer going to mean to them going forward? And, can they quickly identify a set of applications and resources and get started?

So, there is an architecture piece to it, thinking about what the future will hold, but then there is a very pragmatic piece -- let's get going, let's engage, let's build something and be able to scale that out over time. We saw that approach in grid computing. We're encouraging folks to think, but then also to get started.

Gardner: Tony Cass at CERN, what are your next steps? Where would you expect to be heading next as you explore the benefits and possible real-world opportunities?

Cass: We’re definitely concentrating, for the moment, on how we exploit resources effectively here. The wider benefits we'll have to discuss with our community.

Gardner: What would you like to see happen next?

Focusing on delivery

Cass: What I would like to see happen next is a definite cloud environment at CERN, where we move from something that we're thinking about to something that is in operation, where we have the ability to use resources that aren’t primarily dedicated to physics computing to deliver cycles to the experiments. I'd like to see a cloud, a dynamically evolving environment, in our computer center. We’re convinced it's possible, but delivering that is what we’re focusing on.

Gardner: Steve Conway, where do you see things headed next? What are the next steps that we should look for, as we move from that evolutionary progression to more of a revolutionary productivity?

Conway: It's along a couple of dimensions. One is the dimension of people actually working in these environments. In that sense, the CERN-Platform collaboration is going to help drive the whole state of the art forward over the next period of time.

The other one, as Randy mentioned before, is that the evolution of standards is going to be important. For example, right now, one of the barriers to public-cloud computing is vendor lock-in, where the clouds -- the Amazons, the Yahoos, and so forth -- are not necessarily interoperable. People are a little bit concerned about trusting their data there. The evolution of standards is going to accelerate this trend.

Gardner: Why don’t I give the last word today to Randy? Tell us about some information that's available out there for folks who are looking to explore and take some first steps toward this more revolutionary benefit.

Clark: I'd encourage everybody to visit our website. There are a number of white papers, webinars, and webcasts that we've done with other customers to highlight some other use cases within development, test, and production environments. I'd point people to the resource page on our website www.platform.com.

Gardner: I want to thank our guests. This has been a very interesting discussion, and I certainly look forward to following what CERN does, because I do think that they’re going to be a leader in terms of what many others will end up doing in B2B cloud computing.

Thank you to Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Thank you, sir.

Cass: Thank you.

Gardner: And also a good, big thank you to Steve Conway, Vice President in the High Performance Computing Group at IDC. Thank you, Steve.

Conway: Thanks.

Gardner: And also, of course, thank you to Randy Clark, Chief Marketing Officer at Platform Computing.

Clark: Thank you for the opportunity.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast on what likely outcomes we can expect from cloud computing and architecture, on the progression from grid to cloud computing, and moving into a more revolutionary set of benefits. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Sunday, October 25, 2009

Application Transformation Case Study Targets Enterprise Bottom Line with Eye-Popping ROI

Transcript of the first in a series of sponsored BriefingsDirect podcasts -- "Application Transformation: Getting to the Bottom Line" -- on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

This podcast is the first in a series of three examining "Application Transformation: Getting to the Bottom Line." We'll discuss the rationale and likely returns of assessing the true role and character of legacy applications, and then assess the true paybacks from modernization.

The ongoing impact of the reset economy is putting more emphasis on lean IT -- of identifying and eliminating waste across the data-center landscape. The top candidates, on several levels, are the silo-architected legacy applications and the aging IT systems that support them.

We'll also uncover a number of proven strategies on how to innovatively architect legacy applications for transformation and for improved technical, economic, and productivity outcomes. The podcasts coincide with, and run in support of, HP virtual conferences on the same subjects.

Here to start us off on our series on the how and why of transforming legacy enterprise applications is Paul Evans, worldwide marketing lead on Applications Transformation at HP. Welcome, Paul.

Paul Evans: Hi, Dana.

Gardner: We're also joined by Luc Vogeleer, CTO for Application Modernization Practice in HP Enterprise Services. Welcome to the show, Luc.

Luc Vogeleer: Hello, Dana. Nice to meet you.

Gardner: Let's start with you, Paul, if you don't mind. You have this virtual conference coming up, and the focus is on a variety of use cases for transformation of legacy applications. I believe this has gone beyond the point in the market where people do this because it's a "nice to have" or a marginal improvement. We've seen it begin with a core of economic benefits here.

Evans: It's very interesting to observe what has happened. When the economic situation hit really hard, we definitely saw customers retreat, and basically say, "We don't know what to do now. Some of us have never been in this position before in a recessionary environment, seeing IT budgets reduce considerably."

That wasn't surprising. We sort of expected it across all of HP. People had prepared for that, and I think that's why the company has weathered the storm. But, at a very macro level, it was obvious that people would retrench and then scratch their heads and say, "Now what do we do?"

A different dynamic

Now, six months or nine months later, depending on when you believe the economic situation started, we're seeing a different dynamic. We're definitely seeing something like a two-fold increase in what you might call "customer interest." The number of opportunities we're seeing as a company has doubled over the last six or nine months.

I think that's based on the fact, as you pointed out, that if you ask any CIO or IT head, "Is application transformation something you want to do," the answer is, "No, not really." It's like tidying your garage at home. You know you should do it, but you don't really want to do it. You know that you benefit, but you still don't want to do it.

Because of the pressure that the economy has brought, this has moved from being something that maybe I should do to something that I have to do, because there are two real forces here. One is the force that says, "If I don't continue to innovate and differentiate, I go out of business, because my competitors are doing that." If I believe the economy doesn't allow me to stand still, then I've got it wrong. So, I have to continue to move forward.

Secondly, I have to reduce the amount of money I spend on my innovation, but at the same time I need a bigger payback. I've got to reduce the cost of IT. Now, with 80 percent of my budget being dedicated to maintenance, that doesn't move my business forward. So, the strategic goal is, I want to flip the ratio.

I want to spend more on innovation and less on maintenance. People now are taking a hard look at, "Where do I spend my money? Where are the proprietary systems that I've had around for 10, 20, 30 years? Where do these soak up money that, honestly, I don't have today anymore?"

I've got to find a cheaper way, and I've got to find solutions that have a rapid return on investment (ROI), so that maybe I can afford them, but I can only afford them on the basis that they are going to repay me quickly. That's the dynamic that we're seeing on a worldwide basis.

That's why we've put together a series of webinars, virtual events that people can come to and listen to customers who've done it. One of the biggest challenges we face is that customers obviously believe that there is potential risk. Of course there is risk, and if people ask us, we'll tell them.

Our job is to minimize that risk by exposing them to customers who have done it before. They can view those best-case scenarios and understand what to do and what not to do. Remember, we do a lot of these things. We've built up massive skills experience in this space. We're going to share that on this global event, so that people get to hear real customers talking about real problems and the benefits that they've achieved from that.

We'll top-and-tail that with a session from Geoffrey Moore, who'll talk about where you really want to focus your investment in terms of core and context applications. We'll also hear from Dale Vecchio, vice president of research at Gartner, giving us some really good insight into best practices for moving forward. That's really what the event is all about -- "It's not what I want to do, but it's what I am going to have to do."

Gardner: I've seen the analyst firms really rally around this. For example, this week I've been observing the Forrester conference via Twitter, reading the tweets of the various analysts and others at the conference. This whole notion of Lean IT is a deep and recurring topic throughout.

It seems to me that we've had this shift in psychology. You termed it a shift from "want to" to "must." I think what we've seen is people recognizing that they have to cut their costs and bite the bullet. It's no longer putting this off and putting this off and putting this off.

Still don't understand

Evans: No. Part of HP's portfolio is hardware. For a number of years, we've seen people who have consulted with us, bought our equipment to consolidate and virtualize their systems, and built some very, very smart Lean IT solutions. But, when they stand back from it, they still say, "The line-of-business manager is still giving me heartache, because it takes us six months to make a change."

We're still challenged by the fact that we don't really understand the structure of our applications. We're still challenged by the fact that the people who know about these applications are heading toward retirement. And, we're still challenged by the thought of what we're going to do when they're not here. None of that has changed.

Although every day we're finding inherently smarter ways to use silicon, faster systems, blade systems, and scaling out, the fundamental thing that has affected IT for so many years is now right, smack dab, in the cross hairs of the target -- people saying that if this is done properly, we'll improve our agility, differentiation, and innovation, while at the same time cutting costs.

In a second, we'll hear about a case study that we are going to talk about at these events. This customer got an ROI in 18 months. In under 18 months, the savings they had made -- and this runs into millions of dollars -- had paid for the project. After that, it was pure money to the bottom line, and that's what this series is all about.

Gardner: Luc, we've certainly seen, both from the analysts and from folks like HP, a doubling or certainly a very substantial increase in inquiries and interest in doing legacy transformation. The desire is there. Now, how do we go beyond theory and get into concrete practice?

Vogeleer: From an HP perspective, we take a very holistic approach and look at the entire portfolio of applications from a customer. Then, from that application portfolio -- depending on the usage of the application, the business criticality of the application, as well as the frequency of changes that this application requires -- we deploy different strategies for each application.

We not only focus on one approach of completely re-writing or re-platforming the application or replacing the application with a package, but we go for a combination of all those elements. By doing a complete portfolio assessment, as a first step into the customer legacy application landscape, we're able to bring out a complete road map to conduct this transformation.

The road map covers both which of the strategies each application will follow and the sequence in time. We first execute the quick wins -- applications that bring a quick ROI -- and the benefits from those quick wins are immediately reinvested to continue the transformation. So, transformation is not just one project, not just one shot. It's a continuous program over time, in which all the legacy applications are progressively migrated to a more agile and cost-effective platform.
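The triage Luc describes -- routing each application to a strategy based on criticality and how often it changes -- can be sketched in code. This is a minimal, hypothetical illustration; the enum names and scoring thresholds are my own assumptions, not HP's actual portfolio-assessment tooling.

```java
// Hypothetical sketch of portfolio triage: each application is scored on
// business criticality and change frequency, then mapped to a strategy.
// None of these names or thresholds come from HP's assessment tools.
public class PortfolioTriage {

    enum Strategy { REPLACE_WITH_PACKAGE, REPLATFORM, AUTO_CONVERT, REWRITE }

    // criticality and changeFrequency are illustrative 0-10 scores.
    static Strategy classify(boolean packageAvailable, int criticality, int changeFrequency) {
        if (packageAvailable) {
            return Strategy.REPLACE_WITH_PACKAGE;  // e.g. HR/accounting -> packaged app
        }
        if (changeFrequency <= 2 && criticality <= 2) {
            return Strategy.REPLATFORM;            // stable batch jobs: move host, keep code
        }
        if (criticality >= 8) {
            return Strategy.REWRITE;               // conversion would cost more than a rewrite
        }
        return Strategy.AUTO_CONVERT;              // bulk of the code: automated COBOL-to-Java
    }

    public static void main(String[] args) {
        System.out.println(classify(true, 5, 5));   // REPLACE_WITH_PACKAGE
        System.out.println(classify(false, 1, 1));  // REPLATFORM
        System.out.println(classify(false, 9, 6));  // REWRITE
        System.out.println(classify(false, 4, 7));  // AUTO_CONVERT
    }
}
```

The point of the sketch is only that the decision is made per application, from a handful of measurable attributes, before any migration work begins.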

Gardner: It certainly helps to understand the detail and approach to this through an actual implementation and a process. I wonder if you could tell us about the use case we're going to discuss, some background on that organization, and their story?

Vogeleer: The Italian Ministry of Instruction, University and Research (MIUR), the customer we're going to cover in this case, is a large governmental organization with an overall budget of €55 billion.

This Italian public education sector serves 8 million students from 40,000 schools, and the schools are located across the country in more than 10,000 locations, with each of those locations connected to the information system provided by the ministry.

Very large employer

The ministry is, in fact, one of the largest employers in the world, with over one million employees. Its system manages both permanent and temporary employees, such as teachers and substitutes, as well as the administrative employees. It also supports the ministry's users, about 7,000 or 8,000 school employees. It's a very large employer with a large number of users connected across the country.

Why did they need to modernize their environment? Their system was written in the early 1980s on an IBM mainframe architecture. In the early 2000s, there was a substantial change in Italian legislation, the so-called Devolution Law. The Devolution Law decentralized processes down to the school level and moved administrative processes from the central ministry into the regions -- and there are 20 different regions in Italy.

This change implied a completely different process workflow within their information systems, and the legacy approach to making those changes was very time-consuming and inappropriate. A number of applications were developed incrementally to fulfill the new organizational requirements, but this very quickly became completely unmanageable and inflexible, while the aging legacy systems were expected to change quickly.

In addition to the agility needed to change applications to meet the new legislative requirements, costs in that context went completely out of control. So, the single most important objective of the modernization was to design and implement a new architecture that could reduce cost and provide a more flexible and agile infrastructure.

Gardner: We certainly get a better sense of the scope with this organization, a great deal of complexity, no doubt. How did you begin to get into such a large organization with so many different applications?

Vogeleer: The first step we took was to develop a modernization road map that took into account the organizational change requirements, using our service offering, which is the application portfolio assessment.

Through that standard engagement, we did an analysis of the complete set of applications and associated data assets from multiple perspectives: financial, business, functional, and technical.

From those different dimensions, we could make the right decision on each application. The application portfolio assessment ensured that the client's business context and strategic drivers were understood, before commencing a modernization strategy for a given application in the portfolio.

A business case was developed for modernizing each application, an approach that was personalized for each group of applications and was appropriate to the current situation.

Gardner: How many people were devoted to this particular project?

Some 19,000 programs

Vogeleer: We did the assessment phase with a staff of seven people. Those seven people looked into the customer's 20 million lines of code using automated tools. About 19,000 programs were involved in the analysis. Out of that, we grouped the applications into categories and then defined different strategies for each category of programs.

Gardner: How about the timing on this? I know it's big and complicated and can go on and on, but the general scoping and the assessment phase -- how long do these sorts of activities generally take?

Vogeleer: If we look at the way we conducted the program, this assessment phase took about three months with the seven people. From there, we did a first transformation pilot, with a small staff of people in three months.

After the pilot, we went into the complete transformation and user-acceptance testing, and after an additional year, 90 percent of the transformation was completed. The transformation covered about 3,500 batch processes and the re-architecting of 7,500 programs, and all the screens were transformed as well. That was a larger effort, with a team of about 50 people over one year.

Gardner: Can you tell us where they ended up? One of the things I understand about transformation is that you not only need to assess what you've got, but you also need to know where you're going to take it.

Vogeleer: As I indicated at the beginning, we use a mixture of different strategies for modernization. First of all, we looked into the accounting and HR system -- the accounting and HR system for non-teacher employees. This was initially written on the mainframe and carried a low level of customization, so there was a relatively limited need for integration with the rest of the application portfolio.

In that case, we selected Oracle Human Resources, Oracle Self-Service Human Resources, and Oracle Financials as the packages to implement. The strategy for that component was to replace it with packaged applications. Twenty years ago, such accounting packages didn't exist, so those functions were written completely in custom COBOL. Now that suitable applications exist, we can replace them.

Secondly, we looked into the batch COBOL applications on the mainframe. In that scenario, there were limited changes to those applications, so a simple re-platforming of the applications from the IBM 3070 onto a Linux platform was a sufficient approach.

More important were all the transactional COBOL/CICS applications. Those needed to be refactored and re-architected for the new platform. So, we took the legacy COBOL sources and transformed them into Java.

Also, different techniques were used there. We tried to use automated conversion, especially for non-critical programs that are not frequently changed. That represented 60 percent of the code. This code could be translated immediately, removing only the barriers in the code that prevented it from compiling.

All barriers removed

We also had frequently updated programs, where all barriers were removed and the code was completely cleaned up in the conversion. Then there were the critical programs, where the conversion effort was bigger than the rewrite effort; 30 percent of the programs were completely rewritten.

Gardner: You said that 60 percent of the code was essentially being supported through these expensive systems, doing what we might consider commodity functionality nowadays.

Vogeleer: Let me clarify what happens with those 60 percent.

We considered that 60 percent of the code was code that is not frequently changed, so we used automatic conversion of this code from COBOL to Java, creating automatically translated Java procedures. The result is probably not easy to read, but the advantage is that, because the code isn't changed often, the day we do need to change it, we already have Java source code from which we can start. That was the reason to do an automated conversion from COBOL to Java rather than a rewrite.
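To make Luc's point concrete, here is a hypothetical illustration of what a line-for-line automated COBOL-to-Java translation tends to produce: Java that compiles and behaves correctly but still reads like COBOL. The program name, fields, and COBOL shown in the comments are invented for illustration, not output of the actual tool used on this project.

```java
// Illustrative only: a mechanical COBOL-to-Java translation. The original
// COBOL (in comments) and the generated style are hypothetical examples.
public class Salary0420 {                      // from PROGRAM-ID. SALARY0420.

    // WORKING-STORAGE SECTION.
    // 01 WS-GROSS   PIC 9(7)V99.
    // 01 WS-TAX     PIC 9(7)V99.
    // 01 WS-NET     PIC 9(7)V99.
    private double wsGross;
    private double wsTax;
    private double wsNet;

    // COMPUTE-NET-PAY.
    //     COMPUTE WS-TAX = WS-GROSS * 0.27.
    //     COMPUTE WS-NET = WS-GROSS - WS-TAX.
    void computeNetPay() {
        wsTax = wsGross * 0.27;
        wsNet = wsGross - wsTax;
    }

    // Each COBOL paragraph becomes a method; PERFORM becomes a call.
    double run(double gross) {
        wsGross = gross;
        computeNetPay();                       // PERFORM COMPUTE-NET-PAY
        return wsNet;
    }
}
```

Code in this style is exactly what Luc describes: hard to read, but a working Java starting point for the rarely changed 60 percent, while frequently changed and critical code earns a cleanup or a full rewrite.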

Gardner: Now we've certainly got a sense of where you started and where you wanted to end up. What were the results? What were some of the metrics of success -- technical, economic, and in productivity?

Vogeleer: The result, I believe, was very impressive. The applications are now accessed through a more efficient web-based user interface, which replaces the green screen and provides improved navigation and better overall system performance, including improved user productivity.

End-user productivity, as I mentioned, is doubled in terms of the daily operation of some business processes. Also, the overall application portfolio has been greatly simplified by this approach. The number of function points that we're managing has decreased by 33 percent.

From a financial perspective, there are also very significant results. Hardware and software license and maintenance cost savings were about €400,000 in the first year, €2 million in the second year, and are projected to be €3.4 million this year. This represents a savings of 36 percent of the overall project.

Also, because of the move from COBOL to Java technology, the lower cost of programmers, and the use of packaged applications, development costs have now dropped by 38 percent.

Gardner: I think that's very impressive. I want to go quickly to Paul Evans. Is this unusual or is it typical? How consistent are these sorts of returns when we look at a transformation project?

Evans: Well, of course, as a marketing person I'd say that we get this return every time, and everybody would laugh, just as you did. In general, people are very keen on total cost of ownership (TCO) and ROI, especially the ROI. They say, "Look, maybe I can afford something, but I've got to feel certain that I am going to get my money back -- and quickly."

ROI of 18-24 months

I don't want to say that you're going to get it back in 10 years time. People just aren’t going to be around that long. In general, when we're doing a project, as we did here in Italy, which combines applications modernization and an infrastructure renew, an ROI of around 18-24 months is usually about the norm.

We have tools online. We have a thing called the TCO Challenge. People can insert the configuration of the current system today. Then, we promote a comparable system from HP in terms of power and performance and functionality. We provide not only the price of that system, but, more importantly, we provide the TCO and ROI data. Anyone can go online and try that, and what they'll see is an ROI of around 18 months.
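The payback arithmetic behind "an ROI of around 18 months" is simple enough to sketch. The figures below are illustrative assumptions only; they are not taken from HP's TCO Challenge tool or the MIUR project.

```java
// Sketch of payback-period arithmetic. All figures are illustrative.
public class Payback {

    // Returns the month in which cumulative savings first cover the
    // one-time project cost, or -1 if they never do within the horizon.
    static int paybackMonth(double projectCost, double monthlySavings, int horizonMonths) {
        double cumulative = 0.0;
        for (int month = 1; month <= horizonMonths; month++) {
            cumulative += monthlySavings;
            if (cumulative >= projectCost) {
                return month;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // A hypothetical 3.6M project saving 200K/month breaks even in
        // month 18; everything after that goes straight to the bottom line.
        System.out.println(paybackMonth(3_600_000, 200_000, 60));
    }
}
```

A real TCO comparison would fold in license, maintenance, power, and staffing deltas rather than a single monthly figure, but the break-even logic is the same.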

This is why I think we're beginning to see this up-take in momentum. People are hearing about these case studies and are beginning to believe that this is not just smoke and mirrors, and it's not marketing people like me all the time.

People like Luc are out there at the coalface, working with customers who are getting these results. They're not getting the results because there is something special or different. This was the type of solution we deliver every day of the week, and these results are fairly commonplace.

Gardner: Luc, certainly the scale of this particular activity, this project set, convinces me that the automation is really key. The scale and the size of the code base that you dealt with, the number of people, and the amount of time that were devoted are pretty impressive. What's coming next down the avenue in terms of the automation toolset? I can only assume that this type of activity is going to become faster, better, and cheaper?

Vogeleer: Yes, indeed. What we realized here is that, although we didn't rewrite all the code, the 80 percent of the migrated code that we converted with automated tools is very stable and infrequently modified. We now have a base from which we can easily rework it.

Tools are improving, and we also see those tools growing in the direction of being integrated with the integrated development environments (IDEs) that programmers use. So, it's becoming very common for the new programming style to be tightly integrated with the conversion and migration tools, which allows the new generation of programmers to work with those tools very easily.

Gardner: And, the labor pools around the world that produce the skill sets that are required for this are ready and growing. Is that correct?

Vogeleer: Yes, that's right. As I indicated, the savings achieved in development cost by changing the programming language -- thanks to the large pool of programmers available and the lower labor cost -- came to 38 percent.

Gardner: Very good. We've certainly learned a lot about the paybacks from transformation of legacy enterprise applications and systems. This podcast is the first in a series of three to examine application transformation getting to the bottom-line.

There is also a set of webinars and virtual conferences from HP on the same subject. I want to thank our guests for today’s insights and the use-case of the Italian Ministry of Instruction, University and Research (MIUR). Thanks, Paul Evans, worldwide marketing lead on Applications Transformation at HP.

Evans: Thanks, Dana.

Gardner: We’ve also been joined by Luc Vogeleer, CTO for the Application Modernization Practice in HP Enterprise Services. Thanks so much, Luc.

Vogeleer: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.



Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real time answers to your questions, register to the virtual conferences for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of the first in a series of sponsored BriefingsDirect podcasts -- "Application Transformation: Getting to the Bottom Line" -- on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.