
Sunday, October 25, 2009

Application Transformation Case Study Targets Enterprise Bottom Line with Eye-Popping ROI

Transcript of the first in a series of sponsored BriefingsDirect podcasts -- "Application Transformation: Getting to the Bottom Line" -- on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real-time answers to your questions, register for the virtual conference for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

This podcast is the first in a series of three to examine "Application Transformation: Getting to the Bottom Line." We'll discuss the rationale and likely returns of assessing the true role and character of legacy applications, and then weigh the real paybacks from modernization.

The ongoing impact of the reset economy is putting more emphasis on lean IT -- of identifying and eliminating waste across the data-center landscape. The top candidates, on several levels, are the silo-architected legacy applications and the aging IT systems that support them.

We'll also uncover a number of proven strategies for innovatively architecting legacy applications for transformation and for improved technical, economic, and productivity outcomes. The podcasts run in conjunction with HP virtual conferences on the same subjects.

Here to start us off on our series on the how and why of transforming legacy enterprise applications is Paul Evans, worldwide marketing lead on Applications Transformation at HP. Welcome, Paul.

Paul Evans: Hi, Dana.

Gardner: We're also joined by Luc Vogeleer, CTO for Application Modernization Practice in HP Enterprise Services. Welcome to the show, Luc.

Luc Vogeleer: Hello, Dana. Nice to meet you.

Gardner: Let's start with you, Paul, if you don't mind. You have this virtual conference coming up, and the focus is on a variety of use cases for transformation of legacy applications. I believe the market has gone beyond the point where people do this because it's a "nice to have" or a marginal improvement. We've seen it become grounded in a core of economic benefits.

Evans: It's very interesting to observe what has happened. When the economic situation hit really hard, we definitely saw customers retreat, and basically say, "We don't know what to do now. Some of us have never been in this position before in a recessionary environment, seeing IT budgets reduce considerably."

That wasn't surprising. We sort of expected it across all of HP. People had prepared for that, and I think that's why the company has weathered the storm. But, at a very macro level, it was obvious that people would retrench and then scratch their heads and say, "Now what do we do?"

A different dynamic

Now, six months or nine months later, depending on when you believe the economic situation started, we're seeing a different dynamic. We're definitely seeing something like a two-fold increase in what you might call "customer interest." The number of opportunities we're seeing as a company has doubled over the last six or nine months.

I think that's based on the fact, as you pointed out, that if you ask any CIO or IT head, "Is application transformation something you want to do," the answer is, "No, not really." It's like tidying your garage at home. You know you should do it, but you don't really want to do it. You know that you benefit, but you still don't want to do it.

Because of the pressure that the economy has brought, this has moved from being something that maybe I should do to something that I have to do, because there are two real forces here. One is the force that says, "If I don't continue to innovate and differentiate, I go out of business, because my competitors are doing that." If I believe the economy doesn't allow me to stand still, then I've got it wrong. So, I have to continue to move forward.

Secondly, I have to reduce the amount of money I spend on my innovation, but at the same time I need a bigger payback. I've got to reduce the cost of IT. Now, with 80 percent of my budget being dedicated to maintenance, that doesn't move my business forward. So, the strategic goal is, I want to flip the ratio.

I want to spend more on innovation and less on maintenance. People now are taking a hard look at, "Where do I spend my money? Where are the proprietary systems that I've had around for 10, 20, 30 years? Where do these soak up money that, honestly, I don't have today anymore?"

I've got to find a cheaper way, and I've got to find solutions that have a rapid return on investment (ROI), so that maybe I can afford them, but I can only afford them on the basis that they are going to repay me quickly. That's the dynamic that we're seeing on a worldwide basis.

That's why we've put together a series of webinars, virtual events that people can come to and listen to customers who've done it. One of the biggest challenges we face is that customers obviously believe that there is potential risk. Of course there is risk, and if people ask us, we'll tell them.

Our job is to minimize that risk by exposing them to customers who have done it before. They can view those best-case scenarios and understand what to do and what not to do. Remember, we do a lot of these things. We've built up massive skills experience in this space. We're going to share that on this global event, so that people get to hear real customers talking about real problems and the benefits that they've achieved from that.

We'll top-and-tail that with a session from Geoffrey Moore, who'll talk about where you really want to focus your investment in terms of core and context applications. We'll also hear from Dale Vecchio, research vice president at Gartner, who will give us some really good insight into best practices for moving forward. That's really what the event is all about -- "It's not what I want to do, but what I'm going to have to do."

Gardner: I've seen the analyst firms really rally around this. For example, this week I've been observing the Forrester conference via Twitter, reading the tweets of the various analysts and others at the conference. This whole notion of Lean IT is a deep and recurring topic throughout.

It seems to me that we've had this shift in psychology. You termed it a shift from "want to" to "must." I think what we've seen is people recognizing that they have to cut their costs and bite the bullet. It's no longer something they can keep putting off.

Still don't understand

Evans: No. Part of HP's portfolio is hardware. For a number of years, we've seen people who have consulted with us, bought our equipment to consolidate and virtualize their systems, and built some very, very smart Lean IT solutions. But, when they stand back from it, they still say, "The line-of-business manager is still giving me heartache, because it takes us six months to make a change."

We're still challenged by the fact that we don't really understand the structure of our applications. We're still challenged by the fact that the people who know about these applications are heading toward retirement. And, we're still challenged by the thought of what we're going to do when they're not here. None of that has changed.

Although every day we're finding inherently smarter ways to use silicon, faster systems, blade systems, and scale-out, the fundamental thing that has affected IT for so many years is now smack dab in the cross hairs: people saying that if this is done properly, we'll improve our agility, our differentiation, and our innovation, while at the same time cutting costs.

In a second, we'll hear about a case study that we're going to talk about at these events. This customer got an ROI in 18 months. Within 18 months, the savings they made -- and they run into millions of dollars -- had paid for the project. Their new system paid for itself in under 18 months. After that, it was pure money to the bottom line, and that's what this series is all about.

Gardner: Luc, we certainly have seen, both from the analysts and from folks like HP, a doubling, or certainly a very substantial increase, in inquiries and interest in doing legacy transformation. The desire is there. Now, how do we go beyond theory and get into concrete practice?

Vogeleer: From an HP perspective, we take a very holistic approach and look at the entire portfolio of applications from a customer. Then, from that application portfolio -- depending on the usage of the application, the business criticality of the application, as well as the frequency of changes that this application requires -- we deploy different strategies for each application.

We not only focus on one approach of completely re-writing or re-platforming the application or replacing the application with a package, but we go for a combination of all those elements. By doing a complete portfolio assessment, as a first step into the customer legacy application landscape, we're able to bring out a complete road map to conduct this transformation.

This is in terms of the sequence in which the applications will be transformed across the strategies I've described, and also in terms of the sequence in time. We execute first on the applications that bring a quick ROI, and the benefits from those quick wins are immediately reinvested to continue the transformation. So, transformation is not just one project. It's not just one shot. It's a continuous program over time, in which all the legacy applications are progressively migrated onto a more agile and cost-effective platform.
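To make that portfolio-triage idea concrete, here is a minimal sketch of the kind of decision logic such an assessment might encode. The categories, thresholds, and field names are illustrative assumptions for this post, not HP's actual methodology:

    // Hypothetical sketch of portfolio-assessment triage; the rules and
    // thresholds are illustrative assumptions, not HP's actual method.
    enum Strategy { REPLACE_WITH_PACKAGE, REPLATFORM, REARCHITECT }

    class ApplicationProfile {
        boolean commodityFunction;  // e.g., standard HR or accounting
        int changesPerYear;         // frequency of business-driven change

        Strategy recommend() {
            if (commodityFunction) {
                return Strategy.REPLACE_WITH_PACKAGE; // a suitable package exists
            }
            if (changesPerYear < 2) {
                return Strategy.REPLATFORM;  // stable code: move it largely as-is
            }
            return Strategy.REARCHITECT;     // volatile code: refactor or rewrite
        }
    }

As it happens, all three outcomes appear in the MIUR case discussed next: packaged Oracle applications, re-platformed batch COBOL, and re-architected COBOL/CICS transactions.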

Gardner: It certainly helps to understand the detail and approach to this through an actual implementation and a process. I wonder if you could tell us about the use case we're going to discuss, some background on that organization, and their story?

Vogeleer: The Italian Ministry of Instruction, University and Research (MIUR), the customer we're going to cover in this case, is a large governmental organization with an overall budget of €55 billion.

The Italian public education sector serves 8 million students in 40,000 schools, and the schools are located across the country in more than 10,000 locations, each of them connected to the information system provided by the ministry.

Very large employer

The ministry is, in fact, one of the largest employers in the world, with over one million employees. Its system manages both permanent and temporary employees -- teachers and substitutes -- as well as the administrative employees. It also supports the ministry's users, about 7,000 or 8,000 school employees. It's a very large employer with a large number of users connected across the country.

Why did they need to modernize their environment? Their system was written in the early 1980s on IBM mainframe architecture. In the early 2000s, there was a substantial change in Italian legislation, the so-called Devolution Law. The Devolution Law decentralized processes to the school level and also moved administrative processes from the central ministry into the regions, and there are 20 different regions in Italy.

This change implied a completely different process workflow within their information systems. Fulfilling the changes through the legacy approach was very time-consuming and inappropriate. A number of applications were developed incrementally to fulfill those new organizational requirements, but very quickly this became completely unmanageable and inflexible. Yet the aging legacy systems were expected to change quickly.

In addition to the agility needed to change applications to meet the new legislative requirements, costs in that context went completely out of control. So, the single most important objective of the modernization was to design and implement a new architecture that could reduce cost and provide a more flexible and agile infrastructure.

Gardner: We certainly get a better sense of the scope with this organization, a great deal of complexity, no doubt. How did you begin to get into such a large organization with so many different applications?

Vogeleer: The first step we took was to develop a modernization road map that took into account the organizational change requirements, using our service offering, which is the application portfolio assessment.

Using that standard engagement, we did an analysis of the complete set of applications and associated data assets from multiple perspectives. We looked at it from the financial, business, functional, and technical perspectives.

From those different dimensions, we could make the right decision on each application. The application portfolio assessment ensured that the client's business context and strategic drivers were understood, before commencing a modernization strategy for a given application in the portfolio.

A business case was developed for modernizing each application, an approach that was personalized for each group of applications and was appropriate to the current situation.

Gardner: How many people were devoted to this particular project?

Some 19,000 programs

Vogeleer: We did the assessment phase with a staff of seven people. Those seven people looked into the customer's 20 million lines of code using automated tools. About 19,000 programs were involved in the analysis. Out of that, we grouped the applications into categories and then defined a different strategy for each category of programs.

Gardner: How about the timing on this? I know it's big and complicated and can go on and on, but the general scoping, the assessment phase -- how long do these sorts of activities generally take?

Vogeleer: If we look at the way we conducted the program, this assessment phase took about three months with the seven people. From there, we did a first transformation pilot, with a small staff of people in three months.

After the pilot, we went into the complete transformation and user-acceptance testing, and after an additional year, 90 percent of the transformation was completed. The transformation covered about 3,500 batch processes and the re-architecting of 7,500 programs, and all the screens were transformed as well. That was a larger effort, with a team of about 50 people over one year.

Gardner: Can you tell us where they ended up? One of the things I understand about transformation is that you still need to assess what you've got, but you also need to know where you're going to take it.

Vogeleer: As I indicated at the beginning, we used a mixture of different strategies for modernization. First of all, we looked into the accounting and HR system for non-teacher employees. This was initially written on the mainframe and carried a low level of customization, so there was a relatively limited need for integration with the rest of the application portfolio.

In that case, we selected Oracle Human Resources, Oracle Self-Service Human Resources, and Oracle Financials as the packages to implement. The strategy for those components was to replace them with packaged applications. Twenty years ago, such packages didn't exist, and those accounting functions were completely written in custom COBOL. Now that suitable packaged applications exist, we can replace them.

Secondly, we looked into the batch COBOL applications on the mainframe. In that scenario, there were limited changes to those applications, so a simple re-platforming of the applications from the IBM 3070 onto a Linux-based platform was a sufficient approach.

More important were all the transactional COBOL/CICS applications. Those needed to be refactored and re-architected for the new platform. So, we took the legacy COBOL sources and transformed them into Java.

Different techniques were used there as well. We used automated conversion, especially for non-critical programs that are not frequently changed. That represented 60 percent of the code. This code could be translated immediately, by removing only the barriers in the code that prevented it from compiling.

All barriers removed

For frequently updated programs, all barriers were removed and the code was completely cleaned up during conversion. Then, for the critical programs especially, the conversion effort was bigger than the rewrite effort; 30 percent of the programs were completely rewritten.

Gardner: You said that 60 percent of the code was essentially being supported through these expensive systems, doing what we might consider commodity functionality nowadays.

Vogeleer: Let me clarify what happens with those 60 percent.

We considered that 60 percent of the code was code that is not frequently changed. So, we used automatic conversion of this code from COBOL to Java, creating automatically translated Java procedures. This kind of code is probably not easy to read, but the advantage is that, because it isn't changed often, the day we do need to change it, we already have Java source code from which to start. That was the reason not to rewrite it, but to do an automated conversion from COBOL to Java.
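As a hypothetical illustration of why such automatically translated code is hard to read -- this is not output from any actual conversion tool -- a mechanical COBOL-to-Java translator typically preserves the COBOL shape line for line instead of producing idiomatic Java:

    // Hypothetical illustration only; not output from a real conversion tool.
    // Original COBOL:
    //     PERFORM VARYING IDX FROM 1 BY 1 UNTIL IDX > 12
    //         ADD MONTHLY-SAL(IDX) TO TOTAL-SAL
    //     END-PERFORM.
    public class PayrollTotals {
        private int idx;                           // WORKING-STORAGE items become fields
        private long[] monthlySal = new long[13];  // index 0 unused; COBOL tables are 1-based
        private long totalSal;

        public void computeTotalSal() {
            idx = 1;
            while (!(idx > 12)) {                  // UNTIL condition carried over verbatim
                totalSal = totalSal + monthlySal[idx];
                idx = idx + 1;
            }
        }
    }

The translation compiles and behaves like the original, which is exactly what you want for stable code, but nobody would mistake it for Java written by hand.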

Gardner: Now we've certainly got a sense of where you started and where you wanted to end up. What were the results? What were some of the metrics of success -- technical, economic, and in productivity?

Vogeleer: The result, I believe, was very impressive. The applications are now accessed through a more efficient web-based user interface, which replaces the green screen and provides improved navigation and better overall system performance, including improved user productivity.

End-user productivity, as I mentioned, has doubled for the daily operation of some business processes. Also, the overall application portfolio has been greatly simplified by this approach. The number of function points we're managing has decreased by 33 percent.

From a financial perspective, there are also very significant results. Hardware and software license and maintenance cost savings were about €400,000 in the first year, €2 million in the second year, and are projected to be €3.4 million this year. This represents a savings of 36 percent of the overall project.

Also, because of the move from COBOL to Java technology, the lower cost of those programmers, and the use of packaged applications, development costs have dropped by 38 percent.

Gardner: I think it's very impressive. I want to go quickly to Paul Evans. Is this unusual? Is this typical? How consistent are these sorts of returns when we look at a transformation project?

Evans: Well, of course, as a marketing person I'd say that we get this return every time, and everybody would laugh, as you just did. In general, people are very keen on total cost of ownership (TCO) and ROI, especially the ROI. They say, "Look, maybe I can afford something, but I've got to feel certain that I'm going to get my money back -- and quickly."

ROI of 18-24 months

I don't want to say that you're going to get it back in 10 years' time. People just aren't going to be around that long. In general, when we're doing a project like the one here in Italy, which combines application modernization and an infrastructure renewal, an ROI of around 18-24 months is usually about the norm.

We have tools online. We have a thing called the TCO Challenge. People can enter the configuration of their current system. Then, we propose a comparable system from HP in terms of power, performance, and functionality. We provide not only the price of that system but, more importantly, the TCO and ROI data. Anyone can go online and try that, and what they'll see is an ROI of around 18 months.

This is why I think we're beginning to see this up-take in momentum. People are hearing about these case studies and are beginning to believe that this is not just smoke and mirrors, and it's not marketing people like me all the time.

People like Luc are out there at the coalface, working with customers who are getting these results. They are not getting the results because there is something special or different. This solution is of a type that we deliver every day of the week, and these results are fairly commonplace.

Gardner: Luc, certainly the scale of this particular activity, this set of projects, convinces me that automation is really key. The scale and size of the code base you dealt with, the number of people, and the amount of time devoted are pretty impressive. What's coming next in terms of the automation toolset? I can only assume that this type of activity is going to become faster, better, and cheaper.

Vogeleer: Yes, indeed. What we realized here is that, although we didn't rewrite all the code, the 80 percent of the migrated code that we produced with automated tools is very stable and infrequently modified. We have a base from which we can easily rework it.

The tools are improving, and we also see those tools growing in the direction of being integrated with the integrated development environments (IDEs) that programmers use. So, it's becoming very common for the new programming style to be tightly integrated with the conversion and migration tools, which allows the new generation of programmers to work with those tools very easily.

Gardner: And, the labor pools around the world that produce the skill sets that are required for this are ready and growing. Is that correct?

Vogeleer: Yes, that's right. As I indicated, by changing the programming language, we opened up a large pool of programmers at lower labor cost, and the savings achieved dropped development costs by 38 percent.

Gardner: Very good. We've certainly learned a lot about the paybacks from transforming legacy enterprise applications and systems. This podcast is the first in a series of three to examine "Application Transformation: Getting to the Bottom Line."

There is also a set of webinars and virtual conferences from HP on the same subject. I want to thank our guests for today’s insights and the use-case of the Italian Ministry of Instruction, University and Research (MIUR). Thanks, Paul Evans, worldwide marketing lead on Applications Transformation at HP.

Evans: Thanks, Dana.

Gardner: We’ve also been joined by Luc Vogeleer, CTO for the Application Modernization Practice in HP Enterprise Services. Thanks so much, Luc.

Vogeleer: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.



Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real-time answers to your questions, register for the virtual conference for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of the first in a series of sponsored BriefingsDirect podcasts -- "Application Transformation: Getting to the Bottom Line" -- on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, September 30, 2009

Doing Nothing Can Be Costliest IT Course When Legacy Systems and Applications Are Involved

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the high, and sometimes underappreciated, cost for many enterprises of doing nothing about aging, monolithic applications. Not making a choice about legacy mainframe and poorly used applications is, in effect, making a choice not to transform and modernize the applications and their supporting systems.

Not doing anything is a choice to embrace an ongoing cost structure that may well prevent significant new spending on IT innovation. It's a choice to leave applications stranded on ossified platforms, making their reuse and integration difficult, complex, and costly.

Doing nothing is a choice that, in a recession, hurts companies in multiple ways, because successful transformation is the lifeblood of near and long-term productivity improvements.

Here to help us better understand the perils of continuing to do nothing about aging legacy and mainframe applications, we’re joined by four IT transformation experts from Hewlett-Packard (HP). Please join me in welcoming our guests. First, Brad Hipps, product marketer for Application Lifecycle Management (ALM) and Applications Portfolio Software at HP. Welcome, Brad.

Brad Hipps: Thank you.

Gardner: Also, John Pickett from Enterprise Storage and Server Marketing at HP. Hello, John.

John Pickett: Hi. Welcome.

Gardner: Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana.

Gardner: And, Steve Woods, application transformation analyst and distinguished software engineer at EDS, now called HP Enterprise Services. Good to have you with us, Steve.

Steve Woods: Thank you, Dana.

Gardner: Let me start off by going to Paul. The recession has had a number of effects on people, as well as budgets, but I wonder what effect, in particular, tight cost structures have had on this notion of tolerating mainframe and legacy applications.

Cost hasn't changed

Evans: Dana, what we're seeing is that the cost of legacy systems and the cost of supporting the mainframe haven't changed in 12 months. What has changed is the cash that companies have available to spend on IT, as that amount has either been frozen or reduced over time. That puts even more pressure on the IT department and the CIO in deciding how to spend that money, where to spend it, and how to ensure alignment between what the business wants to do and where the technology needs to go.

Given that we already knew only about 10 percent of an IT budget was being spent on innovation, the problem is that that share gets squeezed and squeezed. Our concern is that there is a cost of doing nothing. People eventually end up spending their whole IT budget on maintenance and upgrades and virtually nothing on innovation.

At a time when competitiveness is needed more than it was a year ago, there has to be a shift in the way we spend our IT dollars and where we spend our IT dollars. That means looking at the legacy software environments and the underpinning infrastructure. It’s absolutely a necessity.

Gardner: So, clearly, there is a shift in the economic impetus. I want to go to Steve Woods. As an analyst looking at these issues, what’s changed technically in terms of reducing something that may have been a hurdle to overcome for application transformation?

Woods: For years, the biggest hurdle was that most customers would say they didn't really have to make a decision, because the performance wasn't there. The reliability wasn't there. It is there now. There is really no excuse not to move because of performance or reliability issues.

What is changing today is the ability to look at a legacy application's source code. We now have the tools to look at the code and visualize it in ways that are very compelling. Understanding the code is typically one of the biggest obstacles. If you look at a legacy application, the number of lines of code, and the number of people maintaining it, it's usually obvious that large portions of the application haven't really changed much. There's a lot of library code and that sort of thing.

That's really important. We've been straight with our customers that we have the ability to help them understand a large terrain of code that they might be afraid to move forward with. Maybe they simply don't understand it. Maybe the people who originally developed it have moved on, and because nobody really maintains it, they're afraid to go into those areas of the system.

Also, what has changed is the growth of architectural components, such as extract, transform, and load (ETL) tools, data-integration tools, and reporting tools. When we look at a large body of, say, 10 million lines of COBOL and find that three million lines of that code are doing reporting, or maybe two million are doing ETL work, we typically suggest moving that code asymmetrically to a new platform that does not use handwritten code.

That's really risk aversion -- doing it very incrementally, with low intrusion, and that's also where the best return on investment (ROI) picture can be painted. You get your ROI incrementally, as you move the reports and the data-transformation jobs over to the new platform. So, that's really what's changed. These tools have matured, so we have the performance, and we also have the tools to help customers understand their legacy systems today.
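As a rough sketch of the asymmetric split Woods describes -- the category names and inventory shape here are illustrative assumptions, not EDS's actual tooling -- the idea is to bucket a parsed legacy inventory by function, so that the reporting and ETL portions can move to off-the-shelf tools while only the remaining business logic is considered for rewrite:

    // Hypothetical sketch: bucket a parsed legacy inventory by function.
    import java.util.EnumMap;
    import java.util.List;
    import java.util.Map;

    enum Function { REPORTING, ETL, BUSINESS_LOGIC }

    record Module(String name, Function function, int linesOfCode) {}

    class InventorySplit {
        // Sum lines of code per functional category -- e.g., discovering that
        // 3 million of 10 million COBOL lines are reporting, as in Woods's example.
        static Map<Function, Integer> linesByFunction(List<Module> modules) {
            Map<Function, Integer> totals = new EnumMap<>(Function.class);
            for (Module m : modules) {
                totals.merge(m.function(), m.linesOfCode(), Integer::sum);
            }
            return totals;
        }
    }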

Gardner: Now, one area where economics and technology come together quite well is the hardware. Let's go to John with regard to virtualization and reducing the cost of storage. How has that changed the penalty for doing nothing?

Functionality gap

Pickett: Typically, when we look at the high-end applications that are candidates to move off a legacy system, many times they're sitting on a mainframe platform. One of the things that has changed over the last several years is the functionality gap between open systems and what the mainframe offered 5 or 10 years ago. That gap has not only been closed but, in some cases, open systems now exceed what's available on the mainframe.

So, from a functionality standpoint, there is certainly plenty of capability there. But, to hit on the cost of doing nothing: keeping what you currently have today means bearing the high cost of the platform. As a matter of fact, one of our customers who had moved from a high-end mainframe environment onto an Integrity Superdome calculated that if you took their cost savings and applied them to playing golf at one of the premier golf courses in the world, Pebble Beach, you could golf every day with three friends for 42 years, 10 months, and a couple of days.

It’s not only a matter of cost, but it’s also factoring in the power and cooling as well. Certainly, what we’ve seen is that the cost savings that can be applied on the infrastructure side are then applied back into modernizing the application.

Gardner: I suppose the true cost benefits wouldn’t be realized until after some sort of a transformation. Back to Paul Evans. Are there any indications from folks who have done this transformation as to how substantial their savings can be?

Evans: There are many documented cases that HP can provide -- and, I think, other vendors can as well -- of looking at applications and the underpinning infrastructure, as John was talking about. There are so many documented cases pointing people toward the real cost savings to be made here.

There's also a flip side to this. Some research that McKinsey did earlier in the year took a sample of 100 companies as they went into the recession. They were brand leadership companies. Coming out of the recession, only 60 of those companies were still in a leadership position. Forty percent of those companies just dropped by the wayside. It doesn’t mean they went out of business. Some did. Some got acquired, but others just lost their brand leadership.

That is a huge price to pay. Now, not all of that has to do with application transformation, but we firmly believe that transformation is pivotal to improving the services and revenue-generation opportunities that, in tough times, need to be stronger and stronger.

What we would say to organizations is, "Take a hard look at this, because doing nothing could be absolutely the wrong thing to do. Continuing to exploit your competitive differentiation and to provide customers with an improving level of service is how you keep those customers through a tough time, which means they'll still be your customers when you come out of the recession."

Gardner: Let's go to Brad. I'm also curious, on a strategic level, about flexibility and agility. Are there prices to be paid that we should be considering, in terms of lock-in, fragility, or applications that don't easily lend themselves to a wider process?

'Agility' an overused term

Hipps: This term "agility" is the right term to use, but it gets used so often that people tend to forget what it means. The reality of today's modern organization -- and this contrasts with even 5, and certainly 10, years ago -- is that when we look at applications, they are everywhere. There has been an application explosion.

When I started in the applications business, we were working on the handful of applications an organization had. That was the extent of applications in the business: one part of it, but not the whole. Now, in every modern enterprise, applications really are total -- big, small, and medium-sized. They are all over the place.

When we start talking about application transformation and we assign that trend to agility, what we’re acknowledging is that for the business to make any change today in the way it does business, in any new market initiative, in any competitive threat it wants to respond to, there is going to be an application -- very likely "applications," plural, that are going to need to be either built or changed to support whatever that new initiative is.

The fact of the matter is that changing or creating the applications to support the business initiative becomes the long pole to realizing whatever it is that initiative is. If that's the case, you begin to say, "Great. What are the things that I can do to shrink that time, to shrink that pole that stands between me and getting this initiative realized in the market space?"

From an application transformation perspective, we then take that as a context for everything that’s motivating a business with regard to its application. The decisions that you're going to make to transform your applications should all be pointed at and informed by shrinking the amount of time that takes you to turn around and realize some business initiative.

So, in 500 words or less, that's what we're seeking with agility. Following pretty closely behind that, you can begin to see why there is promise in cloud. It saves me a lot of infrastructural headaches. It's supposed to obviate a lot of the challenges I have around just standing up an application and getting it ready, let alone having to build the application itself. So that is the view of transformation in terms of agility, and why we're seeing things like cloud. These other things really start to point the direction toward greater agility.

Gardner: It sounds as if there is a penalty to be paid, or a risk to be incurred, by being locked into the past.

Hipps: That's right, and then you take the reverse of that. You say, "Fine. If I want to keep doing things as-is, that means that every day or every month that goes by, I add another application, or I make my current application pool bigger, using older technologies that I know take me longer to make changes in."

In the most dramatic terms, it only gets worse the longer I wait. That pool of dated technology only gets bigger and bigger, the more changes I have coming in and the more changes I'm trying to make. It's almost as though I've got this ball and chain attached to my ankle, and I'm just letting the ball get bigger and bigger. There is a very real agility cost, even setting aside what your competition may be doing.

Gardner: So, the inevitability of transformation goes from a long horizon to a much nearer and dearer issue. Let’s go back to Steve Woods of EDS. What are some misconceptions about starting on this journey? Is this really something that’s going to highly disrupt an organization or are there steps to do it incrementally? What might hold people back that shouldn't?

More than one path

Woods: I think probably one of the biggest misconceptions arises when somebody has a large legacy application written in a second-generation language, such as COBOL or perhaps PL/1. They look at the code and imagine a future that still involves handwritten code -- maybe in Java or C# or .NET -- but they don't take the next step and ask, "If I had to rebuild this system today, would I do it the same way?" That's what you're doing if you imagine just one path to modernization.

Some of the code in their business logic might find its way into classes in Java or .NET. What we prefer to do is a functional breakdown of what the code is actually doing, and then try to imagine the options we have going forward. Some of it will become handwritten code, and some of it will move to those other sorts of implementations.

So, we really like to look at what the code is doing and imagine the other ways we could implement it. If we do that, we have a much better approach to moving customers forward. The worst thing to do -- and a lot of customers have this impression -- is to automatically translate the code from COBOL into Java.

Java and C# are very efficient languages for producing a function point, which is a measure of functionality. A function point takes about eight or ten lines of code in Java. In COBOL, it takes about 100 lines.

Typically, when you translate automatically from COBOL to Java, you still get pretty much the same amount of code. In actuality, you've taken the maintenance headache and made it even larger by doing the automated translation. So, we prefer to take a much more thoughtful approach, look at what the options are, and put together an incremental modernization strategy.
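To see the contrast, take the 1-based COBOL summation loop illustrated earlier on this page. A thoughtful rewrite, rather than a line-for-line translation, collapses it to a few idiomatic lines -- again a hypothetical sketch, not any tool's output:

    // Hypothetical sketch of a rewrite (not a translation) of the same
    // summation logic shown in the earlier mechanical-translation example.
    import java.util.stream.LongStream;

    public class Payroll {
        public static long totalSalary(long[] monthlySalaries) {
            return LongStream.of(monthlySalaries).sum();
        }
    }

Same function point, a fraction of the code left to maintain.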

Gardner: Paul Evans, this really isn’t so much pulling the plug on the mainframe, which may give people some shivers. They might not know what to expect over a period of decades or what might happen when they pull the plug.

Evans: We don't insist that people unplug their mainframes. If they want to, they may plug in an HP system in its place -- we'd love them to. But, being very pragmatic, which is what we like to be, it comes back to what Steve was talking about. It's looking at the code, at what you want to do from a business-process standpoint, and at the underlying platform.

It's understanding what quality of service you need to deliver and then understanding the options available. Even with base technologies like microprocessors, the power that can be delivered these days means we can do all sorts of things -- at prices, speeds, sizes, power outputs, and CO2 emissions that we could only dream of a few years ago.

The days when there was a walled-off area in the data center that no other technology could match are long gone. Now, the emphasis has been on consolidation and virtualization, and there is also a big focus on legacy modernization. CIOs and IT directors, or whatever they might be called, understand that an awful lot of money is spent maintaining, as Steve said, the handwritten legacy code that runs the organization today and needs to continue providing these business processes.

Bite-size chunks

There are far faster, cheaper, and better ways to do that, but it has to be something that is planned for. It has to be something that is executed flawlessly. There's a long-term view, but you take bite-sized chunks out of it along the way, so that you get the results you need. You can feed those good results back into the system and then you get an upward spiral of people seeing what is truly possible with today’s technologies.

Gardner: John Pickett, are there any other misconceptions or perhaps under-appreciated points of information from the enterprise storage and server perspective?

Pickett: Typically, when we see a legacy system, what we hear, in a marketing sense, is that high-end mainframes -- and I'll just use those as an example -- can be used for consolidation. What we find is that if you're going to move or modernize applications onto an open-systems environment, to take advantage of the full gamut of tools and open-system applications out there, you're not going to do that on a legacy environment. We see that the more efficient way down that path is an open-standards server platform.

Also, one of the other misconceptions that we see, again in a marketing sense, is that a mainframe is very efficient. However, if you compare it to a high-end HP system, for example, and just take a look at the heat output -- which we know is very important -- the mainframe puts out more heat. The difference in heat between a mainframe and an Integrity Superdome, for example, is enough to power a two-burner gas grill -- a Weber grill. So, there's some significant heat there.

On the energy side, we see that the Superdome consumes 42 percent less energy. So, it's a very efficient way of handling the operating-system environment when you do modernize these applications.

Gardner: Brad Hipps, when we talk about modernizing, we’re not just modernizing applications. It’s really modernizing the architecture. What benefits, perhaps underappreciated ones, come with that move?

Hipps: I tend to think of application transformation as, in most ways, breaking up and distributing that which was previously self-contained and closed.

Whether you're looking at moving from mainframe processing to distributed processing, or from distributed processing to virtualization; whether you're talking about the application teams themselves, which are now some combination of in-house, near-shore, offshore, and outsourced -- a distribution of teams from a single building to all around the world; or the architectures themselves, which have gone from monolithic and fairly brittle things to services-driven things -- it's the same pattern of breaking up and distributing.

You can look at any one of those trends and begin to speak about benefits, whether it's leveraging a better global cost basis or, on the architectural side, the fundamental thing we're trying to do: move away from a world in which everything is handcrafted.

Assembly-line model

Let's get much closer to the assembly-line model, where I have a series of preexisting, trustworthy components. I know where they are, I know what they do, and my work now becomes really a matter of assembling them. The assembly can take any variety of shapes, based on my need, because of the components I have created.

We're getting back to this idea of lower cost and increased agility. We can only imagine how certain car manufacturers would be doing if they were handcrafting every car. We moved to the assembly line for a reason, and software has typically lagged other engineering disciplines here. Now we're finally going to catch up. We're finally going to recognize that we can take an assembly-line approach to the creation of applications as well, with all the intended benefits.

Gardner: And, when you standardize the architecture, instead of having to make sure there is a skillset located where the systems are, you can perhaps bring the systems to where the different skills are?

Hipps: That’s right. You can begin to divorce your resources from the asset that they are creating, and that’s another huge thing that we see. And, it's true, whether you're talking about a service or a component of an application or whether you're talking about a test asset. Whatever the case may be, we can envision a series of assets that make an application successful. Now, those can be distributed and geographically divorced from the owners.

Gardner: Where this has been a "nice to have" or "something on the back-burner" activity, we're starting to see a top priority emerge. I've heard of some new Forrester research that has shown that legacy transformation is becoming the number-one priority. Paul, can you offer some more insight on that?

Evans: That’s research that we're seeing as well, Dana, and I don’t know why. ... The point is that this may not be what organizations "want" to do.

They turn to the CIO and say, "If we give you $10 million, what is it that you'd really like to do?" What they're actually saying is that this is what they know they've got to do. So, there is a difference between what they'd like to do and what they've got to do.

That goes back to when we started in the current economic situation. The pressure it's bringing to bear on people is that time is up for those who just continue to spend their dollars on maintaining the applications, as Steve and Brad talked about, and the infrastructure that John talked about. They can't just continue to pour money into that.

There has to be a bright point. Someone has got to say, "Stop. This is crazy. There are better ways to do this." What the Forrester research points out is that if you go around to a worldwide audience and talk to a thousand people in influential positions, they're now saying, "This is what we 'have' to do, not what we 'want' to do. We're going to do this, we're going to take time out, and we're going to do it properly. We're going to take cost out of what we are doing today, and it's not going to come back."

Flipping the ratio

Consider all the things that Steve and Brad have talked about in terms of handwritten code -- code that is too big for what it needs to be to get the job done. Once we have removed that handwritten code, it's out and finished with, and then we can start looking at economics that are totally different going forward, where we can actually flip this ratio.

Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We're not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.

Hipps: I've got just one thing to add to that, in terms of the aura of inevitability that comes with transformation. When you look at IT over the last 30 years, you can see that, fairly consistently -- pick your time frame -- somewhere in the neighborhood of every seven to nine years there has been an equivalent wave of modernization. The last major one we went through was the late '90s and early 2000s, with the combination of Y2K and Web 1.0. So, sure enough, here we are, right on time with the next wave.

What's interesting is that this now-number-one priority has reached the stage of inevitability. I look back and think about the organizations in 2003 that were still saying, "No, I refuse the web. I refuse the networked world. It's not going to happen. It's a passing fancy," and whatever the case may be. Inasmuch as there were organizations doing that, I suspect they're not around anymore, or they're around much smaller than they were. I do think that's where we are now.

Cloud is reasonably new, but outsourcing is another component of transformation that has been around long enough that most people have been able to look it square in the eye and figure out, "You know what? There is real benefit here. Yes, there are some things I need to do on my side to realize that benefit. There is no such thing as a free lunch, but there is a real benefit here, and I'm going to suffer -- if not next year, then three years from now -- if I don't start getting my act together now."

Gardner: John Pickett, are there any messages from the boosters of mainframes that perhaps are no longer factors or are even misleading?

Pickett: There are certainly a couple of those. In the past, the mainframe was thought to be the bastion of RAS -- reliability, availability, and serviceability. Many of those features exist on open systems today. RAS is not something dedicated just to the high-end mainframe environment. Those features are out there on open-system platforms that are significantly cheaper and whose RAS, in many cases, far exceeds what we'll see on the mainframe.

That's just one piece. Other capabilities that historically you saw only on the mainframe side -- such as driving business-based objectives, or prioritizing resources for different applications or different groups of users -- have existed for a number of years on the open-systems side, along with things such as backup and recovery and very high levels of disaster recovery.

Misleading misconception

The misconception that these things can be done only in a mainframe environment is not just misleading. By discouraging the move to an open-systems platform, it continues to drive IT budget unnecessarily into infrastructure -- budget that could be applied either to the application modernization we have been talking about here or to skills and people resources within the data center.

Gardner: We seem to have a firm handle on the cost benefits over time. Certainly, we have a total cost picture, comparing older systems to the newer systems. Are there more qualitative, or what we might call "soft benefits," in terms of the competitiveness of an organization? Do we have any examples of that?

Evans: What we have to think about is the target audience out there. More and more people have access to technology. We have a generation coming up now that wants it now and wants it off the Web. They are used to social networking tools in their daily lives. So, this is one of the soft, squidgy areas as people go through this transformation.

I think that we can put hard dollars -- or pounds or euros -- against this right now: the inclusion of Web 2.0 or Enterprise 2.0 capabilities in applications. We have customers who are now trying that, some of it inside the firewall and some of it beyond. One, this can provide a much richer experience for the user. Secondly, you begin to address an audience that is used to these things in their day-to-day life anyway.

Why, when they step into the world of the enterprise, do they have to step back 50 years in terms of capability? You just can't imagine certain things that people require being done in batch mode anymore. The real-time enterprise is what people now expect and want.

So, as people go through this transformation, not only can they do the whole plethora of things we have talked about in terms of handwritten code, mainframes, structure, and service-oriented architecture (SOA), but they can also start taking steps toward really getting these applications in line and embedding them within an Internet culture.

If they start to take on board some of the newer concepts around cloud and experiment with them, they have to understand that people aren't going to just make a big leap of faith. At the end of the day, these are enterprise apps. We make things, apply things, and count things -- and people have got to continue to do that. At the same time, they need to take pragmatic steps to introduce the newer technologies that really can help them not only retain their current customer base, but attract new customers as well.

Gardner: Paul, when organizations go through this transformation, modernize, and move to open systems, does that translate into a business benefit, in terms of making the business itself more agile, maybe in a mergers-and-acquisitions sense? Would somebody resist buying a company because it has a big mainframe as an albatross around its neck?

Fit for purpose

Evans: Definitely. Having your IT fit for purpose is part of the inherent health of the organization. For organizations whose IT is way behind where technology is today, it's definitely part of the health check.

To some degree, if you don't want to get taken over or merged or acquired, maybe you just let your IT sag where it is today, with mainframes and legacy apps, and nobody will want you. But then, you're back to where we were earlier. You become one of those 40 percent of companies that disappear off the face of the planet. So, it's a sort of double-edged sword: make yourself attractive, and you could get merged or acquired; don't, and you're going to go out of business. I still think I prefer the former to the latter.

Gardner: Let's talk more specifically about what HP brings to the table. We've fleshed out this issue quite a bit. Is there a long history at HP of modernization?

Evans: There are two things. There is what we have done internally, within the company. We've had to eat our own dog food, in the sense that there were companies that merged and companies that were acquired -- HP, Compaq, Digital, EDS, whatever.

It's just not acceptable anymore to run these as totally separate IT organizations. You have to quickly understand how to get this to be an integrated enterprise. It's been well documented what we have done internally, in terms of taking a massive amount of cash out of our IT operations, and yet, at the same time, innovating and providing a better service, while reducing our applications portfolio from something like 15,000 to 3,000.

All of these things were going on at the same time, and that has been achieved within HP. Now, you could argue that we don't have mainframes, so maybe it's easier. Maybe that's true but, at the same time, modernization has been growing, and now we're right up there in the forefront of what organizations need to do to make themselves cost-effective, agile, and flexible going forward.

Gardner: John Pickett, what about the issue around standards, neutrality, embracing heterogeneity, community and open source? Are these issues that HP has found some benefits from?

Pickett: Without a doubt. When you take a look at the history of what we've been able to do, migrating legacy applications onto an open system platform, we actually have a long history of that. We continue to not only see success, but we’re seeing acceleration in those areas.

A couple of drivers that we ended up seeing are really making the case for customers, not least the significant cost savings we talked about earlier. We're talking 50 percent to 70 percent total cost of ownership (TCO) savings moving from a legacy mainframe environment over to an HP environment.

Additional savings

In addition to that, you also have the power savings. Simply by moving, the amount of energy saved is enough to light 80 houses for one year. We've already talked about the heat and the space savings. A similar system from HP with similar capabilities takes about a third of what you're going to be seeing for a high-end mainframe environment.

Why that's important is because, if customers are running out of data-center room and they're looking at increasing their compute capacity but don't have room within their data center, it just makes sense to go with a more efficient, more densely packed system, with less heat and energy use than what you'll see in a legacy environment.

Gardner: Brad Hipps, about this issue of being able to sell from a fairly neutral perspective, based on a solution's value, does that bring something to the table?

Hipps: We alluded earlier to the issue of lock-in. If we're going to, as we do, fly under the banner of bringing flexibility and agility to an organization, it's tough to wave that banner without being pretty open about who you're going to play with and where.

Organizations have a very fine eye for what this is going to mean for me not just six months from now, but two years from now, and what it’s going to mean to successors in line in the organization. They don’t want to be painted into a corner. That’s something that HP is very cognizant of, and has been very good about.

This may be a little bit overly optimistic, but you have to be able to check that box. If you’re going to make a credible argument to any enterprise IT organization, you have to show your openness and you have to check the box that says we’re not going to paint you into a corner.

Gardner: Steve Woods, for those folks who need to get going on this, where do you get started? We mentioned that iterative nature, but there must be perhaps low-hanging fruit, demonstrations of value that then set up a longer record of success.

Woods: Absolutely. What we find with our customers is that there are various levels in the process of understanding their legacy systems. Often, we find some of them are quite mature and have gone down the road quite a bit. We offer assessments based on single applications and also on portfolios of applications. We have a modernization assessment and a portfolio assessment, and we also offer a best-shore assessment to ensure that you are using the correct resources.

Often, we find that we walk in, and the customers just don't know anything about what their options are. They haven't done any sort of analysis thus far. In those cases, we offer what we're calling a Modernization Opportunity Workshop.

It's very quick -- usually a 4-8 hour on-site engagement -- and it takes about four weeks to deliver the entire package. We use some tools that I have created at HP that look at the clone code within the application. It's very important to understand the patterns of the clone code and to have visualizations of them. We have the visual intelligence tools that very quickly allow us to see inside the system, see the duplicate source code, and provide them with high-level cost estimates.
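
To make the clone-code idea concrete, here is a minimal sketch -- not HP's actual visual intelligence tooling, whose internals aren't described here -- of how duplicate source detection typically works: normalize each line, hash fixed-size windows of lines, and report any window that appears in more than one place. The window size and file inputs are illustrative.

```python
# Minimal clone-detection sketch (illustrative only; not HP's tooling).
import hashlib
from collections import defaultdict

WINDOW = 6  # consecutive lines treated as one candidate clone

def normalize(line):
    """Collapse whitespace so formatting differences don't hide clones."""
    return " ".join(line.split())

def find_clones(files):
    """files maps path -> list of source lines.
    Returns each line-window hash that occurs at more than one location."""
    seen = defaultdict(list)
    for path, lines in files.items():
        norm = [normalize(l) for l in lines]
        for i in range(len(norm) - WINDOW + 1):
            chunk = "\n".join(norm[i:i + WINDOW])
            digest = hashlib.sha1(chunk.encode()).hexdigest()
            seen[digest].append((path, i + 1))  # 1-based line number
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```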

We use a tool called COCOMO and we use Monte Carlo simulation. We’re able very quickly to give them a pretty high-level, 30-page report that indicates the size. Often, size is something that is completely misunderstood. We have been into customers who tell us they have four million lines of code, and we actually count the code as only 400,000 lines of code. So, it’s important to start with a stake in the ground and understand exactly where you’re at with the size.
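
As a rough illustration of how COCOMO and Monte Carlo simulation combine into that kind of estimate, here is a sketch using the published Basic COCOMO organic-mode coefficients. The size range is a hypothetical input from a code count; HP's actual models and 30-page report are more involved.

```python
# Basic COCOMO effort estimate with Monte Carlo sizing uncertainty
# (illustrative sketch; coefficients are Basic COCOMO organic mode).
import random

A, B = 2.4, 1.05  # Basic COCOMO organic-mode coefficients

def cocomo_effort(kloc):
    """Estimated effort in person-months for a system of `kloc` KLOC."""
    return A * (kloc ** B)

def simulate(low_kloc, high_kloc, trials=10_000):
    """Sample the uncertain size uniformly; return P10/P50/P90 effort."""
    efforts = sorted(cocomo_effort(random.uniform(low_kloc, high_kloc))
                     for _ in range(trials))
    pick = lambda p: efforts[int(p / 100 * (trials - 1))]
    return pick(10), pick(50), pick(90)

# E.g., a system measured at 300-500 KLOC after discounting clone code:
p10, p50, p90 = simulate(300, 500)
print(f"Effort (person-months): P10={p10:.0f} P50={p50:.0f} P90={p90:.0f}")
```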

We also do a functional decomposition to support that understanding. That's all delivered with very little impact. We know the subject matter experts are very busy, and we try to lessen the impact on them. That's one of the places we can start, when the customer just has some uncertainty and isn't even sure where to begin.

Gardner: We’ve been discussing the high penalties that can come with inaction around applications and legacy systems. We’ve been talking about how that factors into the economy and the technological shifts around the open systems and other choices that offer a path to agility and multiple-sourcing options.

I want to thank our panelists today for our discussion about the high costs and risks inherent in doing nothing around legacy systems. We've been joined by Brad Hipps, product marketer for Application Lifecycle Management and Applications Portfolio Software at HP. Thank you, Brad.

Hipps: Thank you.

Gardner: John Pickett, Enterprise Storage and Server Marketing at HP. Thank you, John.

Pickett: Thank you, Dana.

Gardner: Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Thank you, Paul.

Evans: Thanks, Dana.

Gardner: And Steve Woods, applications transformation analyst and distinguished software engineer at EDS. Thank you, Steve.

Woods: Thank you, Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the risks and drawbacks of not investing wisely in application modernization and data center transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, June 02, 2009

Mainframes Provide Fast-Track Access to Private Cloud Benefits for Enterprises, Process Ecosystems

Transcript of a BriefingsDirect podcast on the role and benefits of mainframes and their position as private cloud infrastructure in today's efficiency-minded enterprises.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on how mainframes can help enterprises reach cloud-computing benefits faster.

We'll be looking at what defines cloud computing, with an emphasis on private clouds or those computing models that enterprises can control on-premises, but that also favor and provide cloud-like efficiency with lower-end costs and a heightened ability to deliver services that support agile business processes.

We'll examine how new developments in mainframe automation and supporting the use of mainframes allow for cloud-computing advantages and the ability to solve some of the more contemporary computing challenges.

To help us understand how mainframe is the cloud, we're joined by Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit. Welcome to the show, Chris.

[UPDATE: CA's purchase today of some assets of Cassatt bolsters the role of mainframes and CA's management capabilities as foundations for private cloud efficiencies.]

Chris O'Malley: Dana, thank you very much. I'm glad to be here.

Gardner: Chris, we've heard a tremendous amount about cloud computing and there's a buzz around this whole topic. From your perspective, what makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization you just discussed. It's a big topic of discussion right now, especially given the recession we're sitting in.

It's very visible physically that there are many, many servers that support the ongoing operations of the business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree are these servers being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution -- bringing to the overall data center the scale of virtualization and optimization that has been achieved on the mainframe for years and years.
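
The consolidation arithmetic behind those utilization numbers is simple enough to sketch. Assuming a farm of servers at single-digit utilization is re-packed onto virtualized hosts run at a safer target utilization (all figures below are hypothetical, not numbers from the discussion), the reduction falls out directly:

```python
# Back-of-the-envelope consolidation math (hypothetical inputs).
import math

def hosts_needed(servers, avg_util, target_util):
    """Express total work in 'fully busy server' units, then re-pack it
    onto hosts run at the target utilization."""
    total_work = servers * avg_util
    return math.ceil(total_work / target_util)

# E.g., 200 servers at 10% utilization, consolidated at a 70% target:
print(hosts_needed(200, 0.10, 0.70))  # -> 29 hosts instead of 200
```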

We're now seeing the availability of the technology -- VMware is an example -- to start to create almost mainframe-like environments on the distributed side. So, it's both the need from a business standpoint of trying to respond to reduced cost of computing and increased efficiency at a time when the technologies are becoming increasingly available to customers to manage distributed environments or open systems in a way similar to the mainframe.

Gardner: I suppose there's also an issue around integration. When people talk about cloud computing, we hear them refer to it as an application-development or platform-as-a-service (PaaS) affair. We also hear software as a service (SaaS) or just great delivery of the applications. Then, there's this notion of infrastructure fabric or infrastructure as a service (IaaS).

But, to relate and manage all of those things is something we haven't yet seen in this whole cloud market. I imagine that at a private level, if you were to use mainframe and associated technologies, you might start to see some of those integration points among these different levels or aspects of cloud computing.

O'Malley: You're right. It's a maturity curve that we're going through, and it's very likely that larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. It's a fixed asset that was very expensive way back -- or at least relatively expensive -- so they try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

You see the same big customers, though, having application needs outside of what they've done themselves. What customer relationship management (CRM) and salesforce.com have done creates a duality: the mainframe acting as a cloud, and SaaS supporting how they work their markets. It's very important that those things start to become integrated. CRM obviously feeds into things like order entry, so those efforts need to be tied together.

As you go through this maturity cycle, there is always a level of effort to integrate these things. The viability of things like salesforce.com, CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

Gardner: So, that would lead, at some point, to a cloud of clouds and hybrid models. We've been worried about integration vertically and now horizontally. I suppose we'll have to start worrying about it across organizational boundaries as well.

Barriers to adoption

O'Malley: Absolutely. There are other barriers that exist as well. The distributed environment and the open-system environment, in terms of their genesis, were the reverse of what I described on the mainframe. The mainframe, at some point -- I think in the early '90s -- was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlogs and that innovation wasn't coming into play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

It's not just the technical barriers and the complexity of it. It's a cultural shift of an acceptance by players across the business. They all start to use a shared commodity in fulfilling their needs, and the recession helps that. Good CEOs and good CFOs never let a recession go to waste. They explain to their executive management, "We need a greater level of efficiency. We need to transform our thinking, so that we can start to take advantage of these technologies, decrease our overall cost, and increase our ability to serve our market."

These are not just technical issues. There is also people's disposition about the way IT should be run. That has to change as well.

Gardner: I suppose we've gone along with the pendulum swing, from centralized, to decentralized, and now we're coming back. I've spoken to a number of people that say the shortcomings of distributed computing are, in fact, the set of requirements for cloud computing. Do you agree with that?

O'Malley: I absolutely do. This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. Recently, I was with an international customer. They took me on a data center tour, and one of the first things I see is an air conditioning unit the size of a school bus. I see walls that are three-and-a-half feet thick, poured concrete. I see cabling that looks like it weighs tons, and football fields of floor space. In the midst of the tour, somebody tells me, "Here is a blade server that cost us next to nothing."

The difficulty of bringing these things together and using them in an efficient fashion -- the cost of all those moving parts, everything having to be managed as a single thing rather than in a virtualized form -- has caused a scale of waste that you cannot hide.

Time and time again, I hear there is not a CEO or a CFO interested in adding yet another square foot of data-center floor space, or adding people to manage the environment at a scale equal to the increasing capacity. They should be getting economies of scale and are just not seeing them.

You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: A lot of what attracts people to the cloud model -- because it is still rather amorphous, and not well-defined -- is this notion of elasticity. That's both, as you say, to help on utilization when it's low, but also to allow for the spikes to be managed externally or to take workloads and apply them across multiple machines in the case of a private cloud.

O'Malley: Exactly.

Gardner: How do you see this attraction to elasticity of compute resources and infrastructure? How does that relate to where the modern mainframe is?

On-demand engine

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has now created an infrastructure in which, as your needs grow, you can turn on additional engines that are already housed in the box. With peak processing in December around the retail uptick -- it will happen again here in the not-too-distant future -- or at quarter end for most organizations, they have the capacity to turn engines on and off and then be charged effectively like a utility.

With the z10, IBM has a platform that is effectively an in-house utility and, obviously, outsourcers offer that option in a purer fashion. This is not the mainframe your grandpa bought in 1976. It has always been a strong platform in terms of being able to drive high degrees of utilization. You don't see a bad mainframe customer. They're all at 95 percent throughput on those processors.

Now, with the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks but not pay for them all year long. So, that's a strength. Obviously, with companies like Salesforce.com, that's an option on the distributed side as well. You're paying for only that which you need at a given moment.
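
The appeal is easy to see in a toy comparison. Using an illustrative demand profile and arbitrary per-engine rates (not IBM's actual pricing), provisioning for the December peak all year costs roughly twice what utility-style charging does:

```python
# Fixed peak provisioning vs. utility-style capacity on demand
# (illustrative demand profile and rates, not actual pricing).
monthly_demand = [4, 4, 4, 6, 4, 4, 4, 4, 4, 4, 6, 10]  # engines needed
rate = 1.0  # cost per engine-month, arbitrary units

fixed_cost = max(monthly_demand) * rate * len(monthly_demand)
on_demand_cost = sum(m * rate for m in monthly_demand)

print(f"Provision for peak: {fixed_cost:.0f}")     # 120
print(f"Pay per use:        {on_demand_cost:.0f}")  # 58
```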

Gardner: Another issue that I've encountered in exploring these cloud issues is a common idea that this is for commodity-level services -- email, maybe some business applications, sales-force automation, CRM, for example. But, those peaks and troughs are also something that affect mission-critical applications, particularly if they're batch or something to be done at a certain frequency.

How do you take advantage of the compute capacity, when you're in between those frequencies and those batches? Do you see cloud computing as something that is destined for commodity-level IT, or is this something that also makes a great deal of sense for the most mission-critical types of transactions and applications?

O'Malley: As it specifically relates to the mainframe, it absolutely does. If you're a manufacturer, the mainframe has always been the home for your logistics. It's a core process of the organization.

If you're a bank, the ATMs, the DDL -- all of that stuff tends to be mainframe apps. You're right. There's strong variability in the types of processing that is, in fact, being done. The hardware gives you the capacity to handle those things and to reduce your consumption in a way that affects your cost.

Gardner: It's the virtualization, management, and governance of what's going on with the infrastructure that's the genesis of this elasticity. I think what you're describing is a value-add on top of the platform.

O'Malley: Absolutely. The mainframe has always been very good at resilience and security. The attributes required of a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be. We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers, who are doing great work. It's not just a hope or a promise.

Gardner: There is some disconnect, though -- cultural and even generational. A lot of the younger folks, brought up with the Web, think of cloud applications as being Web applications, built with scripting languages, perhaps delivered with rich interfaces, but primarily Web applications.

But, there's nothing to say that a Web application, a client-server application, a virtualized application, or even a virtualized desktop -- referred to as virtualized desktop infrastructure (VDI) -- can't find a place on a mainframe that supports different applications and different platforms beneath those applications.

Moving away from green screen

O'Malley: Correct. As an example, Linux runs on the mainframe. Just to take what you're saying a little bit deeper and state the obvious, one of the knocks on the mainframe is that it's the home of green screens. It was put to me recently by a customer that it's like showing garlic to a vampire. They just don't see that as the answer to the future, and it's not driving them to want to work on a platform that looks like it came out of 2001: A Space Odyssey or something.

Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

So, CA is taking a lead in re-engineering our toolset to look more like a Mac than a green screen. We have a brand new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May. The first thing that we're coming out with is a common service that looks in every way like InstallShield, for the mainframe.

If you were to walk up to a 22-year-old system programmer and look over their shoulder, there's no way that you'd see any difference between what they were working on and what somebody might be working on on the open-systems side.

So, you're right that the mainframe technologically can do a lot, if not everything, you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy VOS environment and bring that up to 2009 and beyond. CA, through our strategy of Mainframe 2.0, is in fact making that happen relative to the usage of our technology, but ultimately in terms of how the day-to-day workers interact with the mainframe -- having it be, we believe, even more productive than what they're accustomed to on a distributed platform.

Gardner: It sounds as if we're really dealing with semantics as it relates to infrastructure. If you have a person who's been in the business for several decades and has some experience, and you want to reassure them, you could say, "Well, it's running on the mainframe," and they'll probably feel good about that. For somebody a little bit younger, you might say, "Well, it's running on the private cloud." It's really the same thing.

O'Malley: Absolutely. I listened to a VMware presentation the other day; they were, I think, speaking with ADP. They described the cloud. At the end of it, they said, "We've had a cloud for 40 years. It's called 'the mainframe.'" But, you're right. It becomes semantics at that point. People will think differently. The mainframe has an image that will be altered dramatically with what we're doing with Mainframe 2.0.

It has its virtues, but it has its limits. The open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of the work you're doing, without really even knowing whether it's a mainframe application -- in z/OS or z/Linux -- or Linux on the open-system side, or HP-UX. That's where things are going. At that point, the cloud becomes true to the promise being touted at the moment.

Gardner: What about this? Going back to the issue of integration, if there has been this long-term ability to manage virtualized instances on the mainframe, eventually, as we get into this cloud-of-clouds and hybrid-model future scenario, the buck must stop some place.

There's going to need to be one throat to choke somewhere, even if the services are emanating from a variety of sources. Is it a stretch to think that your on-premises mainframe that's being used as a cloud would also become a hub, rather than a spoke, in terms of how you would govern, manage, and integrate across multiple types of cloud implementations?

Benefits of centralization

O'Malley: One of the aspects that's wonderful about the mainframe is that the scale of discipline allows a very few people to manage a very large environment. That's been developed over 40 years and really is the benefit of this centralized model.

Increasingly, we're seeing customers come to the conclusion that there are certain things -- security and storage management for example -- that have been perfected in terms of their optimization and efficiency on the mainframe.

You're right. They're thinking of how to take certain disciplines that would probably be best done by the hub -- the mainframe -- to manage the overall environment. That's definitely what we're thinking about from a strategy perspective. Security and storage management are two strong examples of disciplines that can be run from that one place for the whole data center.

Gardner: We've discussed some of the issues around expense and the economics around utilization, control, and lower risk with governance and security. We've also addressed the perception, the gap, if you will, on culture and age -- "my grandfather's mainframe" and that sort of thing.

But, there's also this nagging concern in the market around skills, and whether the mainframe needs to be sunset because of a lack of support, or whether it's going to become, as we just described, the hub for the future. What is it that you bring to your clients in order to ameliorate their concerns around this skills issue?

O'Malley: There are two dimensions to it. One, we have to transform the technology, because we can't be naive. There is an 18-year-old man or woman out there someplace who's about to get into college.

They're going to have to see a renewed mainframe that is more like what they've been accustomed to, if we're going to have them invest a college career to develop their skills and pursue a career in the mainframe space.

They're used to intuitive interfaces that they don't need a manual for and that they can dig into. They eventually get into the depths of it, but they need a nice entry point -- something that, through just their generalized knowledge, they can get into. A green screen is the opposite of that. It's a heavy-lifting exercise at the front end.

To be very honest, it's very important that we bring a cool factor to the mainframe, to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics for getting the next generation excited about it. One is that there's a vacuum of talent in that space, so you've got a career escalator within the mainframe space that is just not available to you on the distributed side, and we're trying to set the example.

Our first technology within Mainframe 2.0, which I talked about, is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school and they developed that in Java on a mainframe.

We're trying to set the example for what you can do in terms of bringing college students, making them effective, and having them do new and creative things on a platform that, at least in the recent history, they hadn't seen a lot of. They can get a sense of confidence between the dynamic of CA redressing the platform and our showing a formula to bring in college students, rapidly make them effective, and have them actually deliver technology that changes the way this platform is managed forever. It changes a lot of people's thinking and gives confidence to our customers and management.

We're also going on the road. I'm speaking at many universities, talking to both existing computer science students and high school students who plan to go to those universities. I'm talking about making the mainframe a friendly platform for them, if you will, and about the career opportunities it offers them.

Just to give you a sense of the amazement: we have 25-year-old people in Prague who have written lines of code that, within the next 12 months, will be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity. But, we've got to get those two dimensions right. We've got to show that the platform is friendly. It's one where we have a formula to bring new college students in, make them effective, and then get the word out there, so that more and more students look at this as a career option for them.

Gardner: I'm just curious. When you speak to high school and college students, are there any particular skill sets that put them into the right track for what they need for mainframes, or is it just mainstream computer science?

A need for urgency

O'Malley: It's mainstream computer science, but there's a need for a level of urgency to get things done. The product that we're coming out with in May, Mainframe Software Manager, was written from beginning to end in less than 12 months. One of the things that this project taught us was the capacity of these students to come out and connect with customers. There has been some atrophy in terms of our capacity to communicate -- being able to understand customer needs, what the issues are, and then being able to apply new paradigms.

Have no fear. We need almost a level of innocence, looking at things in a far different way that the students can bring, and then working very hard, in a systematic way and with transparency with customers, to never make a mistake. We can't go down a cul-de-sac with these kinds of activities -- developing the communication skills, the technical skills, and the discipline to master what I've just described. Those are the big things that we're looking for.

I'll be honest with you. With this younger crowd, there's a lot they don't know, but there is a new dimension that they bring and a level of innovation and creativity that we didn't have without them.

Gardner: They're not intimidated easily, right?

O'Malley: They're not intimidated, and they look at things differently. What others may say can never be done, shouldn't be done, or isn't necessary, they say, "That ain't right." A month later, they're doing something that almost creates shock and awe from customers. It's a wonderful thing for me to be part of and to witness.

Gardner: Let's look at some examples, if you have any, of organizations that have heard the cloud model's attributes, requirements, and benefits, wanted to get there quickly, and probably had some things in place. Do we have examples of taking the mainframe model and elevating it to the cloud model in terms of how it's being utilized? Are there metrics of success as to how that works?

O'Malley: For a long time, the higher-end mainframe customers have aggressively used their big iron to do things in the way you've described. What's more interesting is that, recently, we're seeing smaller customers start to look at cloud -- more specifically, virtualization -- being pushed to the mainframe in unconventional ways.

We have an insurance company up in Minneapolis that ran SAP, which is a financial system that competes against Oracle, and they elected initially to run it in client-server fashion. They ran the database server under DB2 on their z/OS. They ran the application server on an Intel platform. They got to a point where they required an upgrade to that application.

Usually customers follow conventional wisdom. They do what they always did. They upgrade their hardware in place and they leave the application as it was. In this case, this company has a charter to sell insurance only in the state of Minnesota. As a result of that, when Target stores let people go because of the recession, it's not like they can go to Wisconsin and sell somebody else insurance to increase their overall revenue. Cost efficiency, cost per member, is not just an IT issue. It’s a CEO issue.

So, rather than just upgrading this application with all they have, they said, "Let's pause and take a hard look at this environment. Let's look at options and see if there are better things we could be doing to serve the business."

Ultimately, they decided on bringing the application server up to z/Linux, encasing all of SAP in a single server and effectively creating an internal cloud for SAP to handle the scalability requirements and drive down cost.

Some interesting things happen when you bring it up to the mainframe. There's no physical network at that point. It's all hypersockets. So, it has drastically reduced the cost from a networking standpoint.

As you talked about earlier, z/OS effectively becomes a hub for the effort of management. The few people who did system programmer-type functions on the mainframe could now do it for what is a consolidated distributed environment, where they brought 40 servers up to the mainframe.

The thing that's also interesting is that, because of the maturity of virtualization on the mainframe, you can not only share it across the 40 SAP servers, but you can also share it with Web services and other applications. This is much, much more difficult to do on the distributed side with things like VMware.

Now, they've gotten nearly all of their distributed environment up to the mainframe. On that platform, things like disaster recovery -- where it was extremely difficult to bring up the environment when they did their testing -- now come up in 90 minutes. In fact, it takes half an hour to bring it up and an hour of certification and validation, and they're up and running.

They've seen effectively half the cost, with a greater level of security, resilience, and all the things that the mainframe offers. You saw things like that in the big banks and the big insurance companies that had the capacity, the people, and the skills to do it.

You seldom saw that on the smaller end but, given the recession, the maturity of the platform, the innovation that's been brought to the mainframe, all the enhancements that have taken place over the last eight years, and the efforts that CA is making, people are looking at it differently. That is, I think, a perfect example of a cloud up and running, and making a massive difference in supporting an organization's charter, which is to serve its customers at the lowest possible cost.

Gardner: I should think that that's not only going to pay back in the short term, but will improve over time, as they need to do patches, administration, and upgrades. They'll have a smaller set, or perhaps even a single application set, to apply those to -- getting the benefits of what a SaaS provider can do, but now brought downstream to a smaller company that can deliver its own on-demand model.

O'Malley: Absolutely. The evil in IT is moving parts and too many of them. The more that you can reduce change and reduce the need to manage change, the more you're going to reduce your overall cost.

The recession eventually will end, and you're right. The people who have taken these steps to drive efficiency -- the steps that I just went through -- are going to be in a much better competitive position when we come out of this recession, not only to grow at the rate their customers do, but to do it in a more cost-effective fashion than their competitors.

Gardner: Well, we've covered a lot of territory in terms of understanding some of the issues, the attractiveness of cloud. We've talked about the fact that it's still immature, but that there are a number of elements in the requirements list for cloud that are in place and simply need to be applied. We've discussed some of the issues around age, expense, and skill sets that are being addressed.

I want to thank our guest today, Chris O'Malley. He is the executive vice president and general manager for CA's Mainframe Business Unit. I appreciate your time, Chris.

O'Malley: Dana, thank you very much.

Gardner: We've been learning about how mainframes can help enterprises reach cloud benefits faster, and how in many respects the mainframe is already the cloud. I want to thank the sponsor for this discussion, CA, for their underwriting of its production. This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Transcript of a BriefingsDirect podcast on the role and benefits of mainframes and their position as private cloud infrastructure in today's efficiency-minded enterprises. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.