
Tuesday, May 18, 2010

IT's New Recipe for Success: Modernize Applications and Infrastructure While Taking Advantage of Alternative Sourcing

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving overall data-center productivity by leveraging all available sourcing options and moving to modernized applications and infrastructure.

IT leaders now face a set of complex choices, as they look for ways to manage their operational budget, knowing that discretionary and capital spending remain tight, even as demand on their systems increases.

One choice that may be the least attractive is to stand still as the recovery gets under way and demands on energy and application support outstrip labor, systems supply, and available electricity.

Economists are now seeing the recession giving way to growth, at least in several important sectors and regions. Chances are that demands on IT systems to meet growing economic activity will occur before IT budgets appreciably open up.

So what to do? Our panel of experts today examines how to gain new capacity from existing data centers through both modernization and savvy exploitation of all sourcing options. By outsourcing smartly, migrating applications strategically, and modernizing effectively, IT leaders can improve productivity, while operating under tightly managed costs.

We'll also look at some data-center transformation examples with some executives from HP to learn how effective applications and infrastructure modernization improves enterprise IT capacity outcomes. And, we'll examine modernization in the context of outsourcing and hybrid sourcing, so that the capacity goals facing IT leaders can be more easily and affordably met, even in the midst of a fast-changing economy.

As we delve into applications and infrastructure modernization best practices, please join me in welcoming our panel: Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Welcome, Shawna.

Shawna Rudd: Thank you.

Gardner: We're also here with Larry Acklin, Product Marketing Manager for Applications Modernization Services at HP. Welcome, Larry.

Larry Acklin: Hello.

Gardner: And, Doug Oathout, Vice President for Converged Infrastructure in HP’s Enterprise Services. Welcome, Doug.

Doug Oathout: Thank you, Dana. I'm glad to be here.

Gardner: Let me start with you, Doug. We're seeing some additional green shoots now across the economy, and IT services are also being taxed by an ongoing data explosion, the proliferation of mobile devices, use of social media, and new interfaces. So, what happens when the supply of budget -- that is to say, the available funding for innovation in new applications -- is lacking, even as the demand starts to pick up? What are some of the options that IT leaders have?

Tackling the budget

Oathout: Dana, with budgets still tight in this economy, but business starting to grow again, IT leaders really need to look strategically at how they're going to tackle their budget problem.

There are multiple sourcing options and multiple modernization tasks, as well as application culling, that they could pursue to improve their cost structure. What they need to do is start to think about how, and what, major projects they want to take on, so that they can improve their cash flow in the short term while improving their business outcomes in the long term.

At HP, we look at how to source products in ways that are more beneficial -- outsourcing, cloud, and such -- to give a better economic picture, and we look at modernization techniques for applications and infrastructure, along with outsourcing and cloud options, as ways to improve both the long-term cost structure and the financial situation for IT managers.

Gardner: Looking at this historically, have the decisions around outsourcing been made separately from decisions around modernization and infrastructure? Is it now time to bring two disparate decision processes together?

Oathout: Yes. In the past, companies have looked at outsourcing as a final step for IT, rather than an alternate step in IT. We're seeing more clients, especially in the tight economy we have gone through, looking at a hybrid model.

How do I smartly source the things that are non-mission-critical or non-business-critical to the outside world, and then keep the stuff that is critical to my business within the four walls of the data center? A hybrid model is evolving between outsourcing and in-sourcing of different types of applications and different types of infrastructure.

Gardner: Let's go to you, Shawna. When we think about the decisions around sourcing, as Doug just pointed out, there seems to be a different set of criteria being brought to that. How do you view the decision-making around sourcing options as being different now than two, three or five years ago?

Rudd: Clients have a wider variety of outsourcing mechanisms to choose from. They can choose to fully outsource or to selectively out-task specific functions, which, in most cases, should provide them with substantial savings on their operating expenses. Alternatively, as Doug just pointed out, we can provide many transformational and modernization types of projects that don't require any outsourcing at all. Clients just have a wider variety of options to choose from.

Gardner: To you, Larry. As folks look at their current infrastructure and try to forecast new demands on applications and what new applications are going to be coming into play, are they faced with an either/or? Is this about rip and replace? How does modernization fit differently into this new set of decisions?

Acklin: It's definitely becoming a major challenge. The problem is that looking purely at outsourcing to free up additional investment for innovation will only take you so far.

There needs to be a radical change in most businesses, because they have such a build-up of legacy technology, applications and so forth. There needs to be a radical change in how they move forward so they can free up additional investment dollars to be put back into the business.

Realigning the business and IT

More importantly, it's necessary to realign the business and the application portfolio, so that they're working together to address the new challenges that everyone is facing. These are challenges around growth: How do you grow so that, when you come out of a tough economy, the business is ready to go?

Investors are expecting that your company is going to accelerate into the future, providing better services to your market. How can you do that when your hands are completely tied, based on your current budget?

You know your IT budgets aren't going to increase rapidly, and there may be a delay before that can happen. So how do you manage in the interim? That's really where the combination of modernization and various sourcing options becomes an enabler, getting you to the agility that you want.

Gardner: Larry, what would be some of the risks, if this change or shift in thinking and approach doesn't happen? What are some of the risks of doing nothing?

Acklin: We call that "the cost of doing nothing." That's the real challenge. If you look at your current spend and how you're spending your IT budgets today, most see a steady increase in expenses from year to year, but aren't seeing the increases in IT budgets. By doing nothing, that problem is just going to get worse and worse, until you're at a point where you're just running to keep the lights on. Or, you may not even be able to keep up.

The number of changes that have been requested by the business continues to grow. You're putting bandages on your applications and infrastructure to keep them alive. Pretty soon, you're going to get to a point, where you just can't stay ahead of that anymore. This is the cost of doing nothing.

If you don't take action early enough, your business is going to have expectations of your IT and infrastructure that you can't meet. You're going to be directly impacting the company's ability to grow. The longer you wait to get started on this journey of freeing up resources and enabling the integration between your portfolio and your business, the more difficult and challenging it's going to be for your business.

Gardner: Doug and Shawna, it sounds as if combining the decisions around modernizing your infrastructure and applications with your sourcing options is, in a sense, an insurance policy against the unknown. Is that overstating the opportunity here, Shawna?

Rudd: I don't think so. Obviously, to Larry's point, it's not going to get any cheaper to continue to do nothing. Supporting legacy infrastructure and applications is going to require more expensive resources and more effort to maintain.

The same applies for any non-virtualized or unconsolidated environment. It costs more to manage more boxes, more software, more network connections, more floor space, and also for more people to manage all of that.

Greater risk

The risk of managing these more heterogeneous, more complex environments is going to be greater -- a greater risk of outages -- and the expense to integrate everything and try to automate everything is going to be greater.

Working with a service provider can help provide a lot of that insurance associated with the management of these environments and help you mitigate a lot of that risk, as well as reduce your cost.

Gardner: Doug, we can pretty safely say that the managed service providers out there haven’t been sitting around the past two or three years, when the economy was down. Many of them have been building out additional services, offering additional data and application support services. So, IT departments are now not only competing against themselves and their budgets, they are competing against managed service providers. How does that change somebody’s decision processes?

Oathout: It actually gives IT managers more of a choice. If you look at what's critical to your business, what's informational to your business, and at the workflows that go on in your business, IT managers have many more choices of where they want to source those applications or those job functions.

As you look at service providers or outsourcers, there is a better menu of options out there for customers to choose from. That better menu allows you to compare and contrast yourself from a cost, service availability, and delivery standpoint, versus the providers in the marketplace.

We see a lot of customers really looking at: how do I balance my needs with my cost and how do I balance what I can fit inside my four walls, and then use outsourcing or service providers to handle my peak workloads, some of my non-critical workloads, or even handle my disaster recovery for me?

So IT managers have choices on where to source, but they also have choices on how to handle the capacity that fits within their four walls of the data center.

Gardner: Let’s look at how you get started. What are some of the typical ways that organizations explore sourcing options and modernization opportunities? As I understand it, you have a methodology, a basic three-step approach: outsource, migrate, and modernize.

Let’s take each one of these and start with outsourcing smartly. Shawna, what does that mean, when we talk about these three steps in getting to the destination?

Rudd: From an outsourcing standpoint, it's simply one mechanism that clients can leverage to help facilitate this transformation journey, generating some savings that can help fund other, perhaps more significant, modernization or transformation efforts.

We help clients maintain their legacy environments and increase asset utilization, while undertaking those modernization and transformation efforts. From an outsourcing standpoint, the types of things a client can outsource could vary, and the scope of that outsourcing agreement could vary, as could the delivery mechanism or model -- whether we manage the environment at a client's facility or within a leveraged facility.

Bringing value

All those variables can bring value to a client, based upon their specific business requirements. But then, as the guys will talk about in a second, the migration and the modernization yield additional savings for those clients' businesses.

So, from an outsourcing standpoint, it’s that first thing that will help generate savings for a client and can help fund some of the efforts that will generate incremental savings down the road.

Gardner: The second step involves migration. Who wants to handle that, and what does that really mean?

Oathout: Let me start and then I'll hand it over to Larry. When we talk about migration, we can look at different types of applications that can migrate simply to modern infrastructure. Those applications can be consolidated onto fewer platforms and into a more workflow-driven, automated process.

We can get a 10:1 consolidation ratio on servers. We can get a 5:1 to 6:1 consolidation ratio on storage platforms. Then, with virtual connectivity, or virtual I/O, we can have a lot less networking gear associated with running those applications on the servers and the storage platforms.

So, if we look at just standard applications, we have a way to migrate them very simply over to modern infrastructure, which then gives you a lower cost point to run those applications.
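The consolidation ratios Doug quotes imply simple arithmetic. A minimal sketch of that back-of-the-envelope math, using hypothetical fleet sizes (the ratios are from the discussion above; the input numbers are made up):

```python
import math

def consolidated_footprint(servers, storage_arrays,
                           server_ratio=10, storage_ratio=5):
    """Estimate the post-migration hardware footprint.

    Rounds up, since you can't run a fraction of a box.
    """
    return {
        "servers": math.ceil(servers / server_ratio),
        "storage_arrays": math.ceil(storage_arrays / storage_ratio),
    }

# A hypothetical legacy estate of 400 servers and 40 storage arrays:
print(consolidated_footprint(400, 40))
# {'servers': 40, 'storage_arrays': 8}
```

Real sizing work would also weigh workload peaks and growth headroom, which the transcript touches on later; this only illustrates the headline ratios.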

Gardner: Now, not all applications are created or used equally. Is there a difference between what we might refer as core or context applications, and does that come into play when we think about this migration?

Oathout: Oh, it definitely does. There are some core applications that are associated with certain platforms that we can consolidate on the bigger boxes, and you get more users that way. Then, there are context applications, which are more information-driven, and which can easily continue to grow. That's one of the application areas that continues to grow, and you can't see how fast it's going to grow, but you can scale that out onto modern platforms.

As you have more work, you have more information, and you can grow those systems over time. You don't have to build the humongous systems to support the application, when it’s just starting out. You can build it over time.

There's a lot we can do with the different types of applications. When you look at modernizing your applications and look at modernizing infrastructure, they have to match. If you have a plan, you don't have to buy extra capacity when you start. You can buy the right capacity then grow it, as you need it.

Specific path

Acklin: Let me add a little bit to that. When we look at these three phases together, we ordered them this way as a specific path to minimize the risk. Outsourcing can drive some initial savings, maybe up to 40 percent, depending on the scope of what you're looking at for a client. That's a significant improvement on its own.

Not every client sees savings that high, but many do. The next step, the migration step we've talked about, where we're migrating over to a consolidated infrastructure, allows you to take immediate action on some of your applications as well.

In that application space, you can move an application that may be costing you significant dollars, whether in license fees or due to a lack of skilled resources on a legacy platform. Migrating those applications, keeping the application intact but running on that new infrastructure, can save you significant dollars, in addition to the initial savings from the outsourcing.

The nice thing, as you do these things in parallel, is that it's a phased journey you're going through, where they all integrate. But you don't have to do them together. You can separate them and do one without the other, but you can work on this whole holistic journey throughout.

The migration of those applications basically leaves them intact, but gives them a longer lifespan than they typically would have. A great example is an application that you eventually want to replace with an ERP system of some sort, or whose business process is going to change in the future in some way, but where you still need to do something about the cost problem today.

It's a great middle step. We can still drive significant 40-50 percent savings just through this migration phase, moving that application onto the new infrastructure environment and changing the way the cost structures around software and so forth are allocated. It frees up short-term gains that can be reinvested in the entire modernization journey we're talking about.

Gardner: So, if I understand that correctly, when we get to the modernization phase, we've been able to develop the capacity and develop a transformation of the budget from operations into something that can be devoted to additional new innovation capacity.

Acklin: Right. Then as you continue that journey, you're starting to get your cost structures aligned and you're starting to get to a place where your infrastructure is now flexible and agile. You’ve got the capacity to expand. When you move into that modernized phase, you're really trying to change the structure of those applications, so that you can take advantage of the latest technology to run cloud computing and everything operating as a service.

Future technologies allow us to enable the business for growth in the marketplace. Right now, many of our applications handcuff the business. It takes months to get a new product or service out to the market. By changing over to a service-oriented model, you're saving a lot of cost component here, but you're adding that agility layer to your applications and allowing your business to expand and grow.

Gardner: Before we go to some examples, I'm curious about what happens. What benefits can occur when you play these three aspects of this journey together?

There is sort of a dance, if you will, of three partners. When you apply them to the specific needs, requirements, and growth patterns within specific companies, what types of benefits do we get? Is this about switching to a more pay-as-you-go basis? Is this about reduced labor or improved automation?

Let's start with you, Shawna. What are some of the paybacks that companies typically get when they do this correctly?

Some 30 percent savings

Rudd: They can achieve about 30 percent savings, obviously depending on what they outsource and how much they outsource. Those savings will be achieved through the use of best-shore resources, through right-sizing of their hardware and software environments, and through consolidation, virtualization, automation, and standardization of processes and technologies.

And, then they'll achieve incremental cost savings. As Larry said, it can be upward of 40-60 percent from migrating some of that low-hanging fruit, those applications that are easily lifted and shifted to lower-cost platforms. So, they'll reduce the associated IT and application expenses, as well as the ongoing management expense. Then, as they continue to modernize those environments, they'll achieve additional efficiencies and potentially some additional savings.

In the scenario in which they've combined everything and work with a single-source provider to help facilitate that journey, the transitions, the hand-offs, and all of that should go much more smoothly.

The risk to the client, to the client's business, should be better mitigated, because they're not having to coordinate with four or five different vendors, internal organizations, etc. They have one partner who can help them and can handle everything.
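The staged percentages quoted above compound rather than simply add, since each later stage applies to whatever spend remains. A rough sketch of that arithmetic, with a made-up baseline budget (the percentages come from the discussion; everything else is hypothetical):

```python
def staged_savings(baseline, stages):
    """Apply each stage's savings rate to the remaining spend and
    report the running total along the way."""
    spend = baseline
    for name, rate in stages:
        saved = spend * rate
        spend -= saved
        print(f"{name}: save {saved:,.0f}, remaining spend {spend:,.0f}")
    return spend

# e.g. a hypothetical $10M annual budget: roughly 30% from outsourcing,
# then 40% of the remainder from migrating lift-and-shift candidates.
remaining = staged_savings(10_000_000,
                           [("outsource", 0.30), ("migrate", 0.40)])
# outsource: save 3,000,000, remaining spend 7,000,000
# migrate: save 2,800,000, remaining spend 4,200,000
```

This is why the order of the phases matters less to the arithmetic than to the risk profile, as Larry notes elsewhere in the discussion.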

Gardner: Doug, to you. When this is done properly, what are some of the high-level payoffs? What changes in terms of productivity at the most general level?

Oathout: The big thing that changes, Dana, is that when you go through this journey at the end, IT is aligned to the businesses. So, when a business wants to bring on a new application or a new product line, IT can then respond and stand up a new application in hours instead of months.

They can flex the environment to meet a marketing campaign, so you have the ability to handle the transactions when a major TV advertisement goes on or when something happens in the industry. You get the flexibility and you get the efficiencies, but what you really get is IT acting as a service provider to the line of business, and IT is now a partner with the business, rather than a cost center to that business.

That's the big transformation that happens through this three-step process. IT is now seen as adding value to the business versus just being the cost center, and the paybacks are unbelievable.

You move from deploying an application in months to two hours. The productivity of your IT department gets two or three times better. You can now plan to run your data centers or your IT at normal workloads. Then, when peaks come in, you can outsource some of the work to service providers or to your outsource partner.

Your actual IT is running at average load, and you don't have to put all the extra equipment in there for the peak. You actually outsource it, when that peak comes. So, at the end of this journey, there is a whole different business model that is much more efficient, much more elastic, and much more cost-effective to run the business of the future.

Gardner: Larry, to you. What are your more salient takeaways in terms of benefits from doing this all correctly?

Don't have to wait

Acklin: I'll just add to what Shawna and Doug have said already. One of the bigger benefits is that the business doesn't have to wait. Many times, if you're a CIO, you have to tell your business owners that they've got to wait: "I'm in the midst of this outsourcing operation. I'm trying to change the way we're providing service to the business. That can take time."

The idea of putting the outsource, migrate, and modernize phases together is that they're not sequential. You don't have to do one, then the other, and then the other. You can actually start these activities in parallel. So, you can start giving benefits back to the business immediately.

For example, while you're doing the outsourcing activities and getting that transition set up, you're starting to put together what the architecture for your future state is going to look like. You have to plan how the business processes should be implemented within the applications, and assess the strategic value of each application that you currently have in your portfolio.

You're starting to build that road map of how you're going to get to the end state. Even as you continue through that cycle, you're constantly providing benefits back to both the business and IT at the same time.

You really build that partnership between the two. So, when you reach the end, it's a complete, well-oiled machine, with both the business and IT working together to reach their objectives.

Gardner: Let’s look at some examples that we mentioned earlier. This can vary dramatically from organization to organization, and coming at this from different angles means that they might prioritize it in different ways. Perhaps we can look at a couple of examples to illustrate how this can happen and what some of the payoffs are. Who wants to step up first for an example on doing these three steps?

Oathout: I'll go first. One example where we worked very closely in services was with our customer France Telecom. France Telecom transitioned 17 data centers to two green data centers. Their total cost of ownership (TCO) calculation said that they were going to save €22 million (US $29.6 million) over a three-year period.

They embarked on this journey by looking at how they were going to modernize their infrastructure and how they were going to set up their new architecture so that it was more flexible to support new mobile phone devices and customers as they came online. They looked at how to modernize their applications so they could take advantage of the new converged infrastructure, the new architectures, that are available to give them a better cost point, a better operational expense point.

France Telecom, consolidating 17 data centers to two, is a typical example. It's not abnormal, when a company goes through this three-step process, to make a significant change to the IT footprint and to how they do business, supporting lines of business that require new applications and new users to come online relatively quickly.

Gardner: Doug, how would you characterize the France Telecom approach? Which of the three did they emphasize?

Emphasis on migration

Oathout: They really emphasized the migration as the biggest one. They migrated a number of applications to newer architectures and they also modernized their application base. So, they focused on the last two, the modernization and the migration, as the key components for them in getting their cost reductions.

Gardner: Okay, any other examples?

Acklin: I'll talk about another one. The Ministry of Education in Italy (MIUR) is another good example, where a client has gone on this whole journey. In that situation, they had outsourced some of their capabilities to us -- some of their IT management. But, they were challenged with some difficult times. The economy hit them hard, and being a government agency, they were under a lot of pressure to consolidate IT departments globally.

It's a very, very large organization, built up over the years. Most of the applications were built back in the early 1980s or earlier. They were mainframe-based COBOL, CICS, and DB2 type applications, and they really weren't servicing the business very well. That was making things a real challenge.

In addition to all of the legacy technology, the CIO also had the challenge of consolidating distributed IT departments as part of this activity.

On top of all that, they were given the challenge of reducing their headcount significantly due to the economic crisis. So, it became a very urgent journey for this client, and they began going through it. Their goal was, as I said, reducing IT costs, improving agility, being able to respond to change, and doing a lot more with far fewer people in a consolidated manner.

As they went through their transformation, they went through the whole thing. They assessed what they had. They put their strategy together and where they wanted to go. They figured out what applications they needed and how they were going to operate.

They optimized the road map for them to reach their future state, established a governance program to keep everything in alignment while they went on this journey, and then they executed this journey.

They used a variety of methods for modernizing their applications and migrating over to the lower cost platforms. Some of them they re-architected into new service-based models to provide services to their students and teachers through the web.

At the end they ended up seeing a 2X productivity improvement and return on investment (ROI) in less than 18 months. They reduced their app support by over 30 percent and they reduced their new development cost by close to 40 percent.

Those are significant challenges that the CIO took on, and the combination of improving their applications and infrastructure through an outsourcing and modernization model helped them achieve their goals. The CIO will tell you that they could never have survived all the pressure they were under without going on a journey like this.

Gardner: Shawna, do we have a third example?

No particular order

Rudd: This example, without naming a specific client, makes another point: the things we're talking about don't have to occur in this particular one-two-three order.

I know of other clients for whom we've saved around 20 percent by outsourcing their mainframe environments. Then, after successfully completing the transition of those management responsibilities, we've been able to further reduce their cost by another 20 percent simply by identifying opportunities for code optimization: duplicate code that could be eliminated, dead code, and runtime inefficiencies. That enabled us to reduce the number of apps they required to manage their business, along with the associated software costs, support costs, and so on.
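Findings like the dead-code example Shawna describes typically come from static analysis of the application portfolio. As a toy illustration of the idea only (in Python rather than mainframe COBOL, and far simpler than real portfolio-analysis tooling), here is a sketch that flags functions defined in a module but never referenced anywhere in it:

```python
import ast

def unused_functions(source: str) -> set:
    """Return names of top-level functions that are defined but never
    referenced elsewhere in the given module source."""
    tree = ast.parse(source)
    defined = {n.name for n in ast.walk(tree)
               if isinstance(n, ast.FunctionDef)}
    referenced = {n.id for n in ast.walk(tree)
                  if isinstance(n, ast.Name)}
    return defined - referenced

sample = """
def used():
    return 1

def dead():
    return 2

print(used())
"""
print(unused_functions(sample))  # {'dead'}
```

Production dead-code analysis also has to account for dynamic dispatch, external callers, and cross-module references, which is why real engagements use much heavier tooling; this only shows the core define-versus-reference idea.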

Then there were other clients for whom it made more sense for us to consider outsourcing after the completion of their modernization or migration activities. Maybe they already had modernization and migration efforts underway or they had some on the road map that were going to be completed fairly quickly. It made more sense to outsource as a final step of cost reduction, as opposed to an upfront step that would help generate some funding for those modernization efforts.

Gardner: For those folks who see the need in their organization and understand the rationale behind these various steps, where do they get started, and how can they find more information? Let me start with you, Doug. Are the information resources easily available?

Oathout: Well, Dana, there are a ton of different places to start. There's your HP reseller, the HP website, and HP Services. If a customer is thinking about embarking on this journey, I'd contact HP Services and have them come out and do a consulting engagement or an assessment to lay out the steps required.

If you're embarking on the modernization journey, contact your HP reseller or HP seller and have them show you how to do consolidation and virtualization to really modernize your infrastructure. If you're having the conversation about applications, contact HP Services. They can look at your application portfolio and show you the experience they have in modernizing those applications or migrating them to modern equipment.

Gardner: Any additional paths to how to start from your perspective, Larry?

Acklin: Let me add to that. If you're in a situation where you're thinking about modernization but you're not positive, and you're still trying to get a good understanding of what's involved, attend one of our workshops. We offer something called the Modernization Transformation Experience Workshop. It's a one-day activity workshop, in a slide-free environment, where we take you through the whole journey you'll go on.

We'll cover everything from how to figure out what you have and what you're planning, to how to build the road map for getting to the future state, as well as all the different ways the journey will impact your business and enterprise along the way, whether you're talking about technology infrastructure, architecture, applications, business processes, or even the change management of how it impacts your people.

We go through that entire journey in this workshop, so you come out understanding what you're getting yourself into and how it can really affect you as you go forward. But that's not the only starting point. You can also jump into this modernization journey at any point in the process.

Maybe, for example, you've already figured out that you need to do this. Maybe you've tried some things on your own in the past, but really need to get external help. We have assessment activities that allow us to jump in at any point along this journey.

Whether it's to help you see where there are code vulnerabilities within your existing applications, visually showing you what those look like and where the opportunities for modernization are, or whether it's to do a full assessment of your environment and figure out how your apps and your infrastructure are working for your business, or in most cases not working for your business, we can jump in at any stage of that journey.

As Doug mentioned, HP can help you figure out the right place for beginning that journey. We have hundreds of modernization experts globally who can help you figure out where to start.

Gardner: Do we have any other closing thoughts on the process of getting started?

Acklin: Let me just mention one other item. We talked about the cost of doing nothing. Don't let any fears or doubts about this journey stop you from beginning it. Many things can get you in trouble with that cost of doing nothing, and a time is coming when you won't be able to make those changes. So don't let those fears stop you from going on that journey.

An example of this is financial. Many of the clients we talk to don't know how they would pay for a journey like this. Actually, you have a lot of options right in front of you that you can take advantage of. Our modernization consultants can give you good methods for covering this, how to put together things like these three-phase activities, or how to go on these journeys in a way that can still work for you, even in tough financial times.

Gardner: Great. We've been talking about improving overall data-center productivity by leveraging available sourcing options, as well as moving to modernized applications and infrastructure. I want to thank our guests on today's panel. We've been here with Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Thank you, Shawna.

Rudd: Thank you.

Gardner: And Larry Acklin, Product Marketing Manager for Application Modernization Services at HP. Thank you, Larry.

Acklin: Thank you.

Gardner: And Doug Oathout, Vice President of Converged Infrastructure at HP Enterprise Services. Thanks, Doug.

Oathout: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Monday, January 18, 2010

Technical and Economic Incentives Mount Around Seeking Alternatives to Mainframe Applications

Transcript of the third in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences. For more on Application Transformation, and to get real time answers to your questions, register to access the virtual conferences for your region:

Access the Asia Pacific event.
Access the EMEA event.
Access the Americas event.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on why it's time to exploit alternatives to mainframe computing applications and systems. As enterprises seek to cut their total IT costs, they need to examine alternatives to legacy systems that are hard to change and manage. There are a growing number of technical and economic incentives for modernizing and transforming applications and the data-center infrastructure that supports them.

Today, we'll examine some case studies that demonstrate how costs can be cut significantly, while productivity and agility are boosted by replacing aging systems with newer, more efficient standards-based architectures.

This podcast is the third and final episode in a series that examines "Application Transformation: Getting to the Bottom Line." The podcast, incidentally, runs in conjunction with a series of Hewlett-Packard (HP) webinars and virtual conferences on the same subject.

Access the Asia Pacific event. Access the EMEA event. Access the Americas event.

Here with us now to examine alternatives to mainframe computing is John Pickett, Worldwide Mainframe Modernization Program manager at HP. Hello, John.

John Pickett: Hey, Dana. How are you?

Gardner: Good, thanks. We're also joined by Les Wilson, America's Mainframe Modernization director at HP. Welcome to the show, Les.

Les Wilson: Thank you very much, Dana. Hello.

Gardner: And, we're also joined by Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Welcome back, Paul.

Paul Evans: Hello, Dana.

Gardner: Paul, let me start with you, if you don't mind. We hear an awful lot about legacy modernization, and we usually look at it from a technical perspective. But it appears to me, in many of the discussions I have with organizations, that they are looking for more strategic levels of benefit, finding business agility and flexibility benefits. The technical and economic considerations, while important in the short term, pale in comparison to some of the longer term and more strategic benefits.

Pushed to the top

Evans: Where we find ourselves now -- and it has been brought on by the economic situation -- is that it has just pushed to the top an issue that's been out there for a long time. We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, we know, and a lot of CIOs or IT directors listening to this broadcast will know, that the legacy applications environment has been somewhat ignored.

Now, with the pressure on cost, people are saying, "We've got to do something." But what can come out of that, and what is coming out of that? People are looking at this and saying, "We need to accomplish two things. We need a longer term strategy, and we need an operational plan that fits into it, supported by our annual budget."

Foremost is this desire to get away from the ridiculous backlog of application changes, to get more agility into the system, and to make these core applications, the ones that provide the differentiation and the innovation for organizations, able to communicate with a far more mobile workforce.

At an event last week in America, a customer came up to me and said, "Seventy percent of our applications are batch running on a mainframe. How do I go to a line-of-business manager and say that I can connect that to a guy out there with a smartphone? How do I do that?" Today, it looks like an impossible gap to get across.

What people have to look at is where we're going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there?

Gardner: In the first two parts of our series, we looked at several case studies that showed some remarkable return on investment (ROI). So, this is not just a nice to have strategic maturity process, but really pays dividends financially, and then has that longer term strategic roll-out.

Evans: Absolutely. These things have got to pay for themselves. An analyst last week looked me in the face and said, "People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary."

One of the sessions you will hear in the virtual conference will be from Geoffrey Moore, where he talks about this whole difference between core applications and context -- context being applications that are there for productivity reasons, not for innovation or differentiation.

Lowest-cost platform


With a productivity application, you want delivery on the lowest-cost platform you possibly can. The problem is that 20 or 30 years ago, people put everything on the mainframe and wrote it all in code. Therefore, the challenge now is: what do you not need in code that could be in a package? What do you not need on the mainframe that could be running on a much lower-cost infrastructure, or through a completely different means of delivery, such as software as a service (SaaS)?

The point is that there are demonstrably less expensive ways of delivering these things. People just have to lift their heads up and look around, come and talk to us, and listen to the series. They will begin to see people who have done this before and demonstrated that it works, as well as some of the amazing financial rewards that can be generated from this sort of work.

Gardner: John Pickett, let's go to you. We've talked about this, but I think showing it is always more impressive. The case studies that demonstrate the real-world returns tend to be the real education points. Could you share with us some of the case studies that you will be looking at during the upcoming virtual conference and walk us through how the alternative to mainframe process works?

Pickett: Sure, Dana. As Paul indicated, it's not really just about the overall cost, but it's about agility and being able to leverage the existing skills as well.

One of the case studies that I will go over is from the National Agricultural Cooperative Federation (NACF). It's a mouthful, but take a look at the number of banks that the NACF has. It has 5,500 branches and regional offices, so essentially it's one of the largest banks in Korea.

One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.

At the same time, they also knew that the path to the future was going to be through the IT systems that they had and they were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their current mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360 degree view of the customer.

We talk about reducing costs. In this particular example, they were able to save $40 million on an annual basis. That's nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.

But, from a business perspective, they were able to reduce their time to market. For developing a new product or service, they decreased the time from one month to five days.

Makes you more agile

If you are a bank and now you can produce a service much faster than your competition, that certainly makes it a lot easier and makes you a lot more agile. So, the agility is not just for the data center, it's for the business as well.

To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it's not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously.

Gardner: I imagine that with many of these mainframe systems, particularly in a banking environment, they could be 15 or 20 years old. The requirements back then were dramatically different. If the world had not changed in 20 years, these systems might be up to snuff, but the world has changed dramatically. Look at the change we have seen in just the last nine months. Is that what we are facing here? We have a general set of different requirements around these types of applications.

Pickett: There are a couple of things, Dana. It's not only different requirements; it's also being driven by a couple of different factors. Paul mentioned the cost and being able to be more efficient in today's economy. Any data-center manager or CIO is challenged by that today. Given the high cost of legacy and mainframe environments, there's a significant amount of money to be saved.


Another example of what we were just talking about: if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain. It ended up modernizing 14,000 million instructions per second (MIPS). Even though the applications had been developed over a number of years and decades, they were able to make the transition in a relatively short time, moving forward in a three- to six-month time frame.

With that, they saw a 2x increase in their batch performance. It's recognized as one of the largest batch re-hosts out there. And it's not just an HP thing. They worked with Oracle as well, to drive Oracle 11g within the environment.

So, it's taking the old, but also integrating with the new. It's not a one-size-fits-all. It's identifying the right choice for the application, and the right platform for the application as well.

Gardner: So, this isn't a matter of swapping out hardware and getting a cheaper fit that way. This is looking at the entire process, the context of the applications, the extended process and architectural requirements in the future, and then looking at how to make the transition, the all important migration aspect.

Pickett: Yes. As we heard last week at a conference that both Paul and I were at, if all you're looking to do is to take your application and put it on to a newer, shinier box, then you are missing something.

Gardner: Let's go now to Les Wilson. Les, tell us a little bit about some studies that have been done and some of the newer insights into the incentives as to why the timing now for moving off of mainframes is so right.

Customer cases

Wilson: Thanks, Dana. I spend virtually every day talking directly to customers and to HP account teams on the subject of modernizing mainframes, and I'll be talking in detail about two particular customer case studies during the webinar.

Before I get into those details though, I want to preface my remarks by giving you some higher level views of what I see happening in the Americas. First of all, the team here is enjoying an unprecedented demand for our services from the customer base. It's up by a factor of 2 over 2008, and I think that some of the concepts that John and Paul have discussed around the reasons for that are very clear.

There's another point about HP's capabilities, as well, that makes us a very attractive partner for mainframe modernization solutions. Following the acquisition of EDS, we are really able to provide a one-stop shop for all of the services that any mainframe customer could require.

That includes anything from optimization of code and refactoring of code on the mainframe itself, all the way through re-hosting, migration, and transformation services. We've positioned ourselves as the definitive alternative for IBM mainframe customers.

In terms of customer situations, we've always had a very active business working with organizations in manufacturing, retail, and communications. One thing that I've perceived in the last year specifically -- it will come as no surprise to you -- is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.

We're seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.

Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We've been very excited by customer interest in financial services and public sector. I just wanted to give you that by way of context.

In terms of the detailed case studies, when John Pickett first asked me to participate in the webinar, as well as in this particular recording, I was kind of struck with a plethora of choices. I thought, "Which case study should I choose that best represents some of the business that we are doing today?" So, I've picked two.

The first is a project we recently completed at a wood and paper products company. This is a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project of applications that are written in the Software AG environment. I hope that many of the listeners will be familiar with the database ADABAS and the language, Natural. These applications were written some years ago, utilizing those Software AG tools.

Demand was lowered

They had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.

Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer's investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.

By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. They are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment.

They're very, very pleased with the work that's being done. Indeed, we're now looking at an additional two applications in other parts of their business with the aim of re-hosting those applications as well.


The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.

Today, we're seeing customers driving for that degree of agility you have just mentioned. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. They never allow their name to be used in these webcasts, so we will just refer to them as "a manufacturing company." They have a large number of businesses in their portfolio.

Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was precisely what you just said, Dana, that the ability to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.

They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future.

Gardner: You mentioned earlier a two-times increase in demand since 2008. I wonder if this demand increase is a blip. Is this something that is just temporary, or has the economy -- and some people call it the reset economy -- actually changed the game, so that IT needs to respond to it?

In a nutshell the question is whether this is going to be a two-year process, or are we changing the dynamic of IT and how business and IT need to come together in general?

Not a blip

Pickett: First, Dana, it's not a blip at all. We're seeing increased movement from mainframe over to HP systems, whether it's on an HP-UX platform or a Windows Server or SQL platform. Certainly, it's not a blip at all.

As a matter of fact, just within the past week, there was a survey by AFCOM, a group that represents data-center workers. It indicated that, over the next two years, 46 percent of the mainframe users said that they're considering replacing one or more of their mainframes.

Now, let that sink in -- 46 percent say they are going to be replacing high-end systems over the next two years. That's an astounding number. So, it certainly points to a trend that we are seeing in that particular environment -- not a blip at all.

Dana, that also points to the skills piece. A lot of times, when we talk to people in a mainframe environment, the question is, "I've got a mainframe, but what about the mainframe people I have? They're good people, they know the process, and they've been around for a while." We've found that moving to an HP centralized environment is really a soft landing for these people.

They can use the process skills that they have developed over time. They're absolutely the best at what they do in the process environment, but it doesn’t have to be tied to the legacy platform that they have been working on for the last 10 or 20 years.


We've found that you can take those same processes and apply them to a large HP Integrity Superdome or NonStop environment. Those skills migrate very well, landing in a place where people can use and develop them for years to come.

Gardner: Les, why do you see this as a longer term trend, and what are the technological changes that we can expect that will make this even more enticing, that is to say, lower cost, more efficient, and higher throughput systems that allow for the agility to take place as well?

Wilson: Good question, Dana, and there are two parts to it. Let me address the first one, about the trend. I've been involved in this kind of business on and off since 1992. Our numbers going back to the late 1980s show that, at that time, there were over 50,000 mainframes installed worldwide.

When I next got into this business in 2004, the analyst firms confirmed that the number was now around 15,000-16,000. Just this week, we have had information, confirmed by another analyst, that the number of installed mainframes is now at about 10,400. We've seen a 15-20 year trend away from the mainframe, and that will continue, given this unprecedented level of interest we are seeing right now.

You talked about technology trends. Absolutely. Five years ago, it would have been fair to say that there were still mainframe environments and applications that could not be replaced by their open-system equivalents. Today, I don't think that that's true at all.

Airline reservation system

To give you an example, HP, in cooperation with American Airlines, has just announced that we're going to be embarking on a three-year transition of all of the TPF-based airline reservation systems that HP has been providing as services to customers for 20 years.

That TPF environment will be re-engineered in its entirety over the course of the next three years to provide those same and enhanced airline reservation systems to customers on a Microsoft-HP bladed environment.

That's an unprecedented change in what was always seen as a mainframe-centric application, airline reservations, given the throughput and the number of transactions that need to be processed every second. When these kinds of applications can be transformed to open-systems platforms, it's open season on any mainframe application.

Furthermore, the trend in terms of open-systems price performance improvement continues at 30-35 percent per annum. You just need to look at the latest Intel processors, whether they be x86 or Itanium-based, to see that. That price performance trend is huge in the open systems market.

I've been tracking what's been going on in the IBM System Z arena, and since there are no other competitors in this market, we see nothing more than 15 percent, maybe 18 percent, per annum price performance improvement. As time goes on, HP and industry standard platforms continue, and will continue, to outpace the mainframe technology. So, this trend is bound to happen.


Gardner: Paul, we've heard quite a bit of compelling information. Tell us about the upcoming conference, and perhaps steps that folks can take to get more information or even get started as they consider their alternatives to mainframes?

Evans: Based on what you've heard from John and Les, there is clearly an interest out there in terms of understanding. I don't think this is, as they say in America, a slam dunk. The usual statement is, "How do you eat an elephant?" And the answer is, "One bite at a time."

The point here is that this is not going to happen overnight. People have to take considered opinions. Investments here are huge. The importance of legacy systems is second to none. All that means that the things John and Les are talking about are going to happen strategically, over a long time. But we have people coming to us every day saying, "Please, can you help me understand how I start? Where do I go now, or where do I go next week, next month, or next year?"

The reason behind the conference was to take a multi-sided view of this. One side is the business requirement, which people like Geoffrey Moore will be talking about -- where the business is going and what it needs.

We'll be looking at a customer case study from the Italian Ministry of Education, looking at how they used multiple modernization strategies to fix their needs. We'll be looking at tools we developed, so that people can understand what the code is doing. We'll be hearing from Les, John, and customers -- Barclays Bank in London -- about what they have been doing and the results they have been getting.

Then, at the very end, we'll be hearing from Dale Vecchio, vice president of Gartner research, about what he believes is really going on.

Efficiency engine

The thing that underpins this is that the business requirement several decades ago drove the introduction of the mainframe. People needed an efficiency engine for doing payroll, human resources, whatever it may be, moving data around. The mainframe was invented and was built perfectly. It was an efficiency engine.

As time has gone on, people look at technology now to become an effectiveness engine. We've seen the blending of technologies between mainframes, PCs, and midrange systems. People now take this whole efficiency thing for granted. No one runs their own payroll, even to the point that people now look to BPOs or those sorts of things.

As we go forward, with people being highly mobile, with mobile devices dramatically exploding all over the place in terms of smartphones, Net PCs, or whatever, people are looking to blend technologies that will deliver both the efficiency and the effectiveness, but also the innovation. Technology is now the strategic asset that people will use going forward. There needs to be a technological response to that.

Over the last year or two, either John or Les referred to the enormous amounts of raw power we can now get from, say, an Intel microprocessor. What we want to do is harness that power and give people the ability to innovate and differentiate, but, at the same time, run those context applications that keep their companies alive.

That's really what we're doing with the conference -- demonstrating, in real terms, how we can get this technology to the bottom-line and how we can exploit it going forward.

Gardner: Well, great. We've been hearing about some case studies that demonstrate how costs can be cut significantly, while productivity and agility are boosted.

I want to thank our guests in today’s discussion. We've been joined by John Pickett, Worldwide Mainframe Modernization Program manager. Thank you, John.

Pickett: Thank you, Dana.

Gardner: We've also been joined by Les Wilson, America’s Mainframe Modernization director. Thank you, Les.

Wilson: Thank you for the opportunity, Dana.

Gardner: And also Paul Evans, worldwide marketing lead on Applications Transformation at HP. Thanks again Paul.

Evans: It's a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.








Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Transcript of the third in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Thursday, October 29, 2009

Separating Core from Context Brings High Returns in Legacy Application Transformation

Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real-time answers to your questions, register for the virtual conference for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on separating core from context, when it comes to legacy enterprise applications and their modernization processes. As enterprises seek to cut their total IT costs, they need to identify what legacy assets are working for them and carrying their own weight, and which ones are merely hitching a high cost -- but largely unnecessary -- ride.

A widening cost-and-productivity divide exists between older, hand-coded software assets, supported by aging systems, and replacement technologies on newer, more efficient, standards-based systems. Somewhere in the mix, there are core legacy assets distinct from so-called contextual assets: peripheral legacy processes and tools that are costly vestiges of bygone architectures. There is legacy wheat and legacy chaff.

Today we need to identify productivity-enhancing resources and learn how to preserve and modernize them -- while also identifying and replacing the baggage or chaff. The goal is to find the most efficient and low-cost means to support them both, through up-to-date data-center architecture and off-the-shelf components and services.

This podcast is the second in a series of three to examine Application Transformation: Getting to the Bottom Line. We will discuss the rationale and likely returns from assessing the true role and character of legacy applications and their actual costs. The podcast, incidentally, runs in conjunction with some Hewlett-Packard (HP) webinars and virtual conferences on the same subject.

Register here to attend the Asia Pacific event on Nov. 3. Register here to attend the EMEA event on Nov. 4. Register here to attend the Americas event on Nov. 5.

With us to delve deeper into the low cost, high reward transformation of legacy enterprise applications is Steve Woods, distinguished software engineer at HP. Hello, Steve.

Steve Woods: Hello. How are you doing?

Gardner: Good. We are also joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Hello, Paul.

Paul Evans: Hello, Dana. Thank you.

Gardner: In the earlier podcast in our series, we talked through a case study about transformation and why it's important -- the example of a very large education organization in Italy and what it found. We looked at how this can work very strategically and with great economic benefit, but now we're trying to get into a bit more of the how.

Tell us a little bit, Paul, about what the stakes are. Why is it so important to do this now?

Evans: In a way, this podcast is about two types of IT assets. You talked before about core and context. That whole approach to classifying business processes and their associated applications was invented by Geoffrey Moore, who wrote Crossing the Chasm, Inside the Tornado, etc.

He came up with this notion of core and context applications. Core being those that provide the true innovation and differentiation for an organization. Those are the ones that keep your customers. Those are the ones that improve the service levels. Those are the ones that generate your money. They are really important, which is why they're called "core."

Lower cost

The "context" applications were not less important, but they are more for productivity. You should be looking to understand how that could be done in terms of lower cost provisioning. When these applications were invented to provide the core capabilities, it was 5, 10, 15, or 20 years ago. What we have to understand is that what was core 10 years ago may not be core anymore. There are ways of effectively doing it at a much different price point.

As Moore points out, organizations should be looking to build "core," because that is the unique intellectual property of the organization, and to then buy "context." They need to understand, how do I get the lowest-cost provision of something that doesn't make a huge difference to my product or service, but I need it anyway.

A human resources system may not be something that you are going to build your business model on, but you need one. You need to be able to service your employees and all the things they need. But, you need to do that at the lowest-cost provision. As time has gone on, this demarcation between core and context has gotten really confused.

As you said, we're putting together a series of events, and Moore will be the keynote speaker on these events. So, we will elucidate more around core and context.

The other speaker at the event is also an inventor, this time from inside HP, Steve Woods. Steve has taken this notion of core and context and has teamed it with some extremely exciting technology and very innovative thinking to develop some unique tools that we use inside the services from HP, which allow us then really to dive into this. That's going to be one of the sessions that we're also going to be delivering on this series of events.

Gardner: Okay, Steve Woods, we can use a lot of different terms here, "core and context," "wheat and chaff." I thought another metaphor would be "baby and bathwater." What happens is that it's difficult to separate the good from the potentially wasteful in the legacy inventory.

I think this has caused people to resist modernizing. They have resisted tinkering with legacy installations in the past. Why are they willing to do it now? Why the heightened interest at this time?

Woods: A good deal of it has to do with the pain that they're going through. We have had customers who had assessments with us before, as much as a year ago, and now they're coming back and saying they want to get started and actually do something. So, a good deal of the interest is caused by the need to drive down costs.

Also, there's the realization that a lot of these tools -- extract, transform, and load (ETL) tools, enterprise application integration (EAI) tools, reporting tools, and business process management (BPM) tools -- are proving themselves now. We can no longer say that there is a risk in going to these tools. People realize that the strength of these tools is that they bring a lot of agility, solve skill-set issues, and make you much more responsive to the business needs of the organization.

Gardner: This definition of core, as Paul said, is changing over time and also varies greatly from organization to organization. Is there no one size fits all approach to this?

Context not code

Woods: I don't think there really is a one size fits all, but as we use our tools to analyze code, we find sometimes as much as 65 percent or more of an application could really not be core. It could just be context.

As we make these discoveries, we find that in the organization there are political battles to be fought. When you identify these elements that are not core and that could be moved out of handwritten code, you're transferring power from the developers -- say, of COBOL -- to the users of the more modern tools, like the BPM tools.

So there is always an issue. What we try to do, when we present our findings, is to be very objective. You can't argue with the finding that 65 percent of the application is not doing core work. You can then focus the conversation on something more productive: What do we do with this? The worst thing you could possibly do is take a million lines of COBOL that's generating reports and rewrite it in Java or C# as hand-written code.

We take the concept of core versus context not just to a possible off-the-shelf application, but to the architectural-component level. In many cases, we find that this helps them identify legacy code that could be moved very incrementally to these new architectures.

Gardner: What's been the holdup? What's difficult? You did mention politics, and we will get into that later, but what's been the roadblock from the perspective of these tools? Why has it been shrinking, in terms of the ability to automate and manage these large projects?

Woods: A typical COBOL application -- this is true of all legacy code, but particularly mainframe legacy code -- can be as much as 5, 10, or 15 million lines of code. I think the sheer size of the application is an impediment. There is some sort of inertia there. An object at rest tends to stay at rest, and it's been at rest for years, sometimes 30 years.

So, the biggest impediment is the belief that it's just too big and complex to move, and even too big and complex to understand. Our approach is a very lightweight process, where we go in, answer a lot of questions, remove a lot of uncertainty, and give them some very powerful visualizations and understanding of the source code and what their options are.

Gardner: So, as we've progressed in terms of the tools, the automation, and the ability to handle large sets of code, the inertia also involves the nontechnical aspects. What do we mean by politics? Are there fiefdoms? Are there territories? Is this strictly a traditional kind of human-nature thing? Perhaps you could help us understand that a bit better.

Doing things efficiently

Woods: The organizations that we go into have not been living in a vacuum; many of them have been doing greenfield development, where they start out saying they need a system that does primarily reporting, or a system that does primarily data integration. In most organizations, those fiefdoms, if you will, have grown pretty robust, and they will continue to grow. The realization is that they actually can do those things quite efficiently.

When you go to the legacy side of the house, you start finding that 65 percent of this application is just doing ETL. It's just parsing files and putting them into databases. Why don't you replace that with a tool? The big resistance there is that, if we replace it with a tool, then the people who are maintaining the application right now are either going to have to learn that tool or they're not going to have a job.

So, there's a lot of resistance, in the sense of "we don't want to lose any more ground to the target-architecture fiefdom, so we're not going to identify this application as having so many elements of context functionality." Our process, in a very objective way, just says these are the percentages we're finding. We'll show you the code, you can agree or disagree that that's what it's doing, and then let's make decisions based upon those facts.

If we get the facts on the table, particularly visually, then we find that we get a lot of consensus. It may be partial consensus, but it's consensus nonetheless, and we open up the possibilities and different options, rather than just continuing to move through with hand-written code.

Gardner: Paul, you've mentioned in the past that we've moved from the nice-to-have to the must-have, when it comes to legacy applications transformation and modernization. The economy has changed things in many respects, of course, but it seems as if the lean IT goal is no longer something that's a vision. It's really moved up the pecking order or the hierarchy of priorities.

Is this perhaps something that's going to break this political logjam? Are the business and financial-outcome folks simply going to steamroll these political issues?

Evans: Well, I totally think so, and it's happening already. If you look at this whole core-context thing, at the moment, organizations are still in survival mode. Money is still tight in terms of consumer spending. Money is still tight in terms of company spending. Therefore, you're in this position where keeping your customers or trying to get new customers is absolutely fundamental for staying alive. And, you do that by improving service levels, improving your services, and improving your product.

If you stay still and say, "Well, we'll just glide for the next 6 to 12 months and keep our fingers crossed," you're going to be in deep trouble. A lot of people are trying to understand how to use the newer technologies, whether it's things like Web 2.0 or social networking tools, to maintain that customer outreach.

Those of us who went to business or marketing school remember -- it takes $10 to get a customer into your store, but it only takes $1 to keep them coming back. People are now worrying about those dollars. How much do we have to spend to keep our customer base?

Therefore, the line-of-business people are now pushing on technology and saying, "You can't back off. You can't not give us what we want. We have to have this ability to innovate and differentiate, because that way we will keep our customers and we will keep this organization alive."

Public and private sectors

That applies equally to the public and private sectors. The public sector organizations have this mandate of improving service, whether it's in healthcare, insurance, tax, or whatever. So all of these commitments are being made and people have to deliver on them, albeit that the money, the IT budget behind it, is shrinking or has shrunk.

So, the challenge here is, "Last year I ran my IT department on my theoretical $100. I spent $80 on keeping things going, and $20 on improving things." That was never enough for the line-of-business manager. They will say, "I want to make a change. I want it now, or I want it next week. I don't want it in six months time. So explain to me how you are going to do that."

That was tough a year ago, but the problem now is that your $100 IT budget is now $80. Now, it's a bit of a challenge, because all the money you've got, you're going to spend on keeping the old stuff alive. I don't think the line-of-business managers, or whoever they are, are going to sit back and say, "That's okay. We don't mind." They're going to come and say that they expect you to innovate more.

This goes back to what Steve was talking about, what we talked about, and what Moore will raise in the event, which is to understand what drives your company. Understand the values, the differentiation, and the innovations that you want and put your money on those and then find a way of dramatically reducing the amount of money you spend on the contextual stuff, which is pure productivity.

Steve's tools are probably the best thing out there today for highlighting to an organization: "You don't need this in handwritten code. You could put this into a low-cost package, running on a low-cost environment, as opposed to running it in COBOL on a mainframe." That's how people save money, and that's how we've seen people get, as we talked about earlier, a return on investment (ROI) of 18 months or less.

So it is possible, it can be done, and it's definitely not as difficult as people think. The point of the tools is that they allow us to see the code. They allow us to understand what's good and bad and to make very clear, rational, and logical decisions.

Gardner: Steve Woods, we spoke earlier about how the core assets are going to be variable from organization to organization, but are there some common themes with the contextual services? We certainly see a lot of very low-cost alternatives now creeping up through software as a service (SaaS), cloud-based, outsourced, mix-sourced, co-located, and lots of different options. Is there some common theme now among what is not core that organizations need to consider?

Woods: Absolutely. One of the things that we do find, when we're brought in to look at legacy applications, is that, by virtue of the fact that they are still around, the applications have resisted all the waves of innovation that preceded them. They tend to be of a very definite nature.

A number of them tend to be big data hubs. One of the first things we ask for is the architectural topology diagram, if they have it, or we just draw it on a whiteboard. They tend to be big spiders: there's a central hub database, and you see them start drawing all these different lines to other systems within the organization.

The things that have been left behind -- this is the good news -- tend to be the very things that are amenable to moving to a modern architecture in a very incremental way. It's not unusual to find that 50-65 percent of an application is just doing ETL functionality.
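As a purely illustrative aside -- this is not HP's assessment tooling, and the keyword list and comment convention below are assumptions -- the kind of percentage Woods cites could come from a crude static scan that counts the source lines dominated by file-parsing and database verbs:

```python
# Illustrative only: a crude static heuristic for estimating how much of a
# legacy program is doing ETL-style work (file handling plus database I/O).
# The keyword list is an assumption, not HP's actual analysis.

ETL_KEYWORDS = (
    "READ", "WRITE", "OPEN", "CLOSE",           # file handling
    "EXEC SQL", "INSERT", "SELECT", "UPDATE",   # database access
    "UNSTRING", "STRING",                       # COBOL-style field parsing
)

def etl_share(source_lines):
    """Return the fraction of non-blank, non-comment lines that look ETL-ish."""
    relevant = [
        line.strip().upper()
        for line in source_lines
        if line.strip() and not line.strip().startswith("*")  # '*' = comment
    ]
    if not relevant:
        return 0.0
    hits = sum(1 for line in relevant
               if any(keyword in line for keyword in ETL_KEYWORDS))
    return hits / len(relevant)

sample = [
    "* LOAD DAILY FEED",
    "OPEN INPUT FEED-FILE.",
    "READ FEED-FILE INTO WS-RECORD.",
    "UNSTRING WS-RECORD DELIMITED BY ',' INTO WS-A WS-B.",
    "EXEC SQL INSERT INTO CUSTOMER VALUES (:WS-A, :WS-B) END-EXEC.",
    "ADD 1 TO WS-COUNT.",
    "CLOSE FEED-FILE.",
]
print(f"{etl_share(sample):.0%} of lines look ETL-related")  # -> 83% of lines look ETL-related
```

Real assessments use far more sophisticated parsing, but even a heuristic like this can start the conversation about how much of an application is context rather than core.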

A good thing

The real benefit to that -- and this is particularly true in a tough economy -- is that if I can identify 65 percent of the application that's just doing data integration, and I create or I have already established the data integration center of excellence within the organization, already have those technologies, or implement those technologies, then I can incrementally start moving that functionality over to the new architecture. When I say incrementally, that's a good thing, because that's beneficial in two ways.

It reduces my risk, because I'm doing it a step at a time. It also produces a much better ROI, because the return on the incremental improvement trickles in over time, rather than waiting 18 months or two years for some big-bang improvement. Identifying this context code can give you a lot of incremental ROI opportunities, and a much more solid picture for IT investment decisions.
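To illustrate Woods's point about trickling returns with some assumed, hypothetical numbers (the cost and savings figures below are invented for the example), compare the cumulative cash position of an incremental migration with a big-bang rewrite of the same scope:

```python
# Illustrative arithmetic only (all figures are assumptions): comparing the
# cash profile of an incremental migration with a "big bang" rewrite that
# delivers nothing until the final quarter.

QUARTERS = 8
TOTAL_COST = 800_000      # assumed total project cost, spent evenly per quarter
FULL_SAVINGS = 120_000    # assumed savings per quarter once 100% is migrated

def cumulative_net(migrated_share_by_quarter):
    """Cumulative (savings - cost) per quarter, given the migrated share over time."""
    net, out = 0.0, []
    for share in migrated_share_by_quarter:
        net += FULL_SAVINGS * share - TOTAL_COST / QUARTERS
        out.append(round(net))
    return out

# Incremental: one-eighth of the context functionality moves each quarter.
incremental = cumulative_net([q / QUARTERS for q in range(1, QUARTERS + 1)])
# Big bang: no savings at all until everything cuts over in the last quarter.
big_bang = cumulative_net([0] * (QUARTERS - 1) + [1.0])

print("incremental:", incremental)
print("big bang:   ", big_bang)
```

Under these assumptions, the incremental path ends the two years roughly $420,000 better off, because each migrated slice starts paying back while the rest of the work is still in flight.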

Gardner: So, one of these innovations that's taken place for the past several years is the move towards more distributed data, hosting that data on lower-cost storage architectures, and virtualizing behind the database or the storage itself. That can reduce cost dramatically.

Woods: Absolutely. One of the things that we feel is that decentralizing the architecture improves your efficiency and your redundancy. There is much more opportunity for building a solid, maintainable architecture than there would be if you kept a sort of monolithic approach that's typical on the mainframe.

Gardner: Once we've done this exercise, variable as it may be from organization to organization, separating the core from the non-core, what comes next? What's the next step that typically happens as this transformation and modernization of legacy assets unfolds?

Woods: That's a very good question. It's really important to understand this leap in logic here. If I accept the notion that a majority of the code in a legacy application can be moved to these model driven architectures, such as BPM and ETL tools, the next premise is, "If I go out and buy these tools, a lot of functionality is provided with these tools right out of the box. It's going to give me my monitoring code, my management code, and in many cases, even some of the testing capabilities are sort of baked into the product."

If that's true, then the next leap of logic is that in my 1.5 million lines of COBOL or my five million lines of COBOL there is a lot of code that's irrelevant, because it's performing management, monitoring, logging, tracing, and testing. If that's true, I need to know where it's at.

The way you find where it's at is identifying the duplicate source code, what we call clone code. Because when you find the clone code, in most cases, it's a superset of that code that's no longer relevant, if you are making this transformation from handwritten code to a model-driven architecture.

What I created at HP is a tool, an algorithm, that can go into any language legacy code and find the duplicate code, and not only find it, but visualize it in very compelling ways. That helps us drill down to identify what I call the unintended design. When we find these unintended designs, they lead us to ask very critical questions that are paramount to understanding how to design the transformation strategy.

So, if you accept the premise of moving context code to componentized architecture, then the next thing you should be looking for is where is the clone code and how is it arranged?
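Woods's clone-finding algorithm is proprietary, but the general technique he alludes to -- grouping normalized sliding windows of source lines and flagging windows that recur -- can be sketched in a few lines (illustrative only; the window size and normalization here are assumptions):

```python
# A minimal sketch of duplicate ("clone") code detection, in the spirit of
# what Woods describes. This is not HP's algorithm -- just the common
# technique of normalizing source lines, sliding a fixed-size window over
# them, and reporting any window that occurs more than once.

from collections import defaultdict

def find_clones(lines, window=3):
    """Map each repeated window of `window` normalized lines to its start offsets."""
    normalized = [" ".join(line.split()).upper() for line in lines]  # squeeze whitespace
    seen = defaultdict(list)
    for i in range(len(normalized) - window + 1):
        key = "\n".join(normalized[i:i + window])
        seen[key].append(i)
    return {key: starts for key, starts in seen.items() if len(starts) > 1}

source = [
    "MOVE ZERO TO WS-TOTAL.",
    "PERFORM READ-RECORD.",
    "IF WS-EOF = 'Y' GO TO WRAP-UP.",
    "ADD WS-AMT TO WS-TOTAL.",
    "MOVE ZERO TO WS-TOTAL.",
    "PERFORM READ-RECORD.",
    "IF WS-EOF = 'Y' GO TO WRAP-UP.",
]
clones = find_clones(source)
for starts in clones.values():
    print("clone block starts at offsets:", starts)  # -> clone block starts at offsets: [0, 4]
```

In practice, clone detectors also normalize identifiers and literals, so that copy-pasted code with renamed variables is still caught.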

Gardner: Do we have any examples of how this has worked in practice? Are there use cases or an actual organization that you are familiar with? What have been some of the results of going through this process? How long did it take? What did they save? What were the business outcomes?

Viewing the application

Woods: We've often worked with financial services companies and insurance companies, and we have just recently worked with one that gave us an application that was around 1.2 or 1.5 million lines of code. They said, "Here is our application," and they gave us the source code. When we looked into the source code, we found that there were actually four applications, if you looked at just the way the code was structured, which was good news, because it gives us a way of breaking down the functionality.

In this one organization, we found that a high percentage of that code was really just taking files, as I said before, unbundling those files, parsing them, and putting them into databases. So they have kind of let that be the tip of the spear. They said, "That's our start point," because they're often asking themselves where to start.

When you take handwritten code and move it to an ETL tool, there's ample industry evidence that a typical ROI over the course of four years can be between 150 percent and 450 percent improvement in efficiencies. That's just the magic of taking all this difficult-to-maintain spaghetti code and moving it to a very visually oriented tool that gives you much more agility and allows you to respond to changes in the business and the business' needs much more quickly and with skill sets that are readily available.

Gardner: You know, Paul, I've heard a little different story from some of the actual suppliers of legacy systems. A lot of times they say that the last thing you want to do is start monkeying around with the code. What you really want to do is pull it off of an old piece of hardware and put it on a new piece of hardware, perhaps with a virtualization layer involved as well. Why is that not the right way to go?

Evans: Now you've put me in an interesting position. I suppose our view is that there are different strategies. We don't profess one strategy to help people transform or modernize their apps. The first thing they have to do is understand them, and that's what Steve's tools do.

It is possible to take an approach that says that all we need to do is provide more horsepower. Somebody comes along and says, "Hey, transaction rates are dropping. Users are getting upset because an ATM transaction is taking a minute, when it should take 15 seconds. Surely all we need to do is just give the thing more horsepower and the problem goes away."

I would say the problem goes away -- for 12 months, maybe, or if you're lucky 18 -- but you haven't actually fixed the problem. You've just treated the symptoms.

At HP, we're not wedded to one style of computer architecture as the hub of what we do. We look at the customer requirement. Do we have systems whose performance is equal to, if not greater than, a mainframe's? Yeah, you bet we do. Our Superdome systems are like that. Are they the same price? No, they are considerably less. Do we have blades, PCs, and normal distributed servers? Yeah.

The point is that we don't have a preconceived view of what this thing should run on. That's one thing. We're not wedded to one architectural style. We look at the customer's requirements and then we understand what's necessary in terms of the throughput TP rates or whatever it may be.

So, there is obviously an approach that people can say, "Don't jig around." It's very easy to inject fear into this and just say to put more power underneath it, don't touch the code, and life will be wonderful. We're totally against that approach, but it doesn't mean that one of our strategies is not re-hosting. There are organizations whose applications would benefit from that.

We still believe that can be done on relatively inexpensive hardware. We can re-host an application by keeping the business logic the same, keeping the language the same, but moving it from an expensive system to a less expensive system.

Freeing up cash

People use that strategy to free up cash very quickly. It's one of the fastest ROIs we have, and they are beginning to save instantly. They make the decision that says, "We need to put that money back in the bank, because we need to do that to keep our shareholders happy." Or, they can reinvest that into their next modernization project, and then they're on an upward spiral.

There are approaches to everything, which is why we have seven different strategies for modernization to suit the customer's requirement, but I think the view of just putting more horsepower underneath, closing your eyes, and hoping is not the way forward.

Gardner: Steve, do you have anything more to add to that, treating the symptom rather than the real issues?

Woods: As Paul said, if you treat this as a symptom, we refer to that as a short-term strategy, just to save money to reinvest into the business.

The only thing I would really add is that the problem is sometimes not nearly as big as it seems. With the clone code that we find, there are all these different areas where we can look at the code and say that it may not be as relevant to the transformation process as you think it is.

I do a presentation called "Honey, I Shrunk the Mainframe." If you start looking at these different aspects -- the clone code and what I call the asymmetrical transformation from handwritten code to model-driven architecture -- you start really seeing it.

We see this, when we go in to do the workshops. The subject matter experts and the stakeholders very slowly start to understand that this is actually possible. It's not as big as we thought. There are ways to transform it that we didn't realize, and we can do this incrementally. We don't have to do it all at once.

Once we start having those conversations, those who might have been arguing for a re-host suddenly realize that rearchitecting is not as difficult as they thought, particularly if you do it asymmetrically. Maybe they should reconsider the re-host, apply the core-context concept, and start moving the context to well-proven platforms, such as ETL tools, reporting tools, and service-oriented architecture (SOA).

Gardner: Steve, tell us a little bit about how other folks can learn more about this, and then give us a sneak peek or preview into what you are going to be discussing at the upcoming virtual event.

Woods: That's one of the things that we have been talking about -- our tools, called the Visual Intelligence Tools. It's a shame you can't see me, because I'm gesturing with my hands as I talk, and if I had the visuals in front of me, I would be pointing to them. This is something to really appreciate -- the images that we give to our customers when we do the analysis. You really have to see them with your own eyes.

We are going to be doing a virtual event on November 3, 4, and 5, and during it you will hear some of the same things I've been talking about -- but you will see them, as I actually use the tools and show you what they do, what those images look like, and why they are meaningful to designing a transformation strategy.

Gardner: Very good. We've been learning more about Application Transformation: Getting to the Bottom Line. We've been able to separate core from context, to appreciate better how that's an intriguing strategy for approaching this legacy modernization problem, and to see how to begin enjoying much greater economic and business benefits as a result.

Helping us weave through this has been Steve Woods, distinguished software engineer at HP. Thanks for your input, Steve.

Woods: Thank you.

Gardner: We've also been joined by Paul Evans, worldwide marketing lead on Applications Transformation at HP. Paul, you are becoming a regular on our show.

Evans: Oh, I'm sorry. I hope I am not getting too repetitive.

Gardner: Not at all. Thanks again for your input.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of HP virtual conferences Nov. 3-5. For more on Application Transformation, and to get real-time answers to your questions, register for the virtual conference for your region:
Register here to attend the Asia Pacific event on Nov. 3.
Register here to attend the EMEA event on Nov. 4.
Register here to attend the Americas event on Nov. 5.


Transcript of the second in a series of sponsored BriefingsDirect podcasts on the rationale and strategies for application transformation. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.