Sunday, March 29, 2009

HP Advises Strategic View of Virtualization So Enterprises Can Dramatically Cut Costs, Gain Efficiency and Usher in Cloud Benefits

Transcript of a BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the business case and economic rationale for virtualization implementation and best practices.

****Access More HP Resources on Virtualization****

Virtualization has become more attractive to enterprises as they seek to better manage their resources, cut total costs, reduce energy consumption, and improve the agility of their data centers and IT operations. But, virtualization is more than just installing hypervisors. The effects and impacts of virtualization cut across many aspects of IT operations, and the complexity of managing virtualized IT runtime environments can easily slip out of control.

In this podcast, we're going to examine how virtualization can be applied as a larger process and a managed IT undertaking with sufficient tools for governance that allow for rapid, but reasoned, virtualization adoption. We'll show how the proper level of planning and management can more directly assure a substantive economic return on the investments enterprises are making through virtualization.

The goal is to do virtualization right and to be able to scale the use of virtualization in terms of numbers of instances. We also want to extend virtualization from hardware to infrastructure, data, and application support, all with security, control, visibility, and lower risk, and while also helping to make the financial rationale ironclad.

To help provide an in-depth look at how virtualization best practices make for the best economic outcome, we're joined by Bob Meyer, the worldwide virtualization lead in Hewlett-Packard’s (HP) Technology Solutions Group (TSG). Welcome to the show, Bob.

Bob Meyer: Thank you very much, Dana.

Gardner: Virtualization is really becoming quite prominent, and we're even seeing instances now where the tough economic climate is accelerating the use and adoption of virtualization. This, of course, presents a number of challenges.

First, could you provide some insight, from HP’s perspective, into how you see virtualization being used in the market now, and how that perhaps has shifted over the past six months or so?

Meyer: When we talk about virtualization -- obviously it’s been around for quite a long time -- it's typically the virtualization of Windows servers that people think of first. For a couple of years now, that’s been the hot value proposition within IT.

The allure there is that when you consider the percentage of budget spent on data center facilities, hardware, and IT operations management, virtualization can have a profound effect on all of these areas.

Moving off the fence

For the last couple of years, people have realized the value in terms of how it can help consolidate servers or how it can help do such things as backup and recovery faster. But, now with the economy taking a turn for the worse, anyone who was on the fence, who wasn’t sure, who didn’t have a lot of experience with it, is now rushing headlong into virtualization. They realize that it touches so many areas of their budget, it just seems to be a logical thing to do in order for them to survive these economic times and come out a leaner, more efficient IT organization.

The change that we see is that previously virtualization was for very targeted use and now it’s gone to virtualization everywhere, for everything -- "How much can I put in and how fast can I put it in."

Gardner: When you move from a tactical orientation to exploit virtualization at this more strategic level, that requires different planning and different methodologies. Tell us what that sort of shift should mean.

Meyer: To be clear, we're not just talking about virtualization of servers. We're talking about virtualizing your infrastructure -- servers, storage, network, and even clients on the desktop. People talk about going headlong into virtualization. It has the potential to change everything within IT and the way IT provides services.

The potential is that you can move your infrastructure around much faster. You can provision a new server in minutes, as opposed to a few days. You can move a virtual machine (VM) from one server to another much faster than you could before.

When you move that into a production environment, if you're talking about it from a services context, a server usually has storage attached to it. It has an IP address, and just because you can move the server around faster doesn’t mean that the IP address gets provisioned any faster or the storage gets attached any faster.

So, when you start moving headlong into virtualization in a production environment, you have to realize that now these are part of services. The business can be affected negatively, if the virtualized infrastructure is managed incompletely or managed outside the norms that you have set up for best practices.
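To make that concrete, here is a minimal, hypothetical Python sketch of the kind of checklist Meyer is describing. None of the function or resource names correspond to a real product API; the point is simply that a production VM move also involves storage reachability, IP addressing, and a change record, not just the hypervisor-level migration.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    ip_address: str          # must remain valid (or be re-provisioned) on the target network
    storage_volumes: list    # volumes/LUNs that must be reachable from the target host
    host: str

# --- Stubs standing in for real network and storage management systems ---
def storage_reachable(volumes, host):
    return True

def ip_valid_on(ip, host):
    return True

def allocate_ip(host):
    return "10.0.0.100"

def move_vm(vm, target_host, change_log):
    """Hypothetical move workflow: the hypervisor migration is only the last step."""
    # 1. Record the change so the move goes through the same governance
    #    (change advisory board, audit trail) a physical move would.
    change_log.append(f"Move {vm.name}: {vm.host} -> {target_host}")

    # 2. Confirm the target host can actually see the VM's storage.
    if not storage_reachable(vm.storage_volumes, target_host):
        raise RuntimeError(f"Storage for {vm.name} is not presented to {target_host}")

    # 3. Confirm the IP address is valid on the target network segment,
    #    or allocate a new one and update whatever depends on it.
    if not ip_valid_on(vm.ip_address, target_host):
        vm.ip_address = allocate_ip(target_host)

    # 4. Only now perform the hypervisor-level migration.
    vm.host = target_host

changes = []
vm = VirtualMachine("erp-app-01", "10.0.1.15", ["lun-42"], host="host-a")
move_vm(vm, "host-b", changes)
print(changes)
```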

Gardner: I guess it also makes sense that the traditional IT systems-management approaches also need to adjust. If you had standalone stacks, each application with its own underlying platform, physical server, and directly attached data and little bits of middleware for integration, you had a certain setup for managing that. What’s different about managing the virtualized environments, as you are describing them?

Meyer: There are a couple of challenges. First of all, one of the blessings of virtualization is its speed. That’s also a curse in this case, because in traditional IT environments, you set up things like a change advisory board. If you made a change to a server -- if you moved it, if you had to move to a new network segment, or if you had to change storage -- you would put it through that board. There were procedures and processes that people followed, and approvals they received.

In virtualization, because it’s so easy to move things around and it can be done so quickly, the tendency is for people to say, "Okay, I'm going to ignore that best practice, that governance, and I am going to just do what I do best, which is move the server around quickly and move the storage around." That’s starting to cause all sorts of IT issues.

The other issue is not just the mobility of the infrastructure, but also the visibility of that infrastructure. A lot of the tools that many people have in place today can manage either physical or virtual environments, but not both. What you're heading for when that’s the case is setting up dual management structures. That’s never good for IT. You're just heading for service outages and disruptions when you go in that direction.

Gardner: It sounds like some safeguards are required for managing and allowing automation to do what it does well, but without it spinning out of control and people firing off instances of applications and getting into some significant waste or under-utilization, when in fact that’s what you are trying to avoid.

Shifting the cost

Meyer: Certainly. A lot of what we're seeing is the initial gains of virtualization. People came in and they saw these initial gains in server consolidation. They went from, let’s say, 12 physical boxes down to one physical box with 12 virtual servers. The initial gains get wiped out after a while, and people push the cost from hardware to management, because it becomes harder to manage these dual infrastructures.

Typically, big IT projects get a lot of the visibility. The initial virtualization projects probably get handled with proper procedures. It's as you come back to day-to-day operations of the virtualized environment that you start to lose the headway you gained originally.

That might be from non-optimized infrastructure that is not made to move as fast or to be instrumented as fast as virtualization allows it to be. It could be from management tools that don’t support virtual and physical environments, as we mentioned before. It can even be governance. It can be the will of the IT organization to make sure that they adopt standards that they have in place in this new world of moving and changing environments.

Gardner: For a lot of organizations, with many IT aspects or approaches these days, security and compliance need to be brought into the picture. What does this flexible virtualization capability mean, if you're in a business that has strict compliance and security oversights?

Meyer: Again, it produces its own set of challenges for the reasons similar to what we talked about before. Compliance has many different facets. If you have a service infrastructure that’s in compliance today in a physical environment, it might take days to move that around, and to change the components. People are likely to have much more visibility. That window of change tends to take a lot longer.

With virtualization, because of the speed, the mobility, and the ease of moving things around, things can come out of compliance faster. They could be out of regulatory compliance. They could be out of license compliance, because it’s much easier to spin up new instances of virtual machines and much harder to track them.

So, the same blessing of speed and mobility and ease of instrumentation can take a hit on the compliance and security side as well. It’s harder to keep up with patches. A lot of people do virtual machines through images. They'll create a virtual machine image, and once that image is created, that becomes a static image. You deploy it on one VM and then another and then another. Over time, patches come out, and those patches might not be deployed to that particular image. People are starting to see problems there as well.
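The image-drift problem is easy to picture with a small sketch. The following Python example uses invented patch identifiers and image names; it just shows that a static golden image deployed repeatedly will, over time, lack patches that the current baseline requires.

```python
# A minimal sketch of detecting patch drift in static VM images.
# The patch identifiers and image names below are invented examples.

required_baseline = {"KB001", "KB002", "KB003", "KB004"}  # patches required today

vm_images = {
    "web-template-2008-11": {"KB001", "KB002"},             # built before KB003/KB004 shipped
    "app-template-2009-02": {"KB001", "KB002", "KB003"},
    "db-template-2009-03":  required_baseline,               # up to date
}

for image, installed in vm_images.items():
    missing = required_baseline - installed
    if missing:
        print(f"{image}: missing patches {sorted(missing)}")
    else:
        print(f"{image}: compliant")
```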

Gardner: Just to throw another log on the fire of why this is a complex undertaking, we're probably going to be dealing with hybrid environments, where we have multiple technologies, and multiple types of hypervisors. As you pointed out, the use of virtualization is creeping up beyond servers, through infrastructure storage, and so forth. What’s the hurdle, when it comes to having these mixed and hybrid environments?

Mixed environments are the future

Meyer: That’s a reality that we are going to be dealing with from here on out. Everybody will have a mix of virtual and physical environments. That’s not a technology fad. That’s just a fact. There will be services -- cloud computing, for example -- that will extend that model.

The reality is that the world we live in is both physical and virtual, when it comes to infrastructure. When you start looking at it from that perspective, you have to ask, "Do I have the right solutions in place from an infrastructure perspective, from a management perspective, and from a process perspective to accommodate both environments?"

The danger is having parallel management structures within IT. It does no one any good. If you look at it as a means to an end, which virtualization is, the end of all this is more agile and cost-effective services and more agile and cost-effective use of infrastructure.

Just putting a hypervisor on a machine doesn’t necessarily get you virtualization returns. It allows you to virtualize, but it has to be put on the construct of what you're trying to do. You're trying to provide IT-enabled services for the business at better economies of scale, better agility, and low risk, and that’s the construct that we have to look at.

Gardner: So, if we have a strategic requirement set to prevent some of these blind alleys and pitfalls, then we need to have a strategic process and management overview. This is something that cuts across hardware, software, management, professional conduct and culture, and organization. How do you get started? How do you get to the right level of doing this with that sort of completeness in mind?

Meyer: That’s the problem in a nutshell right there. The way virtualization tends to come in is unique, because it's a revolutionary technology that has the potential to change everything. But, because of the way it comes in, people tend to look at it from a bottom-up perspective. They tend to look at it from, "I have this hypervisor. This hypervisor enables me to do virtual machines. I will manage the hypervisor and the virtual machines, differently than other technologies."

Service-oriented architecture (SOA) and Web services, by contrast, aren't able to creep into an IT environment. They have to come from a top-down perspective; at a minimum, somebody has to mandate that the architecture be implemented. So, there's more of a strategy involved.

When we look back at virtualization, the technology is no different than other technologies in the sense that it has to be managed from a strategic perspective. You have to take that top-down look and say, "What does this do for me and for the business?"

At HP, this is where organizations come to us and say, "We have virtualization in our test and development environment, and we are looking to move it into production. What’s the best way to do that?" We come in and assess what they are looking to do, help them roll that up into what’s the bigger picture, what are they trying to get out of this today, and what do they want to get out of this a year from now.

We map out what technologies are in place, how to mix that, how to take the hypervisor environment and make that part of the overall operational management structure, before they move that into the operational environment.

If somebody's already using it and has a number of applications or services they're ready to virtualize, they're already experiencing some of the pain. So, that’s a little bit more prescriptive. Somebody will come in and say, "I'm experiencing this. I'm seeing my management cost rise." Or, "When a service goes down, it’s harder for me to pinpoint where it is, because my infrastructure is more complex."

This is where typically we'll have a spot engagement that then leads to a broader conversation to say, "Let’s fix your pain today, but let’s look at it in the broader context of a service." We have a set of services to do that.

There's a third alternative as well. Sometimes people come to us. They realize the value of virtualization, but they also realize that they don’t have the expertise in house or they don’t have the time to develop that longer-term strategy for themselves. They can also come to HP for outsourcing that virtual and physical environment.

Gardner: It sounds as if the strategic approach to virtualization is similar to what we've encountered in the past, when we've adopted new technologies. We have had to take the same approach of let’s not go just bottom up. Let’s look strategically. Can you offer some examples of how this compares to earlier IT initiatives and how taking that solution approach turned out to be the best cost-benefit approach?

Potential to change everything

Meyer: As an example from an earlier technology perhaps, I always look at client-server computing. When that came out, it had the potential to change everything. If you look at computing today, client-server really did change the way that applications and services were provided.

If you look at the nature of that technology, it required rewriting code and complete architectures. The nature of the technology lent itself to have that strategic view. It was deployed and, over time, a lot of the applications that people were using went to client-server and tier architecture. But, that was because the technology lent itself to that.

Virtualization, in that sense, is not very different. It is a game changer from a top-down perspective. The value you get when you take that top-down perspective is that you have the time to understand that, for example, "I have a set of management tools in place that allow me to monitor my servers, my storage, my network from a service perspective, and they will let me know whether my end users are getting the transaction rates they need on their Web services."

Gardner: Let me just explore that a little bit more. Back when client-server arrived, it wasn’t simply a matter of installing the application on the server and then installing the client on the PCs. Suddenly, there were impacts on the network. Then, there were impacts on the size of the server and capabilities in maintaining simultaneous connections, which required a different approach to the platform.

Then, of course, there was a need for extending this out to branch offices and for wider area networks to be involved. That had a whole other set of issues about performance across the wide area network, the speed of the network, and so on -- a ripple effect. Is that what we're seeing as well with virtualization?

Meyer: We do, absolutely. With the bottom-up approach, people look at it from a hypervisor and a server perspective. But, it really does touch everything that you do, and that everything is not just from a hardware perspective. It not only touches the server itself or the links between the server, the storage, and the network, but it also touches the management infrastructure and the client infrastructure.

So, even though it’s easier to deploy and it can seep in, it touches just about everything. That’s why we keep coming back to this notion of saying that you need to take a strategic look at it, because the more you deploy, the more it will have that ripple effect, as you call it, on all the other systems within IT, and not just a server and hypervisor.

Gardner: Tell us about HP’s history with virtualization. How long has HP been involved with it, and what’s its place and role in the market right now?

Meyer: HP has been doing virtualization for a long time. When most people think of virtualization, they tend to think of hypervisors, and they tend to think of it on x86 or Windows servers. That's really what has made virtualization so popular. But HP has had virtualization in its products for quite a while, and we've been doing virtualization on networks for quite a while. So, we are not newcomers to the game.

When it comes to where we play today, there are companies that are experts on the x86 world, and they're providing hypervisors. VMware, Citrix, and Microsoft are really good at what they do. HP doesn’t intend to do that.

Well-managed infrastructure

What we intend to do is take that hypervisor and make sure that it's part of a well-managed infrastructure, a well-managed service, and well-managed desktops, bringing virtualization into the IT ecosystem and making it part of your day-to-day management fabric.

That’s what we do with hardware that’s optimized out of the box for virtualization. You can wire your hardware once and, as you move your virtual components around, the hardware can take care of the rewiring, the IP network, the IP address, and the storage.

We handle that with IT operations and management offerings that have one solution to heterogeneously manage virtual and physical environments. We do that with client architecture, so that you can extend virtualization onto the desktops, secure the desktops, and take a lot of the cost out of managing them. If you look at what HP is about, it’s taking that hypervisor and actually delivering business value out of a virtual environment.

Gardner: Of course, HP is also in the server hardware business. Does that provide you a benefit in virtualization? Some conventional thinking might be, well gee, why would the hardware people want to increase utilization? Aren’t they in the business of selling more standalone servers?

Meyer: Certainly, we're in the business of selling hardware as well, but the benefit comes in many different areas. Actually, more people today are running virtualization on HP servers than any other platform out there. So, virtualization is an area that allows us to be more creative and more innovative in a server environment.

One of the hottest areas right now in server growth is in blade servers, where you have a bladed enclosure that’s made specifically for virtualization. It allows you to lower the cost of power and cooling, lower the floor space of the data center, and move your virtual components around much faster. Where we might see utilization rates decline in some areas, we're certainly seeing the uptake in others. So, it’s certainly an opportunity for us.

Gardner: So, helping your clients cut the total cost of computing is what’s going to keep you in the hardware business in the long run?

Meyer: That’s exactly right. If you look at the overall benefits, the immediate allure of virtualization is all about the cost and the agility of the service. If you look at it from the bigger picture, if you get virtualization right, and you get it right from a strategic perspective, that’s when you start to feel those gains that we were talking about.

Data centers are very expensive. There's floor space in there. Power and cooling are very expensive. People are talking about that. If we help them get that right and knock the cost out of the infrastructure, the management, the client architectures, and even insourcing or outsourcing, that’s beneficial to everyone.

What are the payoffs?

Gardner: We've talked about how virtualization is a big deal in the market and how it’s being driven by economic factors. We've looked at how a tactical knee-jerk approach can lead to the opposite effect of higher expense and more complexity. We've recognized that taking an experienced, methodological, strategic approach makes a lot of sense.

Now, what is it that we can get, if we do this right? What are the payoffs? Do you have examples of companies you work with, or perhaps within HP itself? I know you guys have done an awful lot in the past several years to refine and improve your IT spend and efficiency. What are the payoffs if you do this right?

Meyer: There are a number of areas. You can look at it in terms of power and cooling. So right off the bat, you can save 50 percent of your power and cooling, if you get this right and get an infrastructure that works together.

From a client-computing perspective, you can save 30 percent off the cost of client computing, off the management of your client endpoints, if you virtualize the infrastructure.

If you look at outsourcing the infrastructure, the returns are manifold there, because you're not just taking out the cost of running it. You're also leveraging the combined knowledge of thousands and thousands of people who understand how to run that infrastructure from the experience of doing multiple outsourcing engagements.

So, we see particular gains in power and cooling, as I mentioned before, and the cost of administration. We'll see significant gains in server-admin ratios. We'll see a threefold increase in the number of servers that people can manage.

If you look across the specific examples, they really do touch a lot of the core areas that people are looking at today -- power and cooling, the cost of maintaining and instrumenting that infrastructure, and the cost of maintaining desktops.

Gardner: Doesn’t this help too, if you have multiple data centers and you're trying to whittle that down to a more efficient, smaller number? Does virtualization have a role in that?

The next generation

Meyer: Absolutely. Actually, throughout the data center, virtualization is one of those key technologies that help you get to that next generation of the consolidated data center. If you just look at it from a consolidation standpoint, a couple of years ago, people were happy to be consolidating five servers into one or six servers into one. When you get this right, and do it on the right hardware with the right services setup, 32 to 1 is not uncommon -- a 32-to-1 consolidation rate.

If you think about what that equates to, that’s 32 fewer physical servers, less floor space, less power and cooling. So, when you get it right, you go from, "Yes, I can consolidate and I can consolidate it five to one, six to one or 12 to one" to "I'm consolidating, and I am really having a big impact on the business, because I'm consolidating at 24 to 1 or 32 to 1 ratios." That’s really where the payoff starts coming in.
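The arithmetic behind those ratios is worth spelling out. The figures in this short Python sketch (server count, per-server power draw, energy price) are illustrative assumptions, not HP numbers, but they show how the savings scale as the consolidation ratio rises.

```python
# Illustrative consolidation math -- the inputs are assumptions, not HP figures.
physical_servers = 320          # servers before consolidation
watts_per_server = 400          # average draw per physical server
cost_per_kwh = 0.10             # US dollars

for ratio in (6, 12, 32):
    hosts_after = -(-physical_servers // ratio)          # ceiling division
    servers_removed = physical_servers - hosts_after
    kwh_per_year = servers_removed * watts_per_server * 24 * 365 / 1000
    print(f"{ratio:>2}:1 consolidation -> {hosts_after:3d} hosts remain, "
          f"{servers_removed:3d} removed, ~${kwh_per_year * cost_per_kwh:,.0f}/yr in power")
```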

Gardner: I suppose that while you are consolidating, you might as well look at what applications on which platforms are going to be sunset. So, there's a modernization impact. Virtualization helps you move certain apps out to pasture, maybe reusing the logic and the data in the future. What’s the modernization impact that virtualization can provide?

Meyer: Virtualization is absolutely an enabler of that in a number of different ways. Sometimes, when people are modernizing apps, they go to our outsourcing business and say, "I'm modernizing an application and I need some compute capacity. Do you have it?" They can tap into our compute capacity in a virtual way to provide a service, while they're moving, updating, or modernizing an architecture, and the end user doesn’t notice the difference. There's a continuity aspect there, as they provide the application.

There are also the backup and recovery aspects of it. There are a lot of safeguards that come in while you are modernizing applications. In this case, virtualization is an enabler for that. It allows that move to happen. Then, as that application moves onto more up-to-date or more modern architecture, it allows you to quickly scale up or scale down the capacity of that application. Again, the end user experience isn't diminished.

Gardner: So, these days when we are not just dealing with the dollars-and-cents impacts of the economy, we are also looking at dynamic business environments, where there are mergers, acquisitions, bankruptcies, and certain departments being sloughed off, sold, or liquidated. It sounds like the strategic approach to virtualization has a business outcome in that environment too.

Meyer: That’s really where the sort of the flip side of virtualization comes in -- the automation side. Virtualization allows you to quickly spin up capacity and do a series of other things, but automation allows you to do that at scale.

If you have a business that needs to change seasonally, daily, weekly, or at certain times, you need to make much more effective use of that compute capacity. We talk a lot about cost, but it’s automation that makes it cost effective and agile at the same time. It allows you to take a prescribed set of tasks related to virtualization, whether that’s moving a workload, updating a new service, or updating an entire stack and make that happen much faster and at much lower cost, as well.

Gardner: One last area, Bob. I want to get into the benefits of managed virtualization as insurance for the future. You mentioned cloud computing a little earlier. If you do this properly, you start moving toward what we call on-premises or private clouds. You create a fabric of storage, or a fabric of application support, or a fabric of platform infrastructure support. That’s where we get into some of those even larger economic benefits.

This is a vision for many people now, but doing virtualization right seems to me like a precursor to being able to move toward that. You might even be able to start employing SOA more liberally, and then take advantage of external clouds, and there is a whole vision around that. Am I correct in assuming that virtualization is an initial pillar to manage, before you're able to start realizing any of that vision?

Meyer: Certainly. The focus right now is, "How does it save me money?" But, the longer-term benefit, the added benefit, is that, at some point the economy will turn better, as it always does. That will allow you to expand your services and really look at some of the newer ways to offer services. We mentioned cloud computing before. It will be about coming out of this downturn more agile, more adaptable, and more optimized.

No matter where your services are going -- whether you're going to look at cloud computing or enacting SOA now or in the near future -- it has that longer term benefit of saying, "It helps me now, but it really sets me up for success later."

We fundamentally believe, and CIOs have told us a number of times that virtualization will set them up for long-term success. They believe it’s one of those fundamental technologies that will separate their company as winners going into any economic upturn.

Gardner: So, making virtualization a core competency, sooner rather than later, puts you at an advantage across a number of levels, but also over a longer period of time?

Meyer: Yes. Right now everybody is reacting to an economic climate. Those CIOs who are acting with foresight, looking ahead and saying, "Where will this take me," are the ones who are going to be successful as opposed to the people who are just reacting to the current environment and looking to cut and slash. Virtualization has a couple of benefits that allow you to save and optimize, but also sets you up for that -- to boomerang you whenever the economic recovery comes.

Gardner: Well, great. We've been talking with Bob Meyer, the worldwide virtualization lead in HP’s Technology Solutions Group. We've been examining the effects and impacts of virtualization adoption and how to produce the best businesses and financial outcomes from your virtualization initiatives. I want to thank you, Bob, for joining us. It's been a very interesting discussion.

Meyer: Thank you for the opportunity.

****Access More HP Resources on Virtualization****

Gardner: We also want to thank our sponsor, Hewlett-Packard, for supporting this series of podcasts. This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to BriefingsDirect. Thanks and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast on virtualization strategies and best practices with Bob Meyer, HP's worldwide virtualization lead. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Sunday, March 22, 2009

BriefingsDirect Analysts List Top 5 Ways to Cut Enterprise IT Costs Without Impacting Performance in Economic Downturn

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 38. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of ActiveVOS, the visual orchestration system. We also come to you through the support of TIBCO Software.

Our topic this week of March 9, 2009 centers on the economics of IT. It's clear that the financial crisis has spawned a yawning global recession on a scale and at a velocity unlike anything seen since the 1930s. Yet, our businesses and our economy function much differently than they did in the 1930s. The large and intrinsic role of information technology (IT) is but one of the major differences. In fact, we haven't had a downturn like this since the advent of widespread IT.

So, how does IT adapt and adjust to the downturn? This is all virgin territory. Is IT to play a defensive role in helping to slash costs and reduce its own financial burden on the enterprise, as well as to play a role in propelling productivity forward despite these wrenching contractions?

Or, does IT help most on the offensive, in transforming businesses, or playing a larger role in support of business goals, with the larger IT budget and responsibility to go along with that? Does IT lead the way on how companies remake themselves and reinvent themselves during and after such an economic tumult?

We're asking our panel today to list the top five ways that IT can help reduce costs, while retaining full business functionality -- or perhaps even adding business functionality. These are the top five best ways that IT can help play economic defense.

After we talk about defense, we're going to talk about offense. How does IT play the agent of change in how businesses operate and how they provide high value with high productivity to their entirely new customer base?

Join me in welcoming our analyst guests this week. Joe McKendrick, independent IT analyst and prolific blogger on service-oriented architecture (SOA), business intelligence (BI), and other major IT topics. Welcome back, Joe.

Joe McKendrick: Thanks, Dana. Glad to be here.

Gardner: We're also joined by Brad Shimmin, principal analyst at Current Analysis.

Brad Shimmin: Hello, Dana.

Gardner: Also, JP Morgenthal, independent analyst and IT consultant. Hi, JP.

JP Morgenthal: Hi. Thanks.

Gardner: We're also joined by Dave Kelly, founder and president of Upside Research, who joins us for the first time. Welcome, Dave.

Dave Kelly: Hey, Dana. Thanks for having me. It's great to be here.

Gardner: Let's go first to Joe McKendrick at the top of the list. Joe, let's hear your five ways that IT can help cut costs in enterprises during our tough times.

Previous downturns

McKendrick: First of all, I just want to comment. You said this is virgin territory for IT in terms of managing through downturns. We've seen some fairly significant downturns in our economy in the past -- the 1981-82 period, the 1990-91 period, and notably 2001-2002. Those were all major turning points for IT, and we can get into that later. I'll give you my five recommendations, and they're all things that have been buzzing around the industry.

First, SOA is a solution, and I think SOA is alive and well and thriving. SOA promotes reuse and developer productivity. SOA also provides a way to avoid major upgrades or the requirement for major initiatives in enterprise systems such as enterprise resource planning (ERP).

Second, virtualize all you can. Virtualization offers a method of consolidation. You can take all those large server rooms -- and some companies have thousands of servers -- and consolidate into more centralized systems. Virtualization paves the path to do that.

Third, cloud computing, of course. Cloud offers a way to tap into new sources of IT processing, applications, or IT data and allows you to pay for those new capabilities incrementally rather than making large capital investments.

The fourth is open source -- look to open-source solutions. There are open-source solutions all the way up the IT stack, from the operating system to middleware to applications. Open source provides a way to, if not replace your more commercial proprietary systems, then at least implement new initiatives under the budget radar, so to speak. You don't need to get budget approval to establish or begin new initiatives.

Lastly, look at the Enterprise 2.0 space. Enterprise 2.0 offers an incredible way to collaborate and to tap into the intellectual capital throughout your organization. It offers a way to bring a lot of thinking and a lot of brainpower together to tackle problems.

Gardner: It sounds like you feel that IT has a lot of the tools necessary and a lot of the process change necessary. It's simply a matter of execution at this point.

McKendrick: Absolutely. All the ingredients are there. I've said before in this podcast that I know of startup companies that have invested less than $100 in IT infrastructure, thanks to initiatives such as cloud computing and open source. Other methodologies weigh in there as well.

Gardner: All right. Let's go to bachelor number two, Brad Shimmin. If you're dating IT efficiency, how are you going to get them off the mark?

Provide a wide pasture

Shimmin: Thanks, Dana. It's funny. Everything I have in my little list here really riffs off of the excellent underlying fundamentals Joe was talking about there. I hope what I'm going to give you guys are some not-too-obvious uses of the stuff that Joe's been talking about.

My first recommendation is to give your users a really wide pasture. There is an old saying that if you want to mend fewer fences, have a bigger field for your cattle to live in. I really believe that's true for IT.

You can see that in some experiments that have been going on with the whole BYOC -- Bring Your Own Computer -- programs that folks like Citrix and Microsoft have been engaging in. They give users a stipend to pick up their own notebook computer, bring that to work, and use a virtualized instance of their work environment on top of that computer.

That means IT no longer has to manage the device itself. They now just manage the virtual image that resides on that machine. The trend we've seen with mobile devices -- users buying and using their own devices inside IT -- will extend to desktops and laptops.

I'd just like to add that IT should forget about transparency and strive for IT participation. The days of the ivory tower with top-down knowledge held within secret golden keys behind locked doors within IT are gone. You have to have some faith in your users to manage their own environments and to take care of their own equipment, something they're more likely to do when it's their own and not the company's.

Gardner: So, a bit more like the bazaar, when it comes to how IT implements and operates?

Shimmin: Absolutely. You can't have that top-down autocracy anymore and expect to be efficient. It doesn't encourage efficiency.

The second thing I'd suggest is don't build large software anymore. Buy small software. As Joe mentioned, SOA is well entrenched now within both the enterprise and IT. Right now, you can buy either software as a service (SaaS) or on-premise software that is open enough to connect with and work with other software packages. No longer do you need to build an entire monolithic application from the ground up.

A perfect example of that is something like PayPal. This is a service, but there are on-premise renditions of this kind of idea that allow you to build up a complete application without having to build the whole thing yourself. Using pre-built, smaller packages that are point solutions, like PayPal, lets you take advantage of their economies of scale and trade on the credibility they've developed, which is especially good for consumer-facing apps.

The third thing I'd suggest -- and this is in addition to that -- build inside but host outside. You shouldn't be afraid to build your own software, but you should be looking to host that software elsewhere.

A game changer

We've all seen both enterprises and enterprise IT vendors -- independent software vendors (ISVs) themselves like IBM, Oracle, and Microsoft, in particular -- leaping toward putting their software platforms on top of third-party cloud providers like Amazon EC2. That is the biggest game changer in everything we've been talking about here to date.

There's a vendor -- I can't say who it is, because they didn't tell me I could talk about it -- who is a cloud and on-premise vendor for collaboration software. They have their own data centers, and they've been moving toward shutting those down and moving into Amazon's EC2 environment. They went from bills of many thousands of dollars every month to literally the kind of bill you would get for cellphone service from Verizon or AT&T. It was a staggering saving they saw.

Gardner: A couple of hundred bucks a month?

Shimmin: Exactly. It's all because the economies are scaled through that shared environment.

The fourth thing I would want to say is "kill your email." You remember the "Kill your TV" bumper stickers we saw in the '90s. That should apply to email. It's seen its day and it really needs to go away. For every gigabyte you store, I think it's almost $500 per user per year, which is a lot of money.

If you're able to, cut that back by encouraging people to use alternatives to email, such as social networking tools. We're talking about IM, chat, and project group-sharing spaces, using tools like Yammer inside the enterprise, SharePoint obviously, Clearspace -- which has just been renamed SBS, for some strange reason -- and Google Apps. That kind of stuff cuts down on email.

I don't know if you guys saw this, but in January, IBM fixed Lotus Notes so it no longer stores duplicate emails. That cut down the amount of storage their users required by something like 70 percent, which is staggering.

Gardner: So what was that, eliminating the multiple versions of any email, right?

Shimmin: It was the attachments, yes. If there was a duplicate attachment, they used to store one copy for each note, instead of saying, "Hey, it's the same file, let's just store one instance of it in the database." Fixing stuff like that is just great, but it points to how big a problem it is to have everything running around in email.
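What Shimmin is describing is, in effect, content-addressed single-instance storage: keep one copy of each unique attachment, keyed by a hash of its bytes, and have every message hold only a reference. Here is a minimal Python sketch of the idea, with an in-memory dictionary standing in for the mail store.

```python
import hashlib

# Minimal sketch of single-instance attachment storage.
# Each unique attachment is stored once, keyed by a hash of its content;
# messages keep only a reference to that hash.

attachment_store = {}   # content hash -> attachment bytes (stand-in for a database)

def store_attachment(data: bytes) -> str:
    """Return a reference to the attachment, storing the bytes only once."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in attachment_store:       # duplicate content is not stored again
        attachment_store[digest] = data
    return digest

# The same 5 MB report mailed to 50 recipients is stored exactly once.
report = b"x" * (5 * 1024 * 1024)
refs = [store_attachment(report) for _ in range(50)]

stored_mb = sum(len(v) for v in attachment_store.values()) / (1024 * 1024)
naive_mb = 50 * len(report) / (1024 * 1024)
print(f"single-instance: {stored_mb:.0f} MB vs naive: {naive_mb:.0f} MB")
```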

Gardner: You might as well just be throwing coal up into the sky, right?

Shimmin: Exactly. To add to that, we should really turn off our printers. By employing software like wikis, blogs, and online collaboration tools from companies like Google and Zoho, we can get away from the notion of having to print everything. As we know, a typical organization kills 143 trees a year -- I think that was the number I heard -- which is a staggering amount of waste, and there's a lot of cost to that.

Gardner: Perhaps the new bumper sticker should be "Email killed."

Open, but not severe

Shimmin: Printing and email killed, right. My last suggestion would be, as Joe was saying, to really go open, but we don't have to be severe about it. We don't have to junk Windows to leverage some cost savings. The biggest place you can see savings right now is by getting off of the heavy license burden software. I'm going to pick on Office right now.

Gardner: How many others do you have to pick from?

Shimmin: It's the big, fat cow that needs to be sacrificed. Paying $500-800 a year per user for that stuff is quite a bit, and the hardware cost is staggering as well, especially if you are upgrading everyone to Vista. If you leave everyone on XP and adopt open-source solutions like OpenOffice and StarOffice, that will go a long, long way toward saving money.

The reason I'm down on printing is that the days are gone when we needed really professional, beautiful-looking documents that required a tremendous amount of formatting, where everything needed to be perfect within Microsoft Word, for example. What counts now is the information. It's the same for the 4,000-odd features in Excel. I'm sure none of us here has ever explored even a tenth of those.

Gardner: Maybe we should combine some of the things you and Joe have said. We should go to users and say, "You can use any word processor you want, but we're not going to give you any money," and see what they come up with.

Shimmin: You're going to find some users who require those 4,000 features and you are going to need to pay for that software, but giving everyone a mallet to crack a walnut is insane.

Gardner: I want to go back quickly to your email thing. Are you saying that we should stop using email for communication, or that we should just bring email out to a cloud provider and do away with the on-premises client-server email -- or both?

Shimmin: Thanks for saying that. Look at software or services like Microsoft Business Productivity Online Suite (BPOS). You can get Exchange Online now for something like $5 per month per user. That's pretty affordable. So, if you're going to use email, that's the way to go. You're talking about the same, or probably better, uptime than you're getting internally from a company like Microsoft with their 99.9 percent uptime that they're offering. It's not five 9s, but it's probably a lot better than what we have internally.

So, yeah. You should definitely explore that, if you're going to use email. In addition to that, if you can cut down on the importance of email within the organization by adopting software that allows users to move away from it as their central point of communication, that is going to save a lot of money as well.

Gardner: Or, they could just Twitter to each other and then put all the onus on the cost of maintaining all those Twitter servers.

Shimmin: Nobody wants to pay for that, though.

Gardner: Let's go to JP Morgenthal. I'm expecting "shock and awe" from you, JP. What's your top five?

Morgenthal: Shock and awe, with regard to my compadres' answers?

Gardner: Oh, yeah. Usually you have a good contrarian streak.

The devastation of open source

Morgenthal: I was biting my tongue, especially on the open source. I just went through an analysis where the answer was to go with JBoss on Linux and Apache. Even in that, I gave my alternative viewpoint that, from a cost perspective, you can't compare that stack to running WebSphere or WebLogic on Windows. Economically, if you compare the two, it doesn't make sense. I'm still irked by the devastation that open source has created upon the software industry as a whole.

Gardner: Alright. We can't just let that go. What do you mean, quickly?

Morgenthal: Actually, I blogged on this. Here's my analogy. Imagine tomorrow if Habitat for Humanity all of a sudden decided that it's going to build houses for wealthy people and then make money by charging maintenance and upkeep on the house. You have open source. The industry has been sacrificed for the ego and needs of a few against the whole of the industry and what it was creating.

Gardner: Okay. This is worth an entire episode. So, we're going to come back to this issue about open source. Is it good? Is it bad? Does it save money or not? But, for this show, let's stick to the top five ways to save IT, and we'll come back and do a whole show on open source.

Morgenthal: I'd like to, but I've got to give credit. I can't deny the point that as a whole, for businesses, again, those wealthy homeowners who are getting that Habitat for Humanity home, hey, it's a great deal. If somebody wants to dedicate their time to build you a free home, go for it, and then you can hire anybody you like to maintain that home. It's a gift from the gods.

Gardner: What are your top five?

Morgenthal: Vendor management is first. One thing I've been seeing a lot is how badly companies mismanage their vendor relationships. There is a lot of money in there, especially on the IT side -- telecom, software, and hardware. There's a lot of play, especially in an industry like this.

Get control over your vendor relationships. Stop letting these vendors run around, convincing end-users throughout your business that they should move in a particular direction or use a particular product. Force them to go through a set of gatekeepers and manage the access and the information they're bringing into the business. Make sure that it goes through an enterprise architecture group.

Gardner: It's a buyers market. You can negotiate. In fact, you can call them in and just say, "We want to scrap the old license and start new." Right?

Morgenthal: Well, there are legal boundaries to that, but certainly if they expect to have a long-term relationship with you through this downturn, they've got to play some ball.

With regard to outsourcing noncritical functions, I'll give you a great example where we combined an outsourced noncritical function with vendor management in a telco. Many companies have negotiated and managed their own Internet and telco communications facilities and capability. Today, there are so many more options for that.

It's a very complex area to navigate, and you should either hire a consultant who is an expert in the area to help you negotiate it, or you should look at a scenario where you take as much bandwidth as you use on an average basis and, when you need excess bandwidth, turn to the cloud. Go to the cloud for that excess bandwidth.

Gardner: Okay, number three.

Analyze utilization

Morgenthal: Utilization analysis. Many organizations don't have a good grasp on how much of their CPU, network, and bandwidth is utilized. There's a lot of open space in that utilization and it allows for compression. In compressing that utilization, you get back some overhead associated with that. That's a direct cost savings.
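A utilization analysis of the kind Morgenthal recommends can start very simply: gather average and peak utilization per server and flag the ones with enough headroom to be consolidation or right-sizing candidates. The sample figures and the 30 percent threshold in this Python sketch are assumptions for illustration.

```python
# Simple utilization analysis sketch -- sample figures and the 30% threshold are assumptions.

servers = [
    # (name, average CPU %, peak CPU %)
    ("erp-01",  12, 35),
    ("web-03",  55, 90),
    ("file-02",  8, 20),
    ("mail-01", 40, 75),
    ("test-07",  5, 15),
]

CANDIDATE_THRESHOLD = 30  # peak utilization below this suggests room to compress

candidates = [name for name, avg, peak in servers if peak < CANDIDATE_THRESHOLD]
fleet_avg = sum(avg for _, avg, _ in servers) / len(servers)

print(f"Fleet average CPU utilization: {fleet_avg:.0f}%")
print(f"Consolidation candidates (peak < {CANDIDATE_THRESHOLD}%): {candidates}")
```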

Another area that has been a big one for me is data quality. I've been trying to tell corporations for years that this is coming. When things are good, they've been able to push off the poor data quality issue, because they can rectify the situation by throwing bodies at it. But now they can't afford those bodies anymore. So, now they have bad data and they don't have the bodies to fix up the data on the front end.

That's a really bad rock-and-a-hard-place situation. If I were them, I'd get my house in order, invest the money, set it aside, get the data quality up, and allow myself to operate more effectively without requiring extra labor on the front end to clean up the data on the back end.

Finally, it's a great time to explore desktop alternatives, because Windows on the desktop has been a de facto standard and a great way to go -- when things are good. When you're trying to cut another half million, a million, or two million out of your budget, all those licenses and all that desktop support start to add up. They're small nickels and dimes that add up.

By looking at desktop alternatives, you may be able to find some solutions. A significant part of your workforce doesn't need all that capability and power. You can then look for different solutions like light-weight Linux or Ubuntu-type environments that provide just Web browsing and email, and maybe OpenOffice for some light-weight word processing. For a portion of your user base, it's all they need.

Gardner: Okay. Was that four or five?

Morgenthal: That's five -- vendor management, outsourcing, utilization analysis, data quality, and desktop alternatives.

Gardner: Excellent. Okay. Now, going to you, Dave Kelly, what's your top five?

Optimize, optimize, optimize

Kelly: Thanks, Dana, and it's great to come at the end. I don't always agree with JP, but I liked a lot of the points that he just made and they complement some of the ones that I am going to make, as well as the comments that Brad and Joe made.

My first point would be, optimize, optimize, optimize. There's no doubt that all the organizations, both on the business side and the IT side, are going to be doing more with less. I think we're going to be doing more with less than we have ever seen before, but that makes it a great opportunity to step back and look at specific systems and business processes.

You can start at the high level and go through business process management (BPM) type optimization and look at the business processes, but you can also just step it down a level. This addresses what some of the other analysts have said here. If you look at things like data-center optimization, there are tremendous opportunities for organizations to go into their existing data centers and IT processes to save money and defer capital investment.

You're talking about things like increasing the utilization of your storage systems. Many organizations run at anywhere from 40 to 50 percent storage utilization. If you can increase that and push off new investments in additional storage, you've got savings right there. The growth rate in storage over the past three to five years has been tremendous. This is a great opportunity for organizations to save money.

It also references what Brad said. You've got the same opportunity on the email side. If you look at your infrastructure on the data-center side or the storage side, you've got all this redundant data out there.

You can use applications. There are products from Symantec and other vendors that allow you to de-duplicate email systems and existing data. There are ways to reduce your backup footprint, so that you have fewer backup tapes required. Your processes will run quicker, with less maintenance and management. You can do single-instance archiving and data compression.

Gardner: Dave, it sounds like you're looking at some process re-engineering in the way that IT operates.

Kelly: You can certainly do that, but you don't even have to get to that process re-engineering aspect. You can just look at the existing processes and say, "How can I do individual components more efficiently?" I guess it is process re-engineering, but a lot of people associate process re-engineering with a large front-to-back analysis of the process. You can just look at specific automated tasks and see how you can do more with less in those tasks.

There are a lot of opportunities there in terms of data-center optimization, as well as other processes.

The next point is that while it's important to increase your IT efficiency, while reducing cost, don't forget about the people. Think about people power here. The most effective way to have an efficient IT organization is to have effective people in that IT organization.

Empower your people

There's a lot of stress going on in most companies these days. There are a lot of questions about where organizations and businesses are going. As an IT manager, one thing you need to do is make sure that your people are empowered and feel good about where they're at. They need to not hunker down and go into a siege mentality during these difficult times, even if budgets are getting cut and there's less opportunity for new systems or new technology challenges. They need to redirect that stress toward discovering how the IT organization can benefit the business and deal with these bad times.

You want to help motivate them through the crisis and work on a roadmap for better days, and map out, "Okay, after we get through this crisis, where are we going to be going from here?" There's an important opportunity in not forgetting about the people and trying to motivate them and provide a positive direction to use their energy and resources in.

Gardner: They don't want to get laid off these days, do they?

Kelly: No, they don't. Robert Half Technology recently surveyed 1,400 CIOs. It's pretty good news. About 80 percent of the CIOs expect to maintain current staffing levels through the first half of this year. That's not a very long lead-time at this point, but it's something. About 8 or 9 percent expected to actually hire. So everyone is cutting budgets, reducing capital expenditures, traveling less, trying to squeeze the money out of the budget, but maybe things will stay status quo for a while.

The third point echoes a little bit of what JP said on the vendor management side, as well as on using commercial software. Organizations use what they have or what they can get. Maybe it's a good time to step back and reevaluate the vendors -- that speaks to JP's vendor management idea -- and the infrastructure they have.

So, you may have investments in Oracle, IBM, or other platforms, and there may be opportunities to use free products that are bundled as part of those platforms, but that you may not be using.

For example, Oracle bundles Application Express, which is a rapid application development tool, as part of the database. I know organizations are using that to develop new applications. Instead of hiring consultants or staffing up, they're using existing people to use this free rapid application development tool to develop departmental applications or enterprise applications with this free platform that's provided as part of their infrastructure.

Of course, open source fits in here as well. I have a little question about the ability to absorb open source. Perhaps at the OpenOffice level, I think that's a great idea. At the infrastructure level and at the desktop level that can be a little bit more difficult.

The fourth point, and we've heard this before, is go green. Now is a great time to look at sustainability programs and analyze them in the context of your IT organization. Going green not only helps the environment, but it has a big impact as you look at power usage in your data center, with its cooling and air conditioning costs. You can save money right there in the IT budget and other budgets by going to virtualization and consolidating servers. Cutting any of those costs can also prevent future capital expenditures.

Again, as JP said about utilization, this is a great opportunity to look at how you're utilizing the different resources and how you can potentially cut your server cost.

Go to lunch

Last but not least, go to lunch. It's good to escape stressful environments, and it may be a good opportunity for IT to take the business stakeholders out to lunch, take a step back, and reevaluate priorities. So, clear the decks and realign priorities to the new economic landscape. Given changes in the business and in the way that services and products are selling, this may be a time to reevaluate the priorities of IT projects, look at those projects, and determine which ones are most critical.

You may be able to reprioritize projects, slow some down, delay deployments, or reduce service levels. The end effect here is allowing you to focus on the most business critical operations and applications and services. That gives a business the most opportunity to pull out of this economic dive, as well as a chance to slow down and push off projects that may have had longer-term benefits.

For example, you may be able to reduce service levels by lengthening the time the help desk has to respond to a request -- take it from two hours to four hours and give them more time. You can potentially reduce your staffing levels, while still serving the business in a reasonable way. Or, lengthen the time that IT has after a disaster to get systems back up and operating. Of course, you've got to check that with the business leaders and see if it's all right with them. So, those are my top five.

Gardner: Excellent, thank you. I agree that we're in a unique opportunity, because, for a number of companies, their load in the IT department is down, perhaps for the first time. We've been on a hockey-stick curve in many regards in the growth of data and the number of users, seats, and applications supported.

Companies aren't merging or acquiring right now. They're in a kind of stasis. So, if your load is down in terms of headcount, data load, and newer applications, now is an excellent time to make substantial strategic shifts in IT practices, as we've been describing, before that demand curve picks up again on the other side, which it's bound to do. We just don't know when.

As the last panelist to go, of course, I am going to have some redundancy on what's been said before, but my first point is, now is the time for harsh triage. It is time to go in and kill the waste by selectively dumping the old that doesn't work. It's easiest to do triage now, when you've got a great economic rationale to do it. People will actually listen to you, and not have too much ability to whine, cry and get their way.

IT really needs to find where it's carrying its weight. It needs to identify the apps that aren't in vigorous use or aren't adding value, and either kill them outright or modernize them. Extract the logic and use it in a process, but not at the cost of supporting the entire stack or a Unix server below it.

IT needs to identify the energy hogs and the maintenance black holes inside their infrastructure and all the inventory that they are supporting. That means ripping out the outdated hardware. Outdated hardware robs from the future in order to pay for a diminishing return in the past. So, it's a double whammy in terms of being nonproductive and expensive.

You don't really need to spend big money to conduct these purges. It's really looking for the low-hanging fruit and the obvious wasteful expenditures and practices. As others have said today, look for the obvious things that you're doing and never really gave much thought to. They're costing you money that you need for the new things that will help you grow. It's really applying a harsh cost-benefit analysis to what you're doing.

It would also make sense to reduce the number of development environments. If you're supporting 14 different tools and 5 major frameworks, it's really time to look at something like Eclipse, Microsoft, or OSGi and say, "Hey, we're going to really work toward more standardization around a handful of major development environments. We're going to look for more scripting and doing down and dirty web development when we can." That just makes more sense.

It's going to be harder to justify paying for small tribes of very highly qualified and important, but nonetheless not fully utilized, developers.

Look outside

It's also time to replace costly IT with the outside services and alternatives we have discussed. That would include, as Brad said, your email, your calendar, word processing, and some baseline productivity applications -- and considering where you can do them more cheaply.

I do like the idea of saying to people, "You still need to do email and you still need to do word processing, but we're no longer going to support it. Go find an alternative and see how that works." It might be an interesting experiment, at least at a small department level at first.

That means an emphasis on self-help, and in many aspects of IT it is possible. Empower the users. They want that power. They want to make choices. We don't need to just walk them down a blind path, tell them how to do mundane IT chores, and then pay an awful lot of money to have them doing it that way. Let's open up, as Brad said, the bazaar and stop being so much of a cathedral.

I suppose that means more use of SaaS and on-demand applications. They make particular sense in customer relationship management (CRM), the sales force, and in human resources, procurement, and payroll. It's really looking to outsource baseline functionality that's not differentiating your organization. It's the same for everybody. Find the outsourcers that have done it well and efficiently and get it outside of your own company. Kill it, if you're doing it internally.

It's really like acting as a startup. You want to have low capital expenditures. You want to have low recurring costs. You want to be flexible. You want to empower your users. A lot of organizations need to think more like a startup, even if they are an older, established multinational corporation.

My second point is to create a parallel IT function that leverages cloud attributes. This focuses again on what Joe mentioned, on the value of virtualization and focusing on the process and workflows -- not getting caught up in how you do it, but what it ends up doing for you.

The constituent parts aren't as important as the end result. That means looking to standardize hardware, even if it's on-premises, and using grid, cloud, and modernized and consolidated data center utility best practices. Again, it's leveraging a lot of virtualization on standard low-cost hardware, and then focusing the value at a higher abstraction, at the process level.

It's standardizing more use of appliances and looking at open-source software. I also have to be a little bit of a contrarian to JP. I do think there's a role for open source in these operations, but we are going to save that for another day. That's a good topic.

This is another way of saying doing SOA, doing it on-premises, using cloud and compute fabric alternatives, and trying to look outside for where other people have created cloud environments that are also very efficient for those baseline functions that don't differentiate. That creates a parallel function in IT, but also looks outside.

I agree wholeheartedly with what's been said earlier about the client. It's time to cheapen, simplify, and mobilize the client tier. That means you can use mobile devices, netbooks, and smart phones to do more activities, to connect to back-end data and application sets and Web applications.

Focus on the server

It's time to stop spending money on the client. Spend it more on the server and get a higher return on that investment. That includes the use of virtual desktop infrastructure (VDI) and desktop-as-a-service (DaaS) types of activities. It means exploring Linux as an operating environment on the desktop, where that makes sense, and look at what the end users are doing with these clients.

If they're at a help desk and they're all using three or four applications in a browser, they don't need to have the equivalent of a supercomputer that's got the latest and greatest of everything. It's time to leverage browser-only workers. Find workers who can exist using only browsers, and either give them low-cost hardware that's maybe three or four years old and can support a browser well, or deliver that browser as an application through VDI. That's very possible as well.

It means centralizing more IT support, security, and governance at the data center. It even means reducing the number of data centers, because given the way networks are operating, we can do this across a wide area network (WAN). We can use acceleration, remote branch technologies, and virtual private networks (VPNs). We can deliver these applications to workers across continents and even across the globe, because we're not dealing with B2C, we're dealing with B2E -- that is, business to your own employees.

You can support the scale with fewer data centers and lower cost clients. It's a way to save a lot of money. Again, you're going to act like a modern startup. You're going to build the company based on what your needs are, not on what IT was 15 years ago.

My fourth point is BI everywhere. Mine the value of the data you've got already and the data you're going to create. Put in the means to assess where your IT spend makes sense. This is BI internal to IT -- BI for IT -- but also IT enabling BI across more aspects of the business at large.

Know what the world is doing around you and what your supply chain is up to. It's time to join more types of data into your BI activities, not just your internal data. You might be able to actually rent data from a supplier, a partner or a third-party, bring that third-party data in, do a join, do your analysis, and then walk away. Then, maybe do it again in six months.
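To make that "rent, join, analyze, walk away" idea concrete, here's a minimal sketch in Python with pandas. The file names, column names, and metric are purely illustrative assumptions, not anything from the discussion:

```python
import pandas as pd

# Hypothetical internal sales data and rented third-party market data.
internal = pd.read_csv("internal_sales.csv")      # columns: region, sku, units_sold
rented = pd.read_csv("partner_market_data.csv")   # columns: region, sku, market_units

# Join the rented data to the internal data on shared keys.
combined = internal.merge(rented, on=["region", "sku"], how="inner")

# One-off analysis: estimated share of the market captured per region.
combined["share"] = combined["units_sold"] / combined["market_units"]
print(combined.groupby("region")["share"].mean().sort_values(ascending=False))

# "Walk away": the rented data simply isn't retained after the analysis,
# and the exercise can be repeated in six months if it proves useful.
```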

It's time to think about BI as leveraging IT to gain the analysis and insights, but looking in all directions -- internal, external, and within IT, but also across extended enterprise processes.

It's also good to start considering tapping social networks for their data, user graph data, and metadata, and using that as well for analysis. There are more and more people putting more and more information about themselves, their activities, and their preferences into these social networks.

That's a business asset, as far as I'm concerned. Your business should start leveraging the BI that's available at some of these social networks and join that with how you are looking at data from your internal business activities.

Take IT to the board level

Last, but not least, it's time for IT to be elevated to the board level. It means that the IT executive should be at the highest level of the business in terms of decisions and strategy. The best way for IT to help companies is to know what those companies are facing strategically as soon as they're facing it, and to bring knowledge of IT-based solutions to the rest of the board. IT can be used much more strategically at that level.

IT should be used for transformation and problem solving at the innovation and business-strategy level, not as an afterthought, not as a means to an end, but actually as part of what ends should be accomplished, and then focusing on the means.

That is, again, acting like a startup. If you talk to any startup company, they see IT as an important aspect of how they are going to create value, go to market cheaply, and behave as an agile entity.

That's the end of my five. Let's turn the discussion, for our last 10 minutes, to how IT can go on the offense. I'll go first on this one. I think it's time to go green field. It's time to look at software as a differentiator.

The reason I bring this up is something Marc Andreessen said. He's starting a venture capital fund with Ben Horowitz; they were both at Opsware together and then at HP, after the sale. Andreessen told Charlie Rose recently that there is a tragic opportunity in our current economic environment. A number of companies are going to go under, or they're going to be severely challenged. Let's take a bank, for example.

A bank is going to perhaps be in a situation where its assets are outstripped by its liabilities and there is no way out. But, using software, startups, and third-party services, as Andreessen said, you can start an Internet bank. It's not that difficult.

You want to be able to collect money, lend it out with low risk at a sufficient return, and, at the end of the day, have a balance sheet that stands on its own two feet. Creating an Internet bank, using software and using services combined from someone like PayPal and others makes a tremendous amount of sense, but that's just an example.

There are many other industries where, if the old way of doing it is defunct, it's time to come in and create an alternative. Internet software-based organizations can go out and find new business where the old companies have gone under. It doesn't necessarily mean it's all about the software, but the business value is in how you coordinate buyers, sellers, and efficiencies using software.

Take something like Zipcar. They're not in the automotive business, but they certainly allow people to gain the use of automobiles at a low price point.

I'd like to throw out to the crowd this idea of going software, going green field, creating Internet alternatives to traditional older companies. Who has any thoughts about that?

Morgenthal: On the surface, there are some really good concepts there. What we need is for state and federal governance and laws to catch up to these opportunities. A lot of people are unaware of the potential downside risks of letting data out of your hands and into a third party's hands. It's questionable whether it's protected under the Fourth Amendment, once you do that.

There are still some security risks that have yet to be addressed appropriately. So, we see some potential there for the future. I don't know what the future would look like. I just think that there is some definite required maturity that needs to occur.

Gardner: So, it's okay to act like a startup, but you still need to act like a grownup.

Morgenthal: Right.

Gardner: Any other thoughts on this notion of opportunity from tragedy in the business, and that IT is an important aspect of doing that?

Evolving enterprises

McKendrick: I agree with what you're saying entirely. You mentioned on a couple of occasions that large enterprises need to act like small businesses. About 20 years ago, the writer John Naisbitt was dead-on with the prediction that large enterprises are evolving into what he called confederations of entrepreneurs. Large companies need to think more entrepreneurially.

A part of that thinking will be not the splitting up, but the breaking down, of large enterprises into more entrepreneurial units. IT will facilitate that with the Enterprise 2.0 and Web 2.0 paradigm, where end users can shape their own destiny. You can build a business in the cloud. There is a need for architecture -- and I preach that a lot -- but smaller departments of large corporations can set their own IT direction as well, given what's now available.

Gardner: We're almost out of time. Any other thoughts about how IT is on the offensive, rather than just the defensive in terms of helping companies weather the downturn?

Shimmin: I agree with what you guys have been saying about how companies can behave like startups. I'd like to turn it around a little bit and suggest that a small company can behave like a large company. If you have a data center investment already established, you shouldn't be bulldozing it tomorrow to save money. Perhaps there's money in "them thar hills" that can be had.

Look at the technologies we have today, the cloud-enablement companies that are springing up left and right, and the ability to federate information and use loosely coupled access methods to transact between applications. There's no reason the whole idea we saw with SETI@home and the protein-folding projects can't be leveraged within the company's own firewalls and data centers and then externalized. Maybe it's storage, maybe it's services, maybe it's an application or service the company has created that can be leveraged to make money. It's like the idea of a house putting in a windmill and then selling electricity back to the power grid.

Gardner: Last thoughts?

Kelly: I would add one or two quick points here. Going on the offense, one opportunity is to take advantage of the slowdown and look at those business processes that you haven't gotten to in a long time, because things have been so hectic over the past couple of years. It may be a great time to reengineer those using some of the new technologies that are out there, going to the cloud, doing some of the things we've already talked about.

The other option here is that it may be a good time to accelerate new technology adoption. Move to YouTube for video-based training, or use Amazon's Kindle for distributing repair manuals electronically. Look at what the options are out there that might allow you to remake some of these processes using new technologies and allow you to profit and perhaps even grow the business during these tough economic times.

Gardner: So economic pain becomes the mother of all invention.

Kelly: Exactly.

McKendrick: We've seen it happen before. Back in 1981-1982, we saw the PC revolution. The economy was in just as bad a shape as it is now, if not worse. Unemployment was running close to 10 percent. The PC revolution just took off and boomed during that time. A whole new paradigm evolved.

Gardner: Very good. Well, I would like to thank our panelists this week. We've been joined by Joe McKendrick, independent IT analyst and prolific blogger. Also, Brad Shimmin, principal analyst at Current Analysis; JP Morgenthal, independent analyst and IT consultant; and Dave Kelly, founder and president of Upside Research. Thanks to all. I think we've come up with a lot of very important and quite valuable insights and suggestions.

I'd also like to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition podcast series, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as the support of TIBCO Software.

This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Webinar: Modernization Pulls New Value From Legacy and Client-Server Enterprise Applications

Transcript of a BriefingsDirect webinar with David McFarlane and Adam Markey on the economic and productivity advantages from application modernization.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.

Announcer: Hello, and welcome to a special BriefingsDirect presentation, a podcast created from a recent Nexaweb Technologies Webinar on application modernization.

The webinar examines how enterprises are gaining economic and productivity advantages from modernizing legacy and older client-server applications. The value of the logic, data, and integration patterns within these older applications can be effectively extracted and repurposed using tools and methods, including those from Nexaweb. That means the IT and business value from these assets can be reestablished as Web applications on highly efficient platforms.

We'll learn how Nexaweb has worked with a number of companies to attain new value from legacy and client-server applications, while making those assets more easily deployed as rich, agile Web applications and services. Those services can then be better extended across modern and flexible business processes.

On this podcast, we'll hear from Dana Gardner, principal analyst at Interarbor Solutions, as well as David McFarlane, COO at Nexaweb, and then Adam Markey, solution architect at Nexaweb.

First, welcome our initial presenter, BriefingsDirect's Dana Gardner.

Dana Gardner: We're dealing with an awful lot of applications out there in the IT world. It's always astonishing to me, when I go into enterprises and ask them how many applications they have in production, that in many cases they don't know. In the cases where they do know, they're usually off by about 20 or 30 percent, when they go in and do an audit.

In many cases, we're looking at companies that have been around for a while with 10 or 20 years worth of applications. These can be on mainframe. They can be written in COBOL. They could be still running on Unix platforms. In a perfect world we'd have an opportunity to go in and audit these, sunset some, and re-factor others.

Today, however, many organizations are faced with manpower and labor issues. They've got skill sets that they can't bring in, even if they wanted to, for some of these older applications. There is, of course, a whole new set of applications that might not be considered legacy, but that are several years old now. These are N-tier and Java, distributed applications, .NET, COM, DCOM, a whole stew in many organizations.

What I am asking folks to do, now that we're in a situation where economics are probably more prominent than ever -- not that that's not usually the case in IT -- is to take a look at which applications are constraining their business. Don't worry so much about what technology they're running on or what the skill sets are, but start factoring in what new initiatives they need to undertake and how they can get their top line and bottom line as robust as possible. How do they get IT to be an enabler and not viewed as a cost center?

This is really where we should start thinking about modernizing and transforming IT -- getting application functionality that is essential, but is in someway handicapping what businesses want to do.

We want to exploit new architectures and bring more applications into association with them. It's not just architectures in terms of technology, but approaches and methodologies like service-oriented architecture (SOA), or what some people call Web-oriented architecture (WOA), looking to take advantage of interfaces and speed of innovation so that organizations can start to improve productivity for their internal constituents, in this case usually employees or partners.

Then, increasingly because of the difficulty in bringing about new business during a period of economic downturn, they're reaching out through the Internet, reaching out through the channels that are more productive, less costly and utilizing applications to focus on new business in new ways.

SOA and mobile devices

Increasingly, as I mentioned, this involves SOA, but it also increasingly involves mobile. We need to go out and reach people through their mobile Internet devices, through their iPhone and their BlackBerry, and a host of other devices at the edge. You need to be able to do that with applications and you need to be able to do it fast.

So, the goal is flexibility in terms of which applications and services need to reach new and older constituencies at less cost and, over time, reduce the number of platforms that you are supporting, sunset some apps, bring them into a new age, a new paradigm, and reduce your operating costs as a result.

Information really is the goal here, even though we are, with a handful of applications, starting to focus on the ones that are going to give us the biggest bang for the buck, recognizing that we need to go in and thoughtfully approach these applications, bring them into use with other Web services and Web applications, and think about mashups and Enterprise 2.0 types of activities. That involves expanding the use of these new methodologies.

One of the things that's interesting about companies that are aggressively using SOA is they also happen to be usually aggressive in using newer development platforms and tools. They're using dynamic languages, Web interfaces, and rich Internet application (RIA) interfaces. This is what's allowing them to take their newer applications and bring them into a services orientation reuse. Some of those services can be flexible and agile.

That's not to say you can't do some of those things with the older applications as well. In many cases, tools are becoming available, and third-party input, in terms of professional services and guidance, is coming around. I'm recommending that people respond more quickly, save operational costs, get agile, reach out through these new edge devices and the Internet, and do it in fairly short order.

It's amazing to me that companies that have moved in this direction can get applications out the door in weeks rather than months, and in many cases they can transform and modernize older applications on aging platforms just as quickly.

We want to move faster. We want to recognize that we need a higher payoff, because we also recognize that the line-of-business people, those folks that are tasked with developing new business or maintaining older business, are in a rush, because things are changing so quickly in the world around us. They often need to go at fast-break or breakneck speed with their business activities. They're going to look at IT to be there for them, and not be a handicap or to tell them that they have to wait in line or that this project is going to be six to eight months.

So, we need to get that higher agility and productivity, not just for IT, but for the business goals. Application modernization is an important aspect of doing this.

How does modernization fit in? It's not something that's going to happen on its own, obviously. There are many other activities, approaches, and priorities that IT folks are dealing with. Modernizing, however, fits in quite well. It can be used as a way to rationalize any expenditure around modernization, when you factor in that you can often cut your operating costs significantly over time.

You can also become greener. You can use less electricity, because you're leveraging newer systems and hardware that are multi core and designed to run with better performance in terms of heat reduction. There are more options around cloud computing and accessing some services or, perhaps, just experimenting with application development and testing on someone else's infrastructure.

By moving towards modernization you also set yourself up to be much more ready for SOA or to exploit those investments you have already made in SOA.

Compliance benefits

There are also compliance benefits for organizations facing payment-card industry (PCI) standards or other financial and regulatory requirements. Freeing up applications in such a way that you can develop reports, share the data, and integrate the data helps with your compliance issues as well.

As I mentioned earlier, by moving into modernization for older applications, you've got the ability to mash up and take advantage of these newer interfaces, reuse, and extended applications.

There is a whole host of rationalizations and reasons to do this from an IT perspective. The benefits are much more involved with these business issues and developer satisfaction, recognizing that if you are going to hire developers, you are going to be limited in the skill sets. You want to find ones that are able to work with the tools and present these applications and services in the interfaces that you have chosen.

Keeping operations at a lower cost, again, is an incentive, and that's something they can take out to their operating and financial officers and get that backing for these investments to move forward on application modernization and transformation.

One of the questions I get is, "How do we get started? We've identified applications. We recognized the business agility benefits. Where do we look among those applications to start getting that bang for the buck, where to get modern first?"

Well, you want to look at applications that are orphans in some respect. They're monolithic. They're on their own -- dedicated server, dedicated hardware, and dedicated stack and runtime environment, just for a single application.

Those are good candidates to say, "How can we take that into a virtualized environment?" Are there stacks that can support that same runtime environment on a virtualized server, reduce your hardware and operating costs as a result? Are they brittle?

Are there applications that people have put a literal and figurative wall around saying, "Don't go near that application. If we do anything to it, it might tank and we don't have the documentation or the people around to get it back into operating condition. It's risky and it's dangerous."

Conventional wisdom will say don't go near it. It's better to say, "Listen, if that's important to our business, if it's holding our business back, that's a great target for going in and finding a way to extract the logic, extract the data and present it as something that's much more flexible and easy to work with."

You can also look for labor issues. As I said, if skills have disappeared, why wait for the proverbial crash and then deal with it? It's better to be a little bit proactive.

We also need to look at what functional areas are going to be supporting agility as these business requirements change. If you're an organization where you've got supply chain issues, you need to find redundancy. You need to find new partners quickly. Perhaps some have gone out of business or are no longer able to manufacture or supply certain parts. You need to be fleet and agile.

If there are applications that are holding you back from being able to pick and choose in a marketplace more readily, that's a functional area that's a priority for getting out to a Web interface.

Faster, better, cheaper

People are going to be looking to do things faster, better, cheaper. In many cases those innovative companies that are coming to market now are doing it all through the Web, because they are green-field organizations themselves. They are of, for, and by the Web. If you're going to interact with them and take advantage of the cost, innovation, and productivity benefits they offer, your applications need to interrelate, operate, and take advantage of standards and Web services to play with them.

You also need to take a look at where maintenance costs are high. We've certainly seen a number of cases where by modernizing applications you have reduced your cost on maintenance by 20 or 30 percent, sometimes even more. Again, if this is done in the context of some of these larger initiatives around green and virtualization, the savings can be even more dramatic.

I also want to emphasize -- and I can't say it enough -- those SOA activities shouldn't be there for just the newer apps. The more older apps you bring in, the more return on investment you get for your platform modernization investments, as well as saving on the older platform costs, not to mention those productivity and agility benefits.

We also need to think about the data. In some cases, I've seen organizations that have applications running but aren't really using the application for anything other than as a repository for the data. They have a hard time thinking about what to do with the data. The application is being supported at high cost, and it's a glorified proprietary database, taking up app server and rack space.

If you're looking at applications that are more data centric in their usage, why not extract that data, find what bits of the logic might still be relevant or useful, put that into service orientation, and reduce your cost, while extending that data into new processes and new benefits.

It's also important to look at where the technical quality of an app is low. Many companies are working with applications that were never built very well and never performed particularly well, using old kludgy interfaces. People are not as productive and sometimes resist working with them. These are candidates for where to put your wood behind your arrow when it comes to application modernization.

In beginning the process, we need to look at the architecture targets. We need to think about where you're going to put these applications if you are refactoring them and bringing them into the Web standards process.

It's important to have capacity. We want to have enough architecture, systems, and runtime in place. We should think about hosting or collocation, where you can decrease your cost and the risk of capital expenditure, but at the same time, still have a home for these new apps.

You certainly don't want to overextend and build out platforms without the applications being ready. It's a bit of a balancing act -- making sure you have enough capacity, but at the same time performing these modernization transformation tasks. You certainly don't want to transform apps and not have a good home for them.

Also important is an inventory of these critical apps, based on some of the criteria we have gone through.

Crawl, walk, run

The nice thing about creating this categorization is that, once you've got some processes in place for how to go about it with one application, you can extend them to others. The crawl-walk-run approach makes a great deal of sense: when you've learned to crawl well, extend and reuse that to walk well, and then scale it from there.

This construction, deconstruction, rationalization process should also be vetted and audited in the sense that you can demonstrate paybacks. We don't want to demonstrate cost centers becoming larger cost centers. We want to show, at each step of the way, how this is beneficial in cost as well as productivity. Then, we need to focus continually on these business requirements, to make a difference and enhance these business processes.

There are some traps. It's easier said than done. It's complicated. You need to extract data carefully. If you start losing logic and access to data that are part of important business processes, then you're going to lose the trust and confidence, and some of your future important cost benefit activities might be in jeopardy.

It's important to understand the code. You don't want to go and start monkeying around with and extracting code, unless you really know what you're doing. If you don't, it's important to get outside help.

There are people who are not doing this for the first time. They've done it many times. They're familiar with certain application types and platforms. It's better to let them come in than to be a guinea pig yourself or treat trials and tests as your first step. That's not a good idea when you're starting to deal with critical and important applications.

Stick to processes and methods that do work. Don't recreate the wheel, unless you need to, and once you have got a good wheel creation method, repeat and verify.

You need to be rigorous, be systematic, and verify results, as we have said. That's what's going to get you transformational benefits, rather than piecemeal benefits. You're going to see how application modernization fits into the context of these other activities, and you're going to be well on the way to satisfying your constituencies, getting the funding you need, and seeing more of your budget go to innovation and productivity, not to maintenance and upkeep.

There are a lot of great reasons to modernize, and we have mentioned a number of them. There are backwards and forwards compatibility issues. There are big payoffs in cost and agility, and now it's time to look at some of the examples of how this has been put into place.

Announcer: Thanks Dana. Now, we'll hear from David McFarlane, COO at Nexaweb, on some use-case scenarios for adopting and benefiting from these application modernization opportunities. Here is David McFarlane.

Understanding value

David McFarlane: We're going to go a little bit deeper and actually take a look at a case study of one of our clients, one of our successful implementations, and see the value that came out of it.

To really understand what value is, we have to understand how we're going to quantify it in the first place. We're probably all in IT here, and we're probably all IT heads, but we have to take a step back, take a top-down approach, and understand how we define that value in the business.

As Dana said earlier, application modernization impacts all areas of your business, and the three areas that it really impacts are business, operations, and IT. So, you have to step outside your role. You have to see what value the business would see out of it, what operations would see out of it, and also for yourself in IT, what gains and benefits you would get out of that. When you add them all together, you get the overall value for that application modernization.

Let's take a look at a real case study as an example. Just to set some background, we have a legacy system, a customer relationship management (CRM) call center application for one of our clients. They have about five call centers, with around 50 employees, and they're on a C++ client-server application.

The important thing to note about this is that, in legacy systems, there are usually multiple instances of this application. Since it's a client-server app, we have to remember that it's also deployed and managed on each individual desktop. Each individual employee has their own installation on their desktop, which is sometimes a nightmare to manage for most IT shops.

We took that system and built a modernized system from it. We had a J2EE architecture with a desktop look and feel in the browser, as Dana talked about earlier. You get the real performance of an installed client-server application, but it's delivered over the Web with zero client install.

You don't have to do anything besides update your Web server, and everybody automatically has the new application, the new look and feel, the new business logic, and access to whatever data you've hooked it up to on the backend.

Also important is that the system we modernized is deployed on open standards. We used a J2EE architecture, and that means we're able to integrate with anything you have on your back end via open Java application programming interfaces (APIs).

There is a vast array of open source products out there waiting to be used, to be integrated, and to modernize systems. There's also a large workforce that will be able to understand a Java application as opposed to a custom C++ application or even a COBOL application. We also consolidated it to one distributed instance, since we can now manage it centrally from one data center.

ROI analysis

When you're doing a modernization, you're probably going to have to do some sort of return on investment (ROI) analysis to understand exactly what you're going to get out of this application, and that's going to take some time.

If you're coming from an IT perspective, you might have most of the benefits already in your head: "I'll have people using Java instead of COBOL. I'll have all the developers focused on one development language, instead of multiple development languages. I'm going to be able to decrease my deployment time," and so on.

But, when justifying something like this, you need to take a step back, and as we said before, look at the factors in these three areas that are most affected by application modernization. As Dana pointed out, it's business operations in IT. So, we go ahead and look at the business.

We have to ask a few questions here: "Who are my users? How long does each transaction take?" Say I'm a call center and it takes a few minutes for a user to get through a transaction. If I can cut that to one-and-a-half minutes, or even one minute, I'm able to increase productivity significantly.
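As a back-of-the-envelope illustration of that point, here's a short sketch using the speaker's hypothetical figures (a two-minute transaction cut to a minute and a half) and the roughly 250 seats mentioned later in this case study. The transaction volume per agent is an assumption added purely for illustration:

```python
# Illustrative arithmetic only; the per-agent transaction volume is assumed.
agents = 250                          # roughly the number of desktops cited later
transactions_per_agent_per_day = 100  # assumed for illustration
before_minutes, after_minutes = 2.0, 1.5

saved_minutes = agents * transactions_per_agent_per_day * (before_minutes - after_minutes)
saved_hours = saved_minutes / 60

print(f"Time saved per day: {saved_hours:.0f} hours")          # ~208 hours
print(f"Equivalent full-time agents: {saved_hours / 8:.0f}")   # ~26 agents
```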

The next part is operations. Beyond the productivity increase, what does it mean to have a modern application infrastructure? If previously I had to come in to work and sit down at my desktop, because that's the only place the application was installed, maybe I don't even need to come in to work anymore. Maybe I can work from home. Maybe I can work from India, if I want to, as long as I have VPN access to that sort of application. You can begin to see the operational flexibility you get out of that.

Then, as we look into the IT benefits here, how long did it take to make a change to my legacy system? One of the biggest gains we're seeing comes from moving off legacy C++ and PowerBuilder applications, where you really have to code each and every aspect of the user interface (UI), the business logic, and the specific data interaction, because you don't have SOA to leverage and you don't have hooks into services that you've built or are planning to build.

Also, we have to think of what the developer actually had to do to make that change. In older technologies, they might not have a way to prototype the UI and show the business users feedback before they are able to get sign off on what they're going to build. They might have to program each and every element of the user interface, all the way down to writing SQL stored procedures that are application-specific to a database.

Going to a modern architecture, you're going to have services and you're going to have your object-relational management capabilities. You're going to have some great middle-tier applications like Spring and Struts to enhance the development. Obviously, with Nexaweb technologies, you have that ability to create the declarative user interfaces, which speeds up UI development time significantly.

Also, what hardware and software does the application run on, and what licenses am I paying for? As Dana pointed out earlier, you'll have a significant opportunity for maintenance savings when you go to a modern architecture.

Productivity gains

We asked all these questions, and we found some significant areas of value in our CRM modernization case. On the business side, we actually saw a 15 percent gain in end-user productivity, which impacted our clients by about $1.5 million a year. In these times, you're able to slim down or trim your workflow if you have a more productive application. In this case, the productivity gain meant representatives could handle more calls and service customers more quickly. Ultimately, that ends up in end-user satisfaction and dollars saved as well.

Next, you have the operational value. What we had here was a decrease in audit time. We found that their auditors were going around to each individual desktop and seeing exactly which applications were installed on their computer. They had to look at each of the five instances in each call center for auditing, instead of looking at one consolidated instance, with just one database and book of record for all the operation there. So, that saved a lot of auditing time for them, which is really great.

Another thing was that it improved the performance of a different help desk. The modernized application serves a customer-support help desk, but the internal IT help desk actually saw a huge improvement. Because the application was centrally managed, all people had to do was go to a website or click a link to access it, instead of having to install software. As you know, when you install software, a ton of things can happen, and you have to do a lot of testing for that software as well. All that has been reduced, and we're saving about $15K there.

When you look at the IT benefits, we have that IT developer productivity gain that we talked about. We eliminated some hardware and software for those five instances and some of that maintenance cost. So, that's an $85K impact. There are the deployment benefits of an RIA, when you're going from deploying applications on 250 computers to zero computers. You're going to see an immediate impact there, and that was around $250K for the time it took to deploy, the software it took to push that out, and the support it needed to run.

Because of the change management benefits from RIAs, the development productivity, and the ability to go from requirements to design, to testing, to production much more quickly than a client-server application, we're able to see a 90 percent gain there, which had a $200K impact.

When you look at it in total, the yearly bottom-line improvement was about $2.23 million for this one instance, with a one-time improvement of $85K for the hardware and software we got rid of. It required only a one-time investment of about $800K.

I say "only," but if you look at the business, operational, and the IT impacts together, you get payback in the first full year. If you were only coming from that IT perspective, you would have seen that the payback is actually a little bit longer than a year.

If you add all those numbers up, you get something a little less than $800K, about $700K, I believe. That will be about 14- or 15-month payback instead of about a 5- or 6-month payback. When you're trying to make a case for modernization, this is exactly what your CFO or your CEO needs to know -- how it affects your bottom line from all areas of the business, not just IT.
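For anyone following the arithmetic, here's a minimal sketch of the payback comparison David is describing, using the approximate figures cited above. The IT-only yearly benefit of roughly $700K is his own estimate, repeated here as an assumption:

```python
# Approximate figures from the case study, as cited in the talk.
investment = 800_000               # one-time modernization investment
total_yearly_benefit = 2_230_000   # business + operations + IT benefits combined
it_only_yearly_benefit = 700_000   # rough figure if only IT-side benefits are counted

def payback_months(investment, yearly_benefit):
    """Months until the cumulative yearly benefit covers the one-time investment."""
    return 12 * investment / yearly_benefit

print(f"Payback counting all benefits: {payback_months(investment, total_yearly_benefit):.1f} months")
print(f"Payback counting IT benefits only: {payback_months(investment, it_only_yearly_benefit):.1f} months")
# A few months versus more than a year -- which is why the business and
# operations impacts belong in the justification, not just the IT ones.
```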

Let's not forget the intangibles that come with application modernization. It's not always about the bottom line. There are some great things you get out of a modern application infrastructure, and the first thing you get, when you look at the business, is improved response time.

Happier CSRs

The number one thing I could think of is that your customer service representatives (CSRs) are going to be happier, because once they click a button, it's not going to take two seconds to respond like the old application. It's going to be fast. It's going to be rich. You're not going to have any installation issues when you do an upgrade. It's going to be smooth.

You're going to have happier CSRs. Having happier CSRs means improved customer service and higher customer satisfaction, because people get their calls through quicker and talk to a happy customer service representative.

Also, when you're doing application modernization, you have a good opportunity to automate manual portions of the business process. You can go in and say, "This person is cutting and pasting something into an Excel spreadsheet, and emailing this to somebody else as a report after they're done." Maybe there's an opportunity there to have that done automatically. So, it saves them time again. That's where you can really find your increased productivity.

When we look at operations, we actually enabled real estate consolidation. I didn't put those numbers in the ROI, because they were probably going to do that anyway, but it was an enabler. Having a technology to go from five call centers to one call center with distributed agents across the country and across the world saves the business a lot of money on the real estate, the power, and the infrastructure needed to have five call centers up and running.

Again, you get the workforce flexibility, because I can work from home, work from India, or come and work from the office. I could do this job from anywhere, if I have access to this application. Obviously, we're able to bring outsourced call centers online on-demand with that.

Then, we move on to IT. As I said before, it's short release cycles with more functionality. When release cycles are shorter, you can incrementally build in more features to your application, make people more productive, and make the application do more in less time, which is obviously what we want to do.

We have a standardized J2EE architecture, which means the people that you're going to look for to maintain the application are going to be out there. There is a huge number of Java developers out there waiting and ready to come in to maintain your application.

We're built on open standards to ensure that the application is ready for the future. There are a lot of RIA technologies that try to lock you in to one runtime and one development methodology. We use open standards as much as we can to push your application out the door as fast as possible, and be as maintainable as possible, and as ready for the future as possible.

Announcer: Thanks, David. Now, we'll hear from Adam Markey, solution architect at Nexaweb, on specific deployment examples of application modernization projects. Here, then, is Adam.

Enterprise-wide value

Adam Markey: As we look at these different customer examples, we really want to see how they've delivered value across the enterprise and see, from a business point of view, the ability to increase market reach, improve user productivity, decrease time to market, increase customer engagement and loyalty, and sustain, if not build upon, competitive advantage.

We also want to look at operations and understand how this new architecture has realized benefits in terms of reduced real estate, greater utilization of a global workforce, reduced energy use, a move to greener architectures, and improved overall vendor management.

For those responsible for delivering this capability, we want to look at IT and how this process helps deal with the rapidly changing demographics of the IT skills market. As the baby boomers move out of the job market, many of the legacy skills that we relied on so heavily through the years are becoming very rare and hard to find within the organization.

We'll take a look at process efficiency, and generally how we can improve overall efficiency and cost in terms of licenses and the use of open source. So, let's take a closer look at a few examples to help illustrate that. There's nothing wrong with your screens here. This first example is the modernization of a Japanese foreign exchange trading platform.

In this case, this was a trading platform built by Enfour, Bank of Tokyo-Mitsubishi (BTM). The challenge BTM had was that, while they were capable of satisfying their large corporate customers with their on-premises foreign exchange trading platforms, the small- and medium-sized enterprises (SMEs) were quite different in terms of what they required.

They needed a UI and an application that was much simpler for them to adopt. They didn't have the necessary IT infrastructure to be able to establish the complex on-premises systems. They needed something that had no IT barriers to adoption.

What we did for BTM, with our partner Hitachi, was to help modernize and transform the entire trading platform to the Web. Just to stress, this isn't simply an information portal; this is a fully functioning trading platform. There are over 500 screens. It's integrated with 120 different data sources, with very stringent service-level requirements on the deployment of the application.

We needed to be able to display any fluctuation in exchange rates from the Reuters feed in 200 milliseconds or less. We needed to be able to complete a closed-loop transaction in two seconds or less.

So, this is a fully functioning trading platform. What it's meant for BTM is that they've been able to dramatically increase adoption and penetration in the SME market. Fundamentally, these SME and institutional traders don't need any infrastructure whatsoever, just a browser. There is no client installation. They're able to self-serve, which means they can simply enter the URL, log in, and get started. This has meant a tremendous cost reduction and also revenue growth for this product line in penetrating this new market segment.

In the same field of foreign exchange trading, we were able to help a number of Japanese banks take their products and services global. Traditionally, the market had been very service-intensive through a call center. You dialed in and placed your trade with the trader over the phone. By being able to move this entire platform to the Web, we allowed them to go global and go 24/7.

Now, we have over 30,000 institutional traders using this trading platform and application to self-serve through operations, not just in Tokyo, but in Singapore, London, New York, Frankfurt, literally around the world.

New capabilities

Not only has it extended the product line with very little additional operational cost to the banks, but it's also allowed them to provide new capabilities to those customers. One, for example, is the ability to run a continuous global book.

In traditional implementations of trading platforms, each one would be an on-premises installation, which meant that each region would have to close its books and close out its operations at the end of its working day. Because the platform is now managed and provisioned as a single system, it can run globally, allowing them to maintain those books and maintain common alerts across entities that themselves have a global footprint.

Not only were we getting them to a new market, but we were also allowing them to introduce new functionality. It allowed them to interact more closely with the customers, providing real-time chat facilities, and allowing the traders in Japan to interact directly with a trader as they exhibited certain behavior. It allowed them to offer custom contracts and has significantly increased the close rate of those applications.

So, a big impact in terms of market reach for the banks in Japan is one example. Let's take a look here at how we've been able to dramatically improve user productivity and dramatically reduce the business process time for large organizations.

This is a representation for one of the largest telecommunications groups in Europe. The challenge they were facing is that they had a request for proposal (RFP) process that was very complicated. They had to provide quotations for countrywide mobile platforms -- a very large, complex design process -- which was performed through one legacy application acting as a product configurator.

Then, they would go to another application for doing the parts costing and bill of material assessment, another application for the pricing, and finally, an overall RFP approval process for these large $100 million-plus projects running over 10 years.

The whole process was taking them anywhere up to four weeks. It was fragmented and error prone, with spreadsheets and files flying around the globe as people tried to complete it.

What we were able to do for this organization was to streamline the process and present it as a single-branded Web-based workflow that brought all the different elements together, and, most importantly, ran on top of a SAP NetWeaver infrastructure. In fact, the workflow was designed to have an SAP look and feel.

End users didn't know when they were in or outside of SAP. They didn't care and they didn't need to, because as an end-to-end process, SAP acts as the overall system of record, providing a much higher degree of control, accuracy, and a dramatic reduction in errors.

The great result, from a user productivity point of view, is that they've been able to go from a process that took four weeks to a process that now takes four hours or even less -- a dramatic reduction. More important was the ability to increase the accuracy of these processes.

Desktop-like experience


These Web applications, I should stress, are really a desktop-like experience for the end user. We think of them and talk about them as a desktop in a browser. Everything that you could do as a desktop application with all the user navigation and productivity in very intense data environments, you can do in a browser-based application as deployed in this solution.

Let's take another look at another example where Web architecture and rich Web interfaces allowed us to dramatically improve customer loyalty and customer engagement.

You may be familiar with the concept of the extended enterprise, whereby more and more organizations need to open up traditionally back-office processes and back-office systems still managed on green-screen UIs in the bowels of the company. In order to truly engage their customers and improve the process flow, more and more of those systems are being opened up and presented to customers through rich, engaging Web applications.

This is an example of that. This is a company in the Netherlands called Waterdrinker, which is the largest flower distributor in Europe, a very significant business for them. We helped them create a Web-based, self-service ordering process that dramatically reduces their dependency on customer service representatives (CSRs). It was similar to the scenario for the foreign-exchange trading platform: we were migrating customer interaction to a self-service Web platform without the need for human intervention and costly CSRs.

But, it's much more than that. We're providing a much richer, more engaging experience for the user, who can navigate the catalog more dynamically and determine the optimal order, with all kinds of what-if calculations and analysis provided in real time, at their own discretion.

The net result has been a significant increase in customer satisfaction and engagement. The loyalty benefits are still to be measured, because the platform is relatively new, but based on the amount of response, reaction, and conversion we have seen through these Web-based interfaces, they will follow soon after. In addition, with a Web-based UI, you're able to easily and effectively customize the user interface for different users and communities.

In this case, they're able to provide a custom UI solution that integrates their catalog-ordering process into their partners' processes. They distribute through local partners and local Web sites, and they're able to offer this architecture as a white-label capability and then brand it for each local distributor, delivering a rich, branded experience through their partners.
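As a rough illustration of that white-label pattern, here is a minimal Java sketch, assuming a simple mapping from a partner's Web domain to its branding; the class names, fields, and default values are hypothetical and are not Waterdrinker's actual implementation.

    // Hypothetical sketch of per-partner white-label theming, not actual product code.
    import java.util.Map;

    public class BrandResolver {

        // Immutable description of one partner's branding.
        public record BrandTheme(String displayName, String logoUrl, String primaryColorHex) {}

        // Default branding used when no partner-specific theme is registered.
        private static final BrandTheme DEFAULT =
                new BrandTheme("Waterdrinker", "/assets/default-logo.png", "#1A7F37");

        // Partner Web-site domain mapped to the branding applied to the shared catalog UI.
        private final Map<String, BrandTheme> themesByDomain;

        public BrandResolver(Map<String, BrandTheme> themesByDomain) {
            this.themesByDomain = Map.copyOf(themesByDomain);
        }

        // The same ordering application serves every partner; only the theme changes per request.
        public BrandTheme themeFor(String requestDomain) {
            return themesByDomain.getOrDefault(requestDomain.toLowerCase(), DEFAULT);
        }
    }

The point of the sketch is simply that one shared application resolves its look and feel per partner at request time, rather than each distributor getting its own installation.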

Let's talk generally about competitive advantage. Obviously, all the things that we have talked about and shown for different customers, and that Dana has talked about in aggregate, offer all kinds of competitive advantage.

But, there's a certain element of competitive advantage that I would like to emphasize in this transformation process. Over the years, organizations have instantiated and codified their best practices in the workflows within those legacy systems. Those business rules represent years of accumulated business and competitive intelligence, and they are often the point at which you can realize tremendous competitive advantage.

Razor-thin margins

This is never truer than in the razor-thin margins of the consumer packaged goods (CPG) business, where a lot of the margin on a customer can actually be determined through appropriate inventory, logistics, and pricing management, literally while goods are en route. What we've done for customers like these is to enable them to quickly and effectively extract the business rules that are buried in their legacy systems.

Frankly, nobody knows how those rules work anymore, and at best they're not very well documented. But we have allowed these customers to extract the business rules that represent their competitive advantage and consolidate them into a set of corporate-wide rules that can be managed far more effectively.

One issue in a traditional legacy environment is that each business rule, as established in the legacy implementation, is monolithic. The rules start to spawn their own derivatives as people program, tweak, and modify, and at the end of a 10-year process the various iterations barely resemble one another.

In our transformed architecture, we're able to provide an environment in which you can centrally manage, control, and modify those business rules and have them consistently and immediately applied across all the necessary touch points. Through this process, we can dramatically reduce human error.
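To make the idea of centrally managed rules concrete, here is a minimal Java sketch, assuming a single shared rule engine that every channel calls; the rule structure and pricing logic are purely illustrative and are not the extracted CPG rules themselves.

    // Hypothetical sketch of a central rule registry shared by all touch points.
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.UnaryOperator;

    public class PricingRuleEngine {

        // Each rule is a small, named transformation of a quoted price.
        public record Rule(String name, UnaryOperator<Double> transform) {}

        // One shared list of rules; Web, call-center, and batch channels all call the same engine.
        private final List<Rule> rules = new CopyOnWriteArrayList<>();

        // Adding or replacing a rule here takes effect immediately for every channel.
        public void register(Rule rule) {
            rules.removeIf(existing -> existing.name().equals(rule.name()));
            rules.add(rule);
        }

        // Apply every registered rule, in order, to the base price.
        public double price(double basePrice) {
            double result = basePrice;
            for (Rule rule : rules) {
                result = rule.transform().apply(result);
            }
            return result;
        }
    }

For example, registering a volume-discount rule once, with register(new Rule("volume-discount", p -> p * 0.95)), means every touch point picks it up on its next quote, which is the consistency benefit described above.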

This architecture allows us to provide support tools and business rules in a form that's readily accessible to the end user. You might say, "Wait a minute. It's a Web-based application, and when I'm sitting face to face with my customers, I'm not going to have access to the Web."

As you would expect, we're able to architect these solutions so that the same application can be deployed as a Web application or used standalone. A great example of that is Aflac, where we created their premium-calculation solution, which is used across all their customer touch points by 38,000 users, 6,000 of whom are agents who go door-to-door.

Part of the architecture, and part of the challenge, was to deliver that insurance-calculation solution in such a way that, when an agent is sitting across the kitchen table from a customer, they can still perform a custom quotation and produce the documentation needed to close the customer then and there, from a standalone laptop with a local printer. That's all part of bringing those business rules, which represent years of competitive advantage, successfully to the Web.
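Here is a minimal sketch of that dual-deployment idea, assuming the calculation logic is kept free of any delivery-layer dependency; the method names and rates are invented for illustration and are not Aflac's actual premium model.

    // Hypothetical sketch: the same calculator runs behind a Web front end or on an agent's laptop.
    public class PremiumCalculator {

        // Pure business logic with no dependency on where it is hosted.
        public double annualPremium(int age, double coverageAmount) {
            double ratePerThousand = age < 40 ? 1.2 : 2.5;  // illustrative rates only
            return (coverageAmount / 1_000.0) * ratePerThousand;
        }

        public static void main(String[] args) {
            // Offline use: the agent's laptop calls the calculator directly and prints the quote locally.
            PremiumCalculator calculator = new PremiumCalculator();
            System.out.printf("Annual premium: $%.2f%n", calculator.annualPremium(35, 250_000));
            // Online use: the same class would sit behind a Web controller; only the delivery layer differs.
        }
    }

The design choice is simply to keep the rules in one component, so the Web deployment and the standalone laptop deployment cannot drift apart.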

Let's take a look at how some of these capabilities impact the operations themselves. Here, we'll take the example of a call-center application. This was a transformation of the Pepsi Bottling Group's customer-equipment tracking system, a PowerBuilder application roughly 10 years old, that we successfully moved to the Web.

The real business value is that, by creating a Web-based environment that can be deployed in any call center, we provide the flexibility and agility for organizations to better utilize those call centers and that real estate, often consolidating a large number of call centers into a smaller set of agile call centers that can run many different processes through the same infrastructure.

Cost-management advantage

This has tremendous advantages, as you can imagine, in terms of cost management for those customers. We're even able to take that to the next step with the advent of voice over Internet protocol (VoIP) telephony. It's now possible to engage home-office operators through a VoIP infrastructure.

Those operators not only get the benefit of the call-center application as a Web-based application accessible through their home broadband, but they can actually have the same level of computer telephony integration (CTI) they would have had if they sat in the call center, by virtue of the VoIP-based CTI technologies that are now available.

This offers tremendous operating improvements in terms of, for example, real-estate consolidation. Also, looking at operations and the ability to optimize the use of the workforce, we deployed a very complex laboratory information-management solution for AmeriPath, now part of Quest Diagnostics. This is part of a pathology-services process that requires very experienced technicians to participate.

The joy of deploying this as a Web-based application is that you get great skills mobility: technicians anywhere, provided they have Web access, can participate in a diagnostic process without the need to move sensitive Health Insurance Portability and Accountability Act (HIPAA) data. So, HIPAA data that has to be stored in one place can be made accessible to technicians in any location, and they can then participate in the process 24/7.

The value to IT is manifold. We'll take a quick look at some of those benefits before we jump into the value equation itself. This is an example with SunGard Shareholder Systems, where they wanted to modernize a commercial product line, a 401(k) management application. I'm sure they're pretty busy these days.

It was originally deployed as an IBM-Oracle mainframe solution with a C++ front end. We modernized it as a pure Web application, and, from an IT development point of view, the benefits of being in that Web architecture are manifold. First and foremost, they were able to manage the entire development process with one person in the US and a whole development team offshore in India, dramatically reducing time and cost.

In this new architecture, the ability to respond to program-change requests is tremendously different. We're able to program change requests in one-tenth of the time and, by virtue of being a Web architecture, deploy them in what are now weekly release cycles, instead of the six-month cycles you would typically see with a point solution.

As we're running a little long here, I won't go into all of these, but there are many different ways in which the modern architecture really played into creating significant additional IT value.

We provide a process we call Nexaweb Advance, an end-to-end transformation process that allows us to dramatically reduce the time, risk, and cost of the overall implementation. It starts with a capture phase that interrogates the legacy systems and dramatically reduces the time and effort needed to document code that typically is not well documented.

Then it goes through a model-transformation process that dramatically reduces the amount of actual code that has to be written. In this example, that meant a 65 percent reduction in the amount of code across a three-million-line application. The net result is that, through a typical design and development cycle, we were able to realize a 50 percent or greater reduction in development time.
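As a back-of-the-envelope illustration of what that 65 percent figure implies, assuming the reduction applies evenly across the code base:

    $3{,}000{,}000 \times (1 - 0.65) = 1{,}050{,}000$

That is, roughly 1.05 million lines of application code left to write and maintain, instead of the original three million.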

Having done that as a Web-based application, there is no installation of any kind and no on-site provisioning. It's all centrally managed, so customers see dramatic reductions in operating costs. And, as in the example we shared with you a little earlier, because we're in a modern, object-oriented architecture with all the inheritance benefits that brings, we're able to modify and execute change requests, often in one-tenth of the time, and then deploy them immediately and effectively as Web applications.

Announcer: Thanks, Adam. With that we conclude our podcast. You have been listening to a sponsored BriefingsDirect presentation taken from a recent Nexaweb webinar on application modernization. Please find more information on these solutions at Nexaweb.com. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.

Transcript of a BriefingsDirect webinar with David McFarlane and Adam Markey on the economic and productivity advantages from application modernization. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.