Friday, June 19, 2009

Winning the Quality War: HP Customers Offer Case Studies on Managing Application Performance

Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas the week of June 15, 2009.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas. We’re here in the week of June 15, 2009 to explore the major enterprise software and solutions trends and innovations that are making news across the global HP ecology of customers, partners and developers.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and I'll be your host throughout this special series of HP Sponsored Software Universe live discussions.

Now, please join me for our latest discussion, a series of user discussions on quality assurance issues. Our first HP customer case study comes from FICO. We are joined by Matt Dixon, senior manager of tools and processes, whose department undertook a service-management improvement effort recognized with an award for operational efficiency and integrity. Welcome to the show, Matt.

Matt Dixon: Thanks, Dana. I’m glad to be here.

FICO's service-management approach

Gardner: Tell me a little bit about how you developed a service management portfolio approach to the remediation and changes that take place vis-à-vis your helpdesk. It sounds like an awful lot of change for a large company?

Dixon: Yes. We did go through a lot of changes, but they were changes that we definitely needed to go through to be able to do more with less, which is important in this environment.

The IT service management (ITSM) project that we undertook allows us to centralize all of our incidents, changes, and configuration items (CIs) into one centralized tool. Before, we had all these disparate tools out there, and we had to go to different tools and spreadsheets to find information about servers, network gear, or those types of things.

Now, we’ve consolidated it into one tool which helps our users and operations folks to be able to go to one spot, one source of truth, to be able to easily reassign incidents, migrate from an incident to a change, and see what’s going to be impacted through the configuration management database (CMDB).

Gardner: Perhaps you can help our listeners better understand what FICO does and then what sort of helpdesk and operational staff structure they have?

Dixon: FICO, formerly known as Fair Isaac, is a software analytics company. We help financial institutions make decisions, and we're primarily known for FICO scores. If you apply for a loan, half of the time you get a FICO score. We're about 2,300 employees. Our IT staff is about 230. We're a global company, and our helpdesk is located in India. It's 24x7, and they are FICO employees -- so that's important to know.

Gardner: Tell me about the problem set you’re trying to address directly with your IT service management approach?

Dixon: We had two primary objectives we were trying to meet with our ITSM project. The first was to replace our antiquated tool sets. As I said before, we had disparate tools that were all over the place and were not integrated. Some were developed internally, and the development team had left. So, we were no longer able to keep up with the process maturity that we wanted, because the tools could not support the process improvements that we wanted.

In addition to that, we have a lot of sensitive data -- everything from credit data to medical data to insurance data. So, we go through a vast number of audits per year, both internal and external, and we had identified some gaps with our previous ITSM solution. We undertook this project to close those gaps, so we could meet those audit requirements.

Gardner: I suppose in today’s economy, making sure your operations are efficient, making sure that these changes don’t disrupt, and maintaining the applications in terms of performance are pretty important?

Dixon: They're very important definitely in today’s economy, and through the completion of our project we've been able to consolidate tools to increase those efficiencies and to be able to do more with less.

Gardner: As you transitioned from identifying your problems to knowing what you wanted, how did you arrive at a solution?

Request for proposal

Dixon: We sent a request for proposal (RFP) to four different companies to ask them how they would help us address the gaps we had identified in our previous tool sets. Throughout that process, we kept a scorecard, and HP was chosen, primarily for three reasons.

Number one, we felt that the integration capabilities within HP, both currently and the future roadmaps, were better than the other solution sets. Number two, we thought that universal configuration management database (UCMDB), through its federation, offered a more complete solution than other CMDB solutions that were identified. The third one was our partnerships and existing relationships with HP, which we relied upon during the implementation of our ITSM solution.

Gardner: And so, were there several products that you actually put in place to accomplish your goals?

Dixon: We chose two primary products from HP. One was Service Manager where we log all of our changes and incidents, and then the second one was the UCMDB, and we integrated those two products, so that the CIs flow into Service Manager and that information flows out of Service Manager back into UCMDB.

Gardner: How long have you had this in place, and what sort of metrics or success and/or payback have you had?

Dixon: We started our implementation last summer, in July of 2008. We went live with Incident in August. We went live with Change Management in October. And, we went live in January with Configuration Management. It was kind of a phased rollout. We started last July, and the project wrapped up in January of 2009.

From the payback perspective, we’ve seen a variety of different paybacks. Number one, now we’ve been able to meet and surpass audit requirements.


That was our number one objective -- make sure that those audits go much faster, that we can gather the information quicker, and that we can meet and surpass audit requirements. We’ve been able to do that.

Number two, we’ve improved efficiencies and we’ve done that through templates, not having to double-enter data because of disparate tools. Now, we have one tool, and that information tracks within all the tools. You don’t have to double-enter data.

The third one is that we've improved visibility through notifications and reporting. Our previous toolset didn’t have a lot of reporting abilities and no notification options. Now, we can report on first-call resolution. We can report on a meantime to recover. We can report on all the important information the business is asking for.
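Metrics like first-call resolution and mean time to recover fall straight out of consolidated incident records like the ones Dixon describes. As a rough illustration only (the record fields here are invented, not Service Manager's actual schema), the calculations look like this:

```python
from datetime import datetime

# Hypothetical incident records; field names are illustrative,
# not the actual Service Manager schema.
incidents = [
    {"id": "IM001", "opened": datetime(2009, 1, 5, 9, 0),
     "resolved": datetime(2009, 1, 5, 9, 20), "reassignments": 0},
    {"id": "IM002", "opened": datetime(2009, 1, 6, 14, 0),
     "resolved": datetime(2009, 1, 7, 10, 0), "reassignments": 2},
    {"id": "IM003", "opened": datetime(2009, 1, 8, 11, 0),
     "resolved": datetime(2009, 1, 8, 11, 45), "reassignments": 0},
]

# First-call resolution: the share of incidents closed without
# ever being reassigned to another group.
fcr_rate = sum(1 for i in incidents if i["reassignments"] == 0) / len(incidents)

# Mean time to recover: average of (resolved - opened), in seconds.
mttr = sum((i["resolved"] - i["opened"]).total_seconds()
           for i in incidents) / len(incidents)

print(f"First-call resolution: {fcr_rate:.0%}")
print(f"MTTR: {mttr / 3600:.1f} hours")
```

In practice a reporting tool runs the equivalent aggregation over thousands of records, but the arithmetic is the same.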

The last one is that we have more enforcement of, and buy-in to, our processes. Our number of changes logged has gone up by 21 percent. It's easier to log a change. We have different change processes and workflows that we've been able to develop. So, people buy into the process. We've seen a 21 percent increase in the number of changes logged, compared with our previous toolset.

Gardner: You’ve got this information in one place, where you can analyze it and feel comfortable that all the changes are being managed, and nothing is falling off the side or in between the cracks. Is there something you can now do additionally with this data in this common, managed repository that you couldn’t do before? Or, were there adds or improvement in terms of moving to a variety of different systems or approaches?

Dixon: We have a lot of plans for the future, things that we’ve identified that we can do. Some of the immediate impacts we’ve seen are our major problem channels -- which CIs have the most incidents logged against them. We identify CIs in incidents. We identify CIs in changes. Now, we can run reports and say, "Which CIs are changing the most? Which CIs are breaking the most?" And, we can work on resolving those issues.
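The "which CIs are breaking the most" report Dixon mentions is essentially a group-and-count over incident and change records. A minimal sketch, with invented CI names:

```python
from collections import Counter

# Hypothetical logs: each entry names the configuration item (CI)
# an incident or change was logged against. Names are illustrative.
incident_cis = ["web-server-01", "db-cluster", "web-server-01",
                "mail-gateway", "db-cluster", "web-server-01"]
change_cis = ["db-cluster", "web-server-01", "db-cluster"]

# Which CIs are breaking the most?
top_breaking = Counter(incident_cis).most_common(2)

# Which CIs are changing the most?
top_changing = Counter(change_cis).most_common(2)

print("Most incidents:", top_breaking)
print("Most changes:", top_changing)
```

The value of a consolidated CMDB is simply that both lists come from one source of truth instead of being stitched together from spreadsheets.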

Then, we’ve continually improved the process. We have a mature tool with lot of integrations. We’ve been able to pull all this information together. So, we’re setting up roadmaps, both internally and in partnership with HP, to continually improve our process and tools.

Gardner: Well, great. We've been talking about a case study with HP customer FICO and how they've implemented an ITSM project. Thanks, Matt.

Dixon: Thanks, Dana. I appreciate it.

Gevity opts for PPM solutions

Gardner: Our second customer use case discussion today comes from Gevity, part of TriNet. We’re here to discuss how portfolio and project management (PPM) solutions have helped them. We’re here with Vito Melfi. He is the vice president of IT operations. Welcome.

Vito Melfi: Thank you.

Gardner: Tell us a little bit about how PPM solutions became important for you?

Melfi: Well, at Gevity we had, as most other companies do, a whole portfolio of applications and a lot of resources. Gevity's desire to become a very transparent IT organization was difficult to fulfill without knowing where your resources are, how you're using them, and how to re-prioritize applications against company priorities.

The application of portfolio management became very critical, as well as strategic. Today, we have the ability to see across our resource base. We use the time-tracking system, and we can produce portfolio documents monthly. Our client base can see what’s out there as a priority, what’s in queue, and, if we have to change things, we can do so with great flexibility.

Gardner: Now, Gevity does a lot of application support for a number of companies in their HR function. So, applications are very important. Tell us more about how your company operates?

Melfi: We’re a professional employment organization (PEO). We deliver payroll services, benefits and workers’ comp services, and a host of other HR services. We’re essentially an HR service company for hire. We believe that we can provide these capabilities better as a service provider than most companies can provide trying to build this type of technology capability on their own.

Gardner: When you began looking into PPM, complexity of control probably was a number one concern for you?

Melfi: Absolutely. Complexity in an organization can be paramount if you don’t have good control over your resources and over your applications. At Gevity, we had a lot of people trying very hard to get control and get their arms around those things.

The technology that HP provides through PPM really is the enabler for us to figure out our whole portfolio requirement. The communication that comes back to our functional areas and to our client base has been very well received. It's something that we've found to be very valuable to us. Then, taking that through to Quality Center and Service Center, the integration of the three has been just a big benefit to us.

Gardner: What would you say is the solution that this combination of products actually provides for you?

Melfi: The solution that we get out of our Service Center application is the ability to turn around the incidents that we have. We've been able to close 70-80 percent of incidents on the first call. That was our ratio with 10 people a couple of years ago, and it's still our ratio with seven people doing it. Our service level has held steady, and actually improved a bit, while our employee base has gone down. This is particularly important to us as we go forward with our parent company, TriNet, because now we're going to be merging east- and west-coast operations.

Gardner: So, it’s greater visibility and greater control. How does that translate into returns on either investment in dollars and cents or in the way you can provide service and reliability to your users?

Melfi: It translates in a couple of ways. Better internal customer service is always paramount to us. By being able to do more with less, obviously we can take our funding and look into different areas of investment. Not having to invest in adding people to scale our services creates opportunity for us elsewhere in the organization.

Gardner: Okay. I wonder if there are any lessons that you might have for other folks who are looking at PPM? And, expanding on that, what would you do differently?

Melfi: We knew this, but it really comes to bear when you're actually implementing your toolset: the key to success is having good processes. If you have those processes in place, the implementation of the toolset is a natural transition for you.

If you don’t have good processes in place, the tool itself will help, but you're going to have to take a step backwards and understand how these three things interact -- two, or three, or how many you’re implementing. So it’s not a silver bullet. It’s not going to come and automate everything for you. The key is to have a really a good grasp on what you do and how you do it and what your end game is, and then use the tools to your advantage.

Gardner: We’ve been talking about the use of PPM solutions with Vito Melfi. He is the vice president of IT operations at Gevity. Thanks.

Melfi: Thank you.

JetBlue revs up test cycle

Gardner: Our third customer today comes from an HP Software & Solutions Awards of Excellence winner, JetBlue Airways. We’re here with Sagi Varghese, manager of quality assurance at JetBlue. Welcome.

Sagi Varghese: Hi. How are you?

Gardner: Good. Tell us about the problems that you faced, as you tried to make your applications the best they could be?

Varghese: About two years ago, our team picked up the testing for our online booking site, which is hosted in-house. At that time, we had various issues with the stability of the site, as well as the capability of the site. Being a value-add customer, we wanted to be able to offer our customers features beyond what came in the canned product offered by our business partner. We wanted to be able to offer additional services.

Over the last two years, we added a lot of features on top of our generic product -- integration with ancillary services like cars, hotels, and things like that -- and we did those at a very fast pace. A lot of these enhancements had to be rolled out in a very short time frame.

Almost two years ago, all of the testing was manual, and one of the first steps was to adopt a methodology, so that we could bring some structure and process around the testing techniques we were using. The next step was to partner with HP. We worked very closely with HP, not only on the functional aspects of the application, but also on the performance aspects of the application.

A typical end-to-end test cycle would take five to six people over several weeks to completely test a new solution or a new release of the application. We made a business case to automate the testing effort -- the regression testing, as we call it, or the repeated testing, for want of a simpler term. We made a business case to automate that using HP's Quick Test Pro product, and we were able to complete the automation in less than four weeks. That became the starting point.

It involved using a test automation framework that worked with the Quick Test Pro product, and our testing cycles were reduced by about 70 percent. As time progressed and we added more features to our online Website, we also became more mature in our utilization of the tool and moved more test scripts into our automated bucket, rather than manual. We went from 250 test cases to the roughly 750 test cases that we run today, a lot of them overnight, in less than two days.
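In practice, Quick Test Pro scripts are scheduled through HP's own tooling, but the overnight-run idea Varghese describes is easy to sketch in general terms: fan a regression bucket out across parallel workers, so total wall time shrinks roughly with the number of runners. Everything below (the script names, the stub runner) is illustrative, not JetBlue's actual framework:

```python
import concurrent.futures
import time

def run_test(script_name):
    """Stand-in for launching one automated test script.
    In a real setup this would invoke the test tool's runner
    and return the actual pass/fail status."""
    time.sleep(0.01)  # simulate test execution time
    return script_name, "passed"

# Hypothetical regression bucket of 50 scripts.
bucket = [f"regression_case_{n:03d}" for n in range(1, 51)]

# Fan the bucket out across 10 parallel workers.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = dict(pool.map(run_test, bucket))

failed = [name for name, status in results.items() if status != "passed"]
print(f"{len(results)} tests run, {len(failed)} failures")
```

The same fan-out structure is why a 750-case suite can finish overnight when a manual cycle took weeks.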

Gardner: At JetBlue, of course you’re in a very competitive field, the airline business. Therefore, all of your applications need to perform well. If your customers don’t get what they want in one or two clicks, you’re going to lose them. Tell me a little bit about the solution approach to making your applications better. Is it something that your testing did alone? What did you look for from a more holistic solutions perspective?

Varghese: One of the things that we were looking at was that customer experience. We were working with a product that was offered by a business partner, or a vendor, and we were allowed customizations on top of that. We were largely dependent on the business partner, because they host our reservations site, so we depended on them for the performance of the application as well. We were able to work with them using HP's LoadRunner product to optimize the performance of the site.

Gardner: You mentioned a few paybacks in putting together better quality assurance. What sort of utilization did you get in some of the tools that you had in place, even though you were going from manual to a more automated approach?

Varghese: About two years ago, even though we had the tools, we had very limited use. We ran a few ad-hoc, automated scripts every now and then. Since we adopted this framework a little over a year ago, we have 100 percent utilization of the tool. We don’t have enough licenses today. We definitely are in dire need of getting more licenses.

Last year, every person on my team went to advanced training. Everybody on the team can execute the 700 scripts pretty much overnight, if they had to. We could run them all in parallel. We have 100 percent utilization of the tool, and we're in need of more licenses. I wish we had that capacity already, and we will in the future.

Gardner: So you’ve been able to cut your testing costs. You have seen better utilization of the tools you have in place and higher demand for more. How does it translate into what you've been able to accomplish in terms of your post-production quality of applications?

Varghese: Historically, when we had manual test cases, delivering a new release or a functionality on our Website involved perhaps three to four months of effort, simply because it took us several weeks to go through one cycle of testing. Today, we are turning them around in less than two days, which means we can deliver more features to the market more often and realize the value.

As you may have heard, at JetBlue we have been offering even more legroom features. This year, we have launched three or four products in the first quarter alone. We've been able to do that because of the quick turnaround time offered by the test-automation capability.

Gardner: And not only do you reduce the time, what about the rate of failure?

Varghese: The rate of failure has been reduced greatly. We brought post-production failures down by about 80 percent or so. Previously, in the interest of time, we would compromise on quality and wouldn't necessarily do an end-to-end test. Today we have that -- I wouldn't say luxury, but the ability -- to run an end-to-end test in less than two days. So, we're able to pretty much test all of the facets of an application, even if a particular module is not affected.

Gardner: Congratulations on winning the award. This is a great testament that you took this particular solution set and did very good things with it.

Varghese: Absolutely. Thank you very much. Thank you for having us.

Gardner: We've been talking with Sagi Varghese, manager of quality assurance at JetBlue, a winner today of HP Software & Solutions Awards of Excellence.

Thanks for joining us for this special BriefingsDirect podcast, coming to you on location from the Hewlett-Packard Software Universe 2009 Conference in Las Vegas.

Also look for full transcripts of all of our Software Universe live podcasts on the BriefingsDirect.com blog network. Just search the web for BriefingsDirect. The conference content is also available at www.hp.com, just search on the HP site under Software Universe Live 2009.

I'm Dana Gardner, principal analyst at Interarbor Solutions, your host for this series of HP sponsored Software Universe Live Discussions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe 2009 Conference in Las Vegas during the week of June 15, 2009. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, June 09, 2009

Analysts Define Growing Requirements List for Governance in Any Move to Cloud Computing

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 42 on need for governance as more enterprises look to cloud computing services from inside and outside the firewall.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 42. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system, and through the support of TIBCO Software.

Gardner: Our topic this week on BriefingsDirect Analyst Insights Edition, and it is the week of May 18, 2009, centers on governance as a requirement and an enabler for cloud computing. We're going to talk not just about IT governance, or service-oriented architecture (SOA) governance. It's really more about extended enterprise processes, resource consumption, and resource-allocation governance.

It amounts to "total services governance," and it seems to me that any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place.

So, today we'll go round robin with our IT analyst panelists on their top five reasons why service governance is critical and mandatory for enterprises to properly and safely modernize and prosper vis-à-vis cloud computing.

We see a lot of evidence that the IT vendor community and the cloud providers themselves recognize the need for this pending market need and requirement for additional governance.

For example, IBM recently announced a virtualization configuration management appliance called CloudBurst. It not only helps companies set up and manage virtualized infrastructure, but it can just as well provision and manage application stacks and supporting data services across any number of cloud scenarios.

Easier provisioning

We also recently saw Amazon Web Services move to ease provisioning and reliability control with a burgeoning offering of automated load-balancing and scaling features and services.

Akamai Technologies this spring announced advanced network-based cloud performance support, in addition to content and application optimization services. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

HP, also this spring, released Cloud Assure to help drive security, performance, and availability services for software-as-a-service (SaaS) applications, as well as cloud-based services. So, the road to cloud computing is increasingly paved with governance -- or perhaps is going to be held up by a lack of it. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here to help us understand the need for governance as an enabler or a roadblock to wider cloud adoption are our analyst guests this week. We're here with David A. Kelly, president of Upside Research. Hey, Dave.

David A. Kelly: Hey, Dana. Happy to be here. This should be a fun topic.

Gardner: Ron Schmelzer, senior analyst from ZapThink. Hey, Ron.

Ron Schmelzer: Hey, great to be here.

Gardner: And, Joe McKendrick, independent analyst and ZDNet blogger. Hey, Joe.

Joe McKendrick: Hey, Dana, nice to be here as well.

Gardner: Let's start with you, Ron. You've been involved with SOA best practices and methodologies for several years. Before that, you were a thought leader in the Web services space, and governance has been part and parcel of these advances. Now, we're taking it to an extended environment, a larger, more complex environment. Tell me, if you would, your top five reasons why you think services governance is critical, or not, for this move to a larger services environment.

Schmelzer: You're making me count on a Friday before a long weekend. Let me see if I can do that. I'm glad you brought up this topic. It's really interesting. We just did a survey of the various topics that people are interested in for education, training, and stuff like that. The number one thing that people came back with was governance. That's indicative and telling at a few levels.

The first thing people realize is that simply building and putting out services -- whether they're on the local network or in the cloud, or consuming services from the cloud -- doesn't provide the benefit, unless there's some control. As people always say, nobody really wants to be ungoverned, but nobody wants to have a government. The thing that prevents freedom from descending into chaos is governance.

I can list the top five reasons why that is. You want the benefit of loose coupling. That is, you want the benefit of being able to take any service and compose it with any other service without necessarily having to get the service provider involved. That's the whole theory of loose coupling. The consumer and the provider don't have to directly communicate.

But the problem is how to prevent people from combining these services in ways that produce unpredictable or undesirable results. A lot of the governance effort at runtime prevents that unpredictability. So, one: preventing chaos.

Two, there is the design-time side. How do you make sure services are provided in a reliable, predictable way? People want to create services, but just because you can build a service doesn't mean that your service looks like somebody else's service. How do you prevent issues of incompatibility? How do you prevent issues of different levels of compliance?

Of course, the third one is around policy. How do you make sure that the various services comply with the various corporate policies, runtime policies, IT policies, whatever those policies are?

Those are the top three. To add a fourth and a fifth, people are starting to think more and more about governance, because we see the penalty for what happens when IT fails. People don't want to be consuming stuff from the cloud or putting stuff into a cloud and risking the fact that the cloud may not be available or the service of the cloud may not be available. They need to have contingency plans, but IT contingency plans are a form of governance. Those are the top four, and it's a weekend, so I'll take the fifth off.

Gardner: Very good. Now, we go to David Kelly next. David, you've been following the cloud evolution through the lens of business process management (BPM) and business process modeling. I'm interested in your thoughts as to how governance can assist in how organizations can provide a better management and better modeling around processes.

Kelly: Yeah, absolutely. At one level, what we're going to see in cloud computing and governance is a pretty straightforward extension of what you've seen in SOA governance, built bottom-up from the services level. As you said, it gets interesting when you start to up-level it from individual services into the business processes and start talking about how those are going to be deployed in the cloud. That brings me to my first point. One of the key areas where governance is critical for the cloud is ensuring that you're connecting the business goals with those cloud services.

It's like the connection between IT and business in conventional organizations. Now, as those services move out to the cloud, it's the same problem but in a larger perspective, and with the potential for greater disruption. Ron just mentioned that in terms of the IT contingency planning and the risk issues that you need to bring up. So, one issue is connecting the business goals with the cloud services.

Another aspect that's important here is ensuring compliance. We've seen that for years. That's going to be the initial driver that you're going to see in the cloud in terms of compliance for data security, privacy, and those types of things. It's real easy to get your head around, and when you're looking at cloud services that are provided to consumers, that's going to be a critical point.

Can the consumers trust the services that they're interacting with, and can the providers provide some kind of assurance in terms of governance for the data, the processes, and an overall compliance of the services they're delivering?

Then, when you step back and look, the next issue in terms of governance and cloud governance comes down to ensuring consistent change management. You've got a very different environment than most IT organizations are used to. You've got a completely different set of change-management issues, although they are consistent to some extent with what we've seen in SOA and the direction organizations are taking in that area. You need to both maintain the services and make sure they don't cause problems when you're doing change management.

The fourth point is making sure that the governance can increase or help monitor quality of services, both design quality, as Ron mentioned, and runtime quality. That could also include performance.

Dana, when you mentioned some of your examples, most of those are about the performance and availability of these services. So, they're very limited. What we've seen so far is a very limited approach to governance. It's like saying we have Web server governance. You need it. It's there and it's important, but it's such a small slice of the overall solution that we're going to have to see a much broader expansion over the next four or five years.

The last thing, looking at this from a macro perspective, is managing the cloud-computing life cycle. From the definition of the services, through the deployment of the services, to the management of the services, to the performance of the services, to the retirement of the services, it's everything that's going on in the cloud. As those services get aggregated into larger business processes, that's going to require a different set of governance characteristics. So, those are my top five.

Gardner: Joe McKendrick, we've heard from David and Ron. David made an interesting point that we're probably scratching the surface of what's going to be required for a full-blown cloud model to prosper and thrive. We're still looking at this as basically red light-green light, keeping it working, keeping the trains running. We don't necessarily have them on time, on schedule, or carrying a business payload or profit model. So, Joe, I'm interested in your position -- five reasons why governance is important, or what, perhaps, needs to come.

McKendrick: Thanks, Dana. Actually, Ron and David really covered a lot of the ground I was going to cover, and they said it probably a lot better than I would say.

There is an issue that's looming that hasn't really been discussed or addressed yet. That is the role of governance for companies that are consuming the services versus the role of governance for companies that are providing the services.

On some level, companies are going to be both consumers and providers of cloud services. There is the private cloud concept, and we've talked about that quite a bit in these podcasts. SOA is playing a key role here of course.

Companies' IT departments will be the cloud providers internally, and there is a level of governance -- the design-time governance issues that we've been wrestling with in SOA all these years -- that comes into play as providers.

There are going to be some other companies that may be more in a consuming mode. There are other governance issues -- another side of governance -- that they have to tackle, such as service-level agreements (SLAs), which assure the availability of the applications they're receiving from some outside third party. So, the whole topic of governance splits in two here, because there is going to be all this activity going on outside the firewall that needs to be discussed.

Another key element that's coming into play has been wrestled with, discussed, and thrown about during the development of SOA over the past few years.

It's the ability to know what services are available in order to discover and identify the assets to build the application or complete a business process. How will we go about knowing what's out there and knowing what's been vetted and tested for the organization?

The issue of return on investment (ROI) is another hot button, and we need to be able to determine what services and processes are delivering the best ROI. How do we measure that? How do we capture those metrics?

But overall, the key question we've been asking with SOA is how do we get the business involved? How do we move it beyond something that IT is implementing and move it into the business domain? How do we ensure that business people are intimately involved with the process and are identifying their needs? Ultimately, it's all about services. We're seeing businesses evolve in this direction.

A lot of companies are taking on the role of a broker or brokerage. They're picking up services from partners, distributors, and aggregators, and providing those services to specific markets. I call it the "loosely coupled business" concept, and it's all about services -- SOA, Web services, cloud-based services. It's all rolled into one -- Enterprise 2.0. I'll bring that in there too.

So, we're just scratching the surface here.

Preparing to scale

Gardner: Thanks, Joe. I'll be last and will take the position of disadvantage, because I'll be talking a lot about what you've all stated so far, but perhaps with a little different emphasis.

My first reason for governance is that we're going to need to scale beyond what we do with business to employee (B2E). In many cases we've seen SOA and Web services developed in large enterprises first for some B2E and some modest business to consumer (B2C).

For cloud computing, we're going to need to see a greater-scale business-to-business (B2B) cloud ecology and then ultimately B2C at potentially very massive scale. New business models will demand high scale and low margins, so scale becomes important. In order to manage scale, you need to have governance in place. And by the way, that's not only for services, but also for application programming interfaces (APIs).

We're going to need to see governance on API usage, but also on what you're willing to let your APIs be used for -- not just an on/off switch, but a qualitative level of control. Certain types of uses of your APIs would be okay, but certain others might not be, and you might also want to be able to charge for them.
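As a minimal sketch of that idea -- with hypothetical names like `ApiGovernor` and `ApiPolicy`, not any real product's API -- a qualitative usage policy with per-call metering might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ApiPolicy:
    # Which qualitative use types a consumer is permitted, and the per-call price.
    allowed_uses: set = field(default_factory=set)
    price_per_call: float = 0.0

class ApiGovernor:
    """Tracks API consumers, enforces use-type rules, and accrues charges."""

    def __init__(self):
        self.policies = {}  # consumer id -> ApiPolicy
        self.charges = {}   # consumer id -> accrued charges

    def register(self, consumer, policy):
        self.policies[consumer] = policy
        self.charges[consumer] = 0.0

    def authorize(self, consumer, use_type):
        policy = self.policies.get(consumer)
        if policy is None or use_type not in policy.allowed_uses:
            return False  # qualitative rejection, not just an on/off switch
        self.charges[consumer] += policy.price_per_call
        return True

gov = ApiGovernor()
gov.register("acme", ApiPolicy(allowed_uses={"partner-mashup"}, price_per_call=0.02))
print(gov.authorize("acme", "partner-mashup"))  # True: permitted use, billed
print(gov.authorize("acme", "resale"))          # False: use type not permitted
print(round(gov.charges["acme"], 2))            # 0.02
```

The point of the sketch is only that authorization becomes a policy lookup plus a metering side effect, rather than a binary access flag.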

My second point is the need to make this work within the cloud ecology.

So, dynamic partnering -- people coming and going in and out of an ecology of process-delivering cloud services -- means federation. That means open and shared governance mechanisms of some type. Standards and neutrality at some level are going to be essential for this to happen at that scale across a larger group of participants and consumers.

One example of this we've seen at the social-network level is the open social approach to sign-on and authentication. That's just scratching the surface of what's going to be required in terms of an automated approach to provisioning and access control at the services level, which falls back to much more robust and capable governance.

My third reason is that IT is going to need to buy into this. We've heard some talk recently about doing away with IT, going around IT, or doing all of these cloud mechanisms vis-à-vis the line of business folks. I think there is a role for that, and I think it's exploratory at that level.

Ultimately, for an enterprise to be successful with cloud models as a business, they're going to have to take advantage of what they already have in place in IT. They need to make it IT-ready and acceptable, and that means compliance. As we've talked about, that's the ability to have regulatory satisfaction, where that's necessary, and to satisfy the requirements that IT has for how it's going to let its resources, services, and data be used.

IT checklist

IT has, or should have, a checklist of what needs to take place in order for their resources and assets to be used vis-à-vis outside resources or even within the organization across a shared-services environment. IT needs to be satisfied, and governance is going to be super essential for that.

Number four is that the business models that we're just starting to see well up in the marketplace around cloud are also going to require governance in order to do billing, to verify whether a transaction has occurred, and to provision people on and off based on whether they've paid properly or are using the service properly under the conditions of a license or an SLA of some kind. This needs to be done at a very granular level.

We've seen how long it took for telecommunications companies to be able to bill and provision properly across a fairly limited set of voice services. They recognized that their business model was built on the ability to provision a ring tone and charge appropriately for it. If it has a 30-day limit to use, that needs to be enforced. So, governance is going to be essential for making money at cloud types of activities.
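The 30-day ring-tone case reduces to a small, enforceable rule. A sketch -- the `TimeLimitedLicense` class is a hypothetical illustration, not any carrier's system -- could look like:

```python
from datetime import datetime, timedelta

class TimeLimitedLicense:
    """A license that expires after a fixed term, such as a 30-day ring-tone license."""

    def __init__(self, granted_at, days_valid=30):
        self.granted_at = granted_at
        self.expires_at = granted_at + timedelta(days=days_valid)

    def permits(self, now):
        # Access (and billing) should be deprovisioned once the term lapses.
        return now < self.expires_at

lic = TimeLimitedLicense(datetime(2009, 6, 1), days_valid=30)
print(lic.permits(datetime(2009, 6, 15)))  # True: within the 30-day window
print(lic.permits(datetime(2009, 7, 2)))   # False: term has lapsed
```

The governance challenge is not the rule itself but applying checks like this consistently, at granular scale, across every service a provider offers.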

Lastly, cloud-based data is going to be important. We talk about transactions, services, APIs, and applications, but data needs to be shared, not just at a batch level, but at a granular level across multiple partners. To govern the security, provisioning, and protection of data at a granular level falls back once again to governance. So, I come down on the side that governance is monumental and important to advancing cloud, and that we are still quite a ways away from doing that.

Where I'd like to go next with the conversation is to ask where such governance would happen. Is this something that will be internal? Will there be a third party, perhaps the equivalent of a Federal Reserve in the cloud, that would say, "This is currency, this is what the interest rates are, and this is what the standards are?" In a sense, we're talking about cloud computing as almost an abstraction, like we do when we think about an economy or a monetary system.

So, let's take up that question of where would you actually instantiate and enforce governance. Back to Ron Schmelzer at ZapThink.

Schmelzer: It's good that you mentioned all of these things. Governance can't just be a bunch of words on a piece of paper that you hope people will voluntarily follow by themselves. Clearly, we need some ways of enforcing them.

Some of them are automated and some of them are automatable, especially a lot of the runtime governance things you talk about -- enforcing security policies, composition policies, and privacy policies.

There are a lot of those policies that we can enforce. We can enforce them as part of the runtime environment, whether we do that as part of the infrastructure, as part of the messaging, or at the client side. There are a lot of different ways of distributing that enforcement.

The cloud actually complicates things a little bit, because we're not really in control of the cloud infrastructure. So, we don't have full control of how a third-party cloud environment would choose to enforce a runtime policy.

But, there are other kinds of policy. We talked about design-time policy, which is how we govern the way we create services. How do we govern the way we consume them? How do we govern the way we procure those services? There is a certain amount of enforceability, both at an automated level with the design-time tooling we use, and as part of the budgeting, approval, or architectural review process. There are a lot of places where we can enforce that.

Change management

Of course, we have the whole area of change management. It's a huge bugaboo in SOA, and it's going to rear its head in cloud. How do we deal with things versioning and changing -- both the expected changes and the unplanned ones, things becoming available, and things becoming unavailable?

We may have policies to deal with that, but how do we enforce a policy that says, "All of a sudden the geocoding service that you're using for some core process is no longer available. You have to switch to another one." Can you truly automate that, or is there some sort of fallback? What do you do?
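One automatable answer is a governed call wrapper that tries approved providers in policy order. This is a minimal sketch under assumed names (`governed_call`, the geocoder stubs are stand-ins, not real services):

```python
class ServiceUnavailable(Exception):
    """Raised when a cloud service cannot satisfy a request."""

def governed_call(providers, request):
    # Try each approved provider in policy order; fall back when one is down.
    for name, provider in providers:
        try:
            return name, provider(request)
        except ServiceUnavailable:
            continue  # policy says: switch to the next approved service
    raise ServiceUnavailable("no approved provider could satisfy the request")

def primary_geocoder(address):
    raise ServiceUnavailable("primary geocoder is down")

def backup_geocoder(address):
    return {"lat": 40.7, "lon": -74.0}  # canned answer, for illustration only

used, result = governed_call(
    [("primary", primary_geocoder), ("backup", backup_geocoder)], "10 Main St")
print(used)    # backup
print(result)  # {'lat': 40.7, 'lon': -74.0}
```

The harder governance question remains what the policy list contains and who approves it; the mechanism above only enforces whatever ordering that policy dictates.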

Fortunately, one of the great things about cloud is that it's forcing us to stop thinking about integration middleware as a solution to architectural problems, because it has absolutely nothing to do with integration middleware.

We don't even know what's running the cloud. So, when we're thinking about the cloud now, we have to be thinking in terms of the abstract service. What do I do when it's available? What do I do when it's not available? That forces us to think a lot more about governance, quality, and management.

Gardner: Let's go to you Dave Kelly. It seems to me that there is a political angle to this as well, as Ron was saying. There is a need for a trusted, neutral, but authoritative third party. Would I trust my own enterprise, my competitor, or even someone in my supply chain to be dictating the enforcement of governance?

Kelly: Well, I think there is. There is a role for a trusted, neutral, as you said, authoritative third party, but we're not going to see one soon. That's a longer-term evolution. That's just my take. We'll see some kind of alliance evolve over the next couple of years, as providers start to grapple with this and with how they can help ensure some sort of governance and/or compliance in cloud services. As usual in the IT landscape, that will be politicized, at least in terms of the vendors providing services.

We're going to see more of a bottom-up approach to governance. The organizations that are putting services or data out there are going to be ones demanding some type of governance or compliance capabilities. You're going to see this push from the bottom, with some movement from the top, but I don't know that it's going to be all that effective.

Gardner: Joe McKendrick, let me run that by you, but with a hypothetical. We've seen in the past, over the history of business, commerce, and the mercantile environment -- starting perhaps 500-700 years ago with shipping, sailing ships from port to port -- that someone had to step up and become an arbiter. Perhaps it was a customs group, perhaps a large influential company, like the East India Company, but eventually someone stepped in to fill the vacuum of managing a marketplace.

The cloud is essentially a marketplace or many marketplaces. It's very complex compared to just moving tobacco from North America to Europe or back to the East Indies with some other cargo. Nonetheless, it seems to me that the government or governments could step into the middle here and perform this needed third-party authoritative role for governance.

Extracting revenue

Maybe it won't necessarily be providing the services, but providing the framework, the standards, and, at some level, enforcement. In doing so, it will have the ability to extract some sort of revenue, maybe on a transaction basis, maybe on a monetary percentage basis. Lord knows, most governments that we're looking at these days need money, but we also need a cloud economy because it's so much more productive.

I know this is a big question, a big hypothetical, but don't you think that it's possible that this need for governance that we've uncovered will provide an opportunity for a government agency or some sort of a quasi-public entity to step in and derive quite a bit of revenue themselves from it?

McKendrick: Wow! I don't know about that. You mentioned earlier the possibility of a hypothetical Federal Reserve in the cloud, I'm just trying to picture Ben Bernanke or Alan Greenspan taking the reins of our cloud economy and making obtuse statements, and everybody trying to read the tea leaves on what they just said.

I don't know, Dana. I can't see a government agency stepping in to administer or pluck revenue out of the cloud beyond maybe state agencies looking for ways to leverage sales taxes. They already have that underway.

You mentioned marketplaces taking over. I think we're going to see the formation of marketplaces of services. Dave Linthicum isn't on the call with us. He was with StrikeIron for a while, and StrikeIron was a great example from the get-go of how this would be structured.

They formed this private marketplace. Web service providers would provide these services and make them accessible to StrikeIron. They would certify to StrikeIron that the services were tested and viable. StrikeIron also would conduct its own testing and ensure the viability of the services.

Gardner: I believe there's another company in Europe called Zimory that's attempting a similar approach, right?

McKendrick: Exactly. In fact, a company called 3tera just announced this past week that they'll be providing a similar type of marketplace for cloud-based services.

Gardner: So, the need is clearly there, don't you agree?

McKendrick: Absolutely! I think it will be a private-sector initiative. We'll see these marketplaces gel around services. I'm not sure how StrikeIron is doing these days, but the business model was that the providers of the services would receive micropayments every time a service was used by a consumer tapping into the marketplace. It might be just a few pennies per instance, but these things add up. Sooner or later, there's good money to be made for service providers.

Gardner: Ron, do you think that this is strictly a private-sector activity, or can no one private-sector entity be put into the position of the hub in a hub-and-spoke model of cloud commerce? Would anyone be willing to trust one company with such power, or does this really open up an opportunity for more of a public entity of some kind?

Let it evolve

Schmelzer: For now, we need to let this evolve. We're still not quite sure what this means economically. We don't know how long lived this is going to be. We don't know what the implications are entirely. We do trust a lot of private companies.

To a certain extent, Google is one, big unregulated information hub, as it is. There's a lot of kvetching about that, and Google has made some noise about getting into electronic health records. Right now, there's really no regulation. It's like, "Well, let Google spend their money innovating in that area, and if something good comes out of it, maybe the government can learn."

But, the government is a little bit overwhelmed at the moment just trying to keep the basics of "Ye Old 1.0 Brick-and-Mortar Economy" running, and can't get their fingers into the 2.0 and 3.0 stuff that a lot of us in the market don't have entire visibility into. I'm going to plead SOA libertarianism on this one.

McKendrick: The government could play a role of a catalyst. Look at the Internet, the way the Internet evolved from ARPANET.

The government funded the ARPANET and eventually the Internet, funding the universities and the military establishments involved in the network. Eventually, they transitioned it to the private sector. So, they could play a catalyst role.

Gardner: There is a catalyst role, but there is also a long-term role of playing regulator, if you look at how other markets have evolved. Right now, we're looking at the derivatives market that has evolved over the past 10 or 15 years in the financial markets.

Some government agencies are coming and saying, "Listen, this thing blew up in our face. We need now to allow for a regulatory overview with some rules and policies that we can enforce. We're not going to run the market, we're not going to take over the market, but we're going to apply some governance to the market."

McKendrick: Does the government regulate software now? I don't see a lot of government regulation of software -- Oracle or Siebel.

Gardner: We're not talking about software. We're talking about services across a public network.

McKendrick: Right, but the cloud is essentially a delivery mechanism. It's not CDs. It's over-the-wire delivery of software.

Gardner: That's why I argue that it's a market, just as NASDAQ is a market, or the New York Stock Exchange, or a derivatives trading environment. Why wouldn't the government's role apply to this just as it has to those marketplaces? Dave Kelly?

Not at the moment

Kelly: Eventually, it will, but, as you said, the derivatives market went unregulated for a number of years, and the cloud market is certainly not well-defined. It's not a good place for regulation at the moment. Come back in three or four years, and you've got a point to make, but until we get to some point where there is some consistency, standards, and generally accepted business principles, I don't think we're there yet.

Gardner: Should we wait for it to be broken before we try to fix it?

Kelly: That's the typical strategy of government, so yeah. Or we can wait for someone like Microsoft to step in.

Gardner: Would that be amenable to somebody like Amazon and Google?

Kelly: I don't know.

McKendrick: I think we may see an association step in. Maybe we'll see an Open Group or OASIS-type industry association step in and take the lead.

Gardner: I see -- the neutral consortium approach.

Kelly: The neutral ineffective consortium.

Schmelzer: Ooh, this is getting rapidly political. We need this weekend, where is the weekend?

Gardner: But that is the point. This is ultimately going to be a political issue. Even if we come up with the technical means to conduct governance, that doesn't mean that we can have governance be effective in this large, complex marketplace that we envision around cloud.

The only other alternative from a political standpoint is to have one big cloud provider that makes all the rules that everyone has to line up around. I believe on the political side of things that's called fascism. Sometimes, it's worked out, but not very often.

Kelly: Or Colossus: The Forbin Project.

Schmelzer: Utilitarianism is the best form of government, as long as everybody cooperates. But, it's hard having the governments involved. To a certain extent, it's true that governance only works as long as there is trust. If you can't trust the providers, then you're just not going to go for it. The best case in point was when Microsoft introduced Passport [aka Hailstorm]. Remember that?

Microsoft said, "We'll serve as a central point. You don't like logging into all these websites and providing all your personal information. No problem. Store that with us, and we will basically be your trusted intermediary. You log into the Passport system and enter your password into Passport."

Lack of trust

What happened to it? It failed. Why did it fail? Because nobody trusted Microsoft. I think that was really the biggest reason. Technologically it had some issues too, and there were a bunch of other problems with .NET. Also, they were just using Passport as a way of getting their tentacles into all the enterprise software and things. That's neither here nor there, but the biggest reason was, "Why would I want to store all this information with Passport?"

Look at the response to that, this whole Liberty Alliance shindig. I can't say that Liberty Alliance was really that much more successful. What ended up being more successful for single sign-on on the Web was the stuff around OpenID and OpenSocial. That was the social network guys, Facebook and Google, saying, "We're really the people who are in control of this information, and users have already shared this information with us as it is."

Gardner: And what happened was we had a standardized approach to sharing authentication certificates across multiple vendors. That seems to be working fairly well.

Schmelzer: Yeah, without any real intervention. So, I would argue that there is probably a lot more private information in Facebook than people would ever want shared, and there is really no regulation there, but it's pretty well self-regulated at the current moment.

The question is, will all this service cloud stuff go in the direction of what Microsoft tried to do (the single-vendor imposed thing), what Liberty Alliance tried to do (sort of the consortium thing), or the OpenID thing, which is a couple of players that already own a very large portion of the environment realizing that they just need to work together among themselves?

Gardner: In the meantime, because we all seem to agree that there is a great need for this, those individual organizations that create the picks and shovels to support governance, regardless of how it's ultimately enforced or what standards, policies, or rules of engagement are ultimately adopted, probably stand to inherit a very large market.

Does anybody want to take a guess as to the potential market dimensions of a governance picks-and-shovels play -- that is, the underlying technology and services to support governance? Again, we'll start with you, Ron. How big is the market opportunity for those companies that can provide the technical means to conduct governance, even if we don't yet know how it might be overseen?

Schmelzer: I'm very satisfied to see that people are talking about governance as much as they are. This is not a sexy topic at all. I'd much rather be talking about mashups and stuff like that. Given all this interest, the interest in education and training, and what's going on in this market, the market opportunity is significantly growing. It's a little hard to quantify, whether you're quantifying the tools market or the runtime market, or you're quantifying services for setting up governance stuff. I don't think there is enough activity on the services side.

Companies are getting into governance and they think the way to get into governance is to buy a tool or registry or something and put a bunch of repositories together. How do they know what they're doing? I'd argue that 90 percent-plus of the people who are doing governance really don't know how to do governance at all, regardless of whether they have a great tool or not.

It's a big untapped opportunity for companies to get in with some real, world-class governance expertise and best practices and help companies implement those, independent of the tooling that they're using.

Gardner: Dave Kelly, do you agree that the market opportunity is for the methodologies, the professional services, the expertise, as much or more than perhaps say a pure technology sell?

Best practices are critical

Kelly: It's about equal. When you're talking governance, the processes, policies, and best practices are a critical part of it. It's not just about the technology, as it is in some other cases. It's really about how you're applying the policies and principles, both at the IT level and the business level, that are going to form your combined governance and compliance strategy. So, there is definitely a role for that.

At the same time, you're going to see an extension of the existing governance and technology solutions and perhaps some new ones to deal with -- as you said, the scalability, virtualization aspects, and perhaps even geopolitical aspects. As the services and clouds get dispersed around the world, you may have new aspects to deal with in terms of governance that we haven't really confronted yet.

There will probably be a combination of market sizes. I'm not going to put a number on it. It's going to be larger than the existing governance market -- probably by 10, 15, or 20 percent, I'd say.

Gardner: Joe McKendrick, let's perhaps try a different way of quantifying the market opportunity. On a scale of 1-10, with 1 being lunch money and 10 being a trillion-dollar market, what's your rough estimate of where this governance market might fall?

McKendrick: Let's put it this way. Without Excel or spreadsheets, probably 1 or 2. If you count Excel and spreadsheet sales, it's probably 7 or 8. Most governance efforts are very informal and involve plotting things on spreadsheets and passing them around, maybe in Word documents.

Gardner: That's not going to scale in the cloud. That can't even scale at a department level.

McKendrick: I know, but that's how companies do it.

Gardner: That's why they need a third-party entity to step in.

McKendrick: That's the prime governance tool that's out there these days.

Gardner: I'm going to say that it's probably closer to a 4 or 5. That's because the marketplace in the cloud can very swiftly become a significant portion of our general economy. I think that the cloud economy can actually start becoming an adjunct to the general economy that we know in terms of business, commerce, consumer, retail, and so forth.

If that's the case, there's going to be an awful lot of money moving around, and governance will be essential. Just as with the credit card companies, some sort of entity or process will emerge around that, and the government will probably find a way of getting a piece of it, as they usually have in the past.

The opportunity here is almost commensurate with the need. There is a huge need for governance and therefore the market opportunity is great, but that's just my two cents.

Well, thanks. We've had a great discussion about governance -- some of the reasons it's necessary, and where the market is going to need to go in order for cloud computing to reach the vision that so many people are fond of these days. We're certainly going to be talking about governance a lot more.

I want to thank our panelists for today's input. We've been joined by David A. Kelly, president of Upside Research. Thanks, Dave.

Kelly: You're welcome. It was fun.

Gardner: Ron Schmelzer, senior analyst at ZapThink. Always a pleasure, Ron.

Schmelzer: Thank you, and one leg out the door to this vacation.

Gardner: And Joe McKendrick, independent analyst and ZDNet blogger. Thanks for your input as always, Joe.

McKendrick: Thanks for having me on, Dana. It was a lot of fun.

Gardner: I also want to thank the sponsors for this BriefingsDirect Analyst Insights Edition Podcast Series, and that would be Active Endpoints and TIBCO Software.

This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 42, on the need for governance as more enterprises look toward cloud computing and services from inside and outside the firewall. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, June 02, 2009

Mainframes Provide Fast-Track Access to Private Cloud Benefits for Enterprises, Process Ecosystems

Transcript of a BriefingsDirect podcast on the role and benefits of mainframes and their position as private cloud infrastructure in today's efficiency-minded enterprises.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on how mainframes can help enterprises reach cloud-computing benefits faster.

We'll be looking at what defines cloud computing, with an emphasis on private clouds -- those computing models that enterprises can control on-premises, but that also provide cloud-like efficiency at lower cost, with a heightened ability to deliver services that support agile business processes.

We'll examine how new developments in mainframe automation and support allow mainframes to deliver cloud-computing advantages and to solve some of the more contemporary computing challenges.

To help us understand how the mainframe is the cloud, we're joined by Chris O'Malley, executive vice president and general manager for CA's Mainframe Business Unit. Welcome to the show, Chris.

[UPDATE: CA's purchase today of some assets of Cassatt bolsters the role of mainframes and CA's management capabilities as foundations for private cloud efficiencies.]

Chris O'Malley: Dana, thank you very much. I'm glad to be here.

Gardner: Chris, we've heard a tremendous amount about cloud computing and there's a buzz around this whole topic. From your perspective, what makes cloud so appealing and feasible right now?

O'Malley: Cloud as a concept is, in its most basic sense, virtualizing resources within the data center to gain that scale of efficiency and optimization you just discussed. It's a big topic of discussion right now, especially given the recession we're sitting in.

It's very visible physically that there are many, many servers that support the ongoing operations of the business. CFOs and CEOs are starting to ask simple, but insightful, questions about why we need all these servers and to what degree are these servers being utilized.

When they get answers back and it's something like 15, 10, or 5 percent utilization, it begs for a solution: bringing virtualization at a scale that optimizes the overall data center, as has been done on the mainframe for years and years.
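[A brief editorial illustration of the consolidation arithmetic behind this point. The numbers and the function name are invented for illustration; nothing here comes from CA's products or the conversation itself. It is simply a sketch of why servers running at 5 to 15 percent utilization invite consolidation onto a few highly utilized, virtualized hosts.]

```python
import math

def servers_needed_after_consolidation(n_servers, avg_utilization, target_utilization=0.90):
    """Estimate how many shared, virtualized hosts cover the same aggregate load.

    avg_utilization: current average utilization of each standalone server.
    target_utilization: utilization a shared, mainframe-like host can sustain.
    """
    # Total work expressed in "whole server" units.
    aggregate_load = n_servers * avg_utilization
    # Round up: you can't run a fraction of a host.
    return math.ceil(aggregate_load / target_utilization)

if __name__ == "__main__":
    # 40 servers idling at 10 percent collapse onto 5 well-utilized hosts.
    print(servers_needed_after_consolidation(40, 0.10))  # -> 5
```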

We're now seeing the availability of the technology -- VMware is an example -- to start to create almost mainframe-like environments on the distributed side. So, it's both the need from a business standpoint of trying to respond to reduced cost of computing and increased efficiency at a time when the technologies are becoming increasingly available to customers to manage distributed environments or open systems in a way similar to the mainframe.

Gardner: I suppose there's also an issue around integration. When people talk about cloud computing, we hear them refer to it as an application-development or platform-as-a-service (PaaS) affair. We also hear software as a service (SaaS) or just great delivery of the applications. Then, there's this notion of infrastructure fabric or infrastructure as a service (IaaS).

But, to relate and manage all of those things is something we haven't yet seen in this whole cloud market. I imagine that at a private level, if you were to use mainframe and associated technologies, you might start to see some of those integration points among these different levels or aspects of cloud computing.

O'Malley: You're right. It's a maturity curve that we're going through, and it's very likely that larger customers are using their mainframe in a highly virtualized way. They've been doing it for 30 years. It was the genesis of the platform. It's a fixed asset that was very expensive way back, or at least relatively expensive, so they try to get as much out of it as they possibly can. So, from its beginning, it was virtualized.

You see the same big customers, though, having application needs outside of what they've done themselves. What customer relationship management (CRM) and salesforce.com have done creates a duality: the mainframe acting as a cloud, and SaaS supporting how they work their markets. It's very important that those things start to become integrated. CRM obviously feeds into things like order entry, so tying those efforts together matters.

As you go through this maturity cycle, there is always a level of effort to integrate these things. The viability of things like salesforce.com, CRM, and the need to coordinate that data with what for most customers is 80 percent of their mission-critical information residing on the mainframe is making people figure out how to fix those problems. It's making this cloud slowly, but pragmatically, come true and become a reality in helping to better support their businesses.

Gardner: So, that would lead, at some point, to a cloud of clouds and hybrid models. We've been worried about integration vertically and now horizontally. I suppose we'll have to start worrying about it across organizational boundaries as well.

Barriers to adoption

O'Malley: Absolutely. There are other barriers that exist as well. The distributed environment and the open-system environment, in terms of its genesis, was the reverse of what I described in the mainframe. The mainframe, at some point, I think in the early '90s, was considered to be too slow to evolve to meet the needs of business. You heard things like mounting backlog and that innovation wasn't coming to play.

In that frustration, departments wanted their server with their application to serve their needs. It created a significant base of islands, if you will, within the enterprise that led to these scenarios where people are running servers at 15, 10, or 5 percent utilization. That genesis has been the basic fiber of the way people think in most of these organizations.

It's not just the technical barriers and the complexity of it. It's a cultural shift of an acceptance by players across the business. They all start to use a shared commodity in fulfilling their needs, and the recession helps that. Good CEOs and good CFOs never let a recession go to waste. They explain to their executive management, "We need a greater level of efficiency. We need to transform our thinking, so that we can start to take advantage of these technologies, decrease our overall cost, and increase our ability to serve our market."

They are not just technical issues. There is also people's disposition about the way IT should be run. That has to change as well.

Gardner: I suppose we've gone along with the pendulum swing, from centralized, to decentralized, and now we're coming back. I've spoken to a number of people that say the shortcomings of distributed computing are, in fact, the set of requirements for cloud computing. Do you agree with that?

O'Malley: I absolutely do. This 15 or 10 percent utilization is what we consistently see, customer after customer after customer. Recently, I was with an international customer. They took me on a data center tour, and one of the first things I see is an air conditioning unit the size of a school bus. I see walls that are three-and-a-half feet thick, poured concrete. I see cabling that looks like it weighs tons and football fields of floor space. In the midst of the tour, somebody tells me, "Here is a blade server that cost us next to nothing."

The difficulty in bringing and using these things in an efficient fashion, the cost of all those moving parts, and everything that has to be managed as a single thing, rather than in a virtualized form, has caused a scale of waste that you cannot hide.

Time and time again, I hear that there is not a CEO or a CFO interested in adding yet another square foot of data-center floor space, or in adding people to manage the environment at a scale equal to the increasing capacity. They should be getting economies of scale and are just not seeing it.

You're seeing the pendulum come back. This is just getting too expensive, too complex, and too hard to keep up with business demands, which sounds a lot like what people's objections were about the mainframe 20 years ago. We're now seeing that maybe a centralized model is a better way to serve our needs.

Gardner: A lot of what attracts people to the cloud model -- because it is still rather amorphous, and not well-defined -- is this notion of elasticity. That's both, as you say, to help on utilization when it's low, but also to allow for the spikes to be managed externally or to take workloads and apply them across multiple machines in the case of a private cloud.

O'Malley: Exactly.

Gardner: How do you see this attraction to elasticity of compute resources and infrastructure? How does that relate to where the modern mainframe is?

On-demand engine

O'Malley: The modern mainframe is effectively an on-demand engine. IBM has now created an infrastructure where, as your needs grow, you can turn on additional engines that are already housed in the box. With peak processing in December around the retail uptake -- it will happen again here in the not too distant future -- or at quarter end for most organizations, they have the capacity to turn engines on and off and be charged, effectively, like a utility.

With the z10, IBM has a platform that is effectively an in-house utility and, obviously, outsourcers offer that option in a purer fashion. This is not the mainframe your grandpa bought in 1976. It had always been a strong platform in terms of being able to drive high degrees of utilization. You don't see a bad mainframe customer. They're all at 95 percent throughput on those processors.

Now, with the z10 and the ability to expand capacity on demand, it's very attractive for customers to handle these peaks, but not pay for it all year long. So, that's strength. Obviously, with companies like Salesforce.com, that's an option on the distributed side as well. You're paying for only that which you need at a given moment.
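[A brief editorial sketch of the utility-style billing being described: pay for baseline engines all year, and for extra engines only while they are switched on. All rates, engine counts, and month counts below are made-up assumptions for illustration, not actual z10 pricing.]

```python
def annual_cost_on_demand(baseline_engines, peak_extra_engines, peak_months,
                          monthly_rate_per_engine):
    """Cost when extra engines are billed only in the months they are active."""
    baseline = baseline_engines * 12 * monthly_rate_per_engine
    peak = peak_extra_engines * peak_months * monthly_rate_per_engine
    return baseline + peak

def annual_cost_fixed(total_engines, monthly_rate_per_engine):
    """Cost when you provision for the peak and pay for it all year long."""
    return total_engines * 12 * monthly_rate_per_engine

if __name__ == "__main__":
    # 4 baseline engines; 2 more turned on for December and quarter ends
    # (4 months total), at a hypothetical $10,000 per engine-month.
    print(annual_cost_on_demand(4, 2, 4, 10_000))  # -> 560000
    print(annual_cost_fixed(6, 10_000))            # -> 720000
```

Under these assumed numbers, paying only for peak capacity when it is used saves $160,000 a year versus provisioning for the peak year-round.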

Gardner: Another issue that I've encountered in exploring these cloud issues is a common idea that this is for commodity-level services -- email, maybe some business applications, sales-force automation, CRM, for example. But, those peaks and troughs are also something that affect mission-critical applications, particularly if they're batch or something to be done at a certain frequency.

How do you take advantage of the compute capacity, when you're in between those frequencies and those batches? Do you see cloud computing as something that is destined for commodity-level IT, or is this something that also makes a great deal of sense for the most mission-critical types of transactions and applications?

O'Malley: As it specifically relates to the mainframe, it absolutely does. If you're a manufacturer, the mainframe has always been the home for your logistics. It's a core process to the organization.

If you're a bank, the ATMs, the DDL, all of that stuff tends to be mainframe apps. You're right. There's a strong variability in the types of processing that is, in fact, being done. The hardware allows you the capacity to handle those things and reduce your consumption in a way that affects your cost.

Gardner: It's the virtualization, management, and governance of what's going on with the infrastructure that's the genesis of this elasticity. I think what you're describing is a value-add on top of the platform.

O'Malley: Absolutely. The mainframe has always been very good at resilience and security. The attributes required of a mission-critical application are basically what make your brand. So, the mainframe has always been the home for those kinds of things. It will continue to be. We're just making the economics better over time. The attributes that are professed or promised for the cloud on the distributed side are being realized today by many mainframe customers, who are doing great work. It's not just a hope or a promise.

Gardner: There is some disconnect, though, cultural and even generational. A lot of the younger folks, brought up with the Web, think of cloud applications as being Web applications, built with scripting languages, perhaps delivered with rich interfaces, but primarily Web applications.

But, there's nothing to say that a Web application, a client-server application, a virtualized application, or even a virtualized desktop -- referred to as virtualized desktop infrastructure (VDI) -- can't find a place on a mainframe that supports different applications and different platforms beneath those applications.

Moving away from green screen

O'Malley: Correct. As an example, Linux runs on the mainframe. Just to take what you're saying a little bit deeper and state the obvious, one of the knocks on the mainframe is that it's the home of green screens. It was put to me recently by a customer that it's like showing garlic to a vampire. They just don't see that as the answer to the future, and it's not driving them to want to work on a platform that looks like it came out of 2001: A Space Odyssey or something.

Despite all these good things that I've said about the mainframe, there are still some nagging issues. The people who tend to work on them tend to be the same ones who worked on them 30 years ago. The technology that wraps it hasn't been updated to the more intuitive interfaces that you're talking about.

So, CA is taking a lead in re-engineering our toolset to look more like a Mac than like a green screen. We have a brand new strategy called Mainframe 2.0, which we introduced at CA World last year. We're showing initial deliverables of that technology here in May. The first thing that we're coming out with is a common service that looks in every way like InstallShield for the mainframe.

If you were to walk up to a 22-year-old system programmer and look over their shoulder, there's no way that you'd see any difference between what they were working on and what somebody may be working on on the open-systems side.

So, you're right that the mainframe technologically can do a lot, if not everything you can do on the distributed side, especially with what z/Linux offers. But, we've got to take what is a trillion dollars of investment that runs in the legacy VOS environment and bring that up to 2009 and beyond. CA, through our strategy of Mainframe 2.0, is in fact making that happen relative to the usage of our technology, but ultimately in terms of how the day-to-day workers interact with the mainframe and having it look, we believe, even more productive than what they're accustomed to on a distributed platform.

Gardner: It sounds as if we're really dealing with semantics as it addresses infrastructure. If you have a person who's been in the business for several decades and has some experience and you want to reassure them, you could say, "Well, it's running on the mainframe," and they'll probably feel good about that. For somebody a little bit younger, you might say, "Well, it's running on the private cloud." It's really the same thing.

O'Malley: Absolutely. I listened to a VMware presentation the other day, and they were, I think, speaking with ADP. They described the cloud, and at the end of it, they said, "We've had a cloud for 40 years. It's called 'the mainframe.'" But, you're right. It becomes semantics at that point. People will think differently. The mainframe has an image that will be altered dramatically with what we're doing with Mainframe 2.0.

It has its virtues, but it has its limits. The open system has its virtues and has its limits. We're raising the abstraction to the point where, in a collective cloud, you're just going to use what's best and right for the nature of work you're doing, without really even knowing whether it's a mainframe application -- either in z/OS or z/Linux -- or Linux on the open-system side, or HP-UX. That's where things are going. At that point, the cloud becomes true to the promise being touted at the moment.

Gardner: What about this? Going back to the issue of integration, if there has been this long-term ability to manage virtualized instances on the mainframe, eventually, as we get into this cloud-of-clouds and hybrid-model future scenario, the buck must stop some place.

There's going to need to be one throat to choke somewhere, even if the services are emanating from a variety of sources. Is it a stretch to think that your on-premises mainframe that's being used as a cloud would also become a hub, rather than a spoke, in terms of how you would govern, manage, and integrate across multiple cloud types of implementation?

Benefits of centralization

O'Malley: One of the aspects that's wonderful about the mainframe is that the scale of discipline allows a very few people to manage a very large environment. That's been developed over 40 years and really is the benefit of this centralized model.

Increasingly, we're seeing customers come to the conclusion that there are certain things -- security and storage management for example -- that have been perfected in terms of their optimization and efficiency on the mainframe.

You're right. They're thinking of how to take certain disciplines that would probably be best done by the hub, the mainframe, to manage the overall environment. That's definitely what we're thinking about from a strategy perspective. Security and storage management are two strong examples of disciplines that could be run that way throughout the data center.

Gardner: We've discussed some of the issues around expense and the economics around utilization, control, and lower risk with governance and security. We've also addressed the perception, the gap, if you will, on culture and age -- "my grandfather's mainframe" and that sort of thing.

But, there's also this nagging concern in the market around skills, and whether the mainframe needs to be sunset because of a lack of support, or whether it's going to become, as we just described, the hub for the future. What is it that you bring to your clients in order to ameliorate their concerns around this skills issue?

O'Malley: There are two dimensions to it. One, we have to transform the technology, because we can't be naive. There is an 18-year-old man or woman out there someplace who's about to get into college.

They're going to have to see a renewed mainframe that is more like what they've been accustomed to, if we're going to have them invest a college career to develop their skills and pursue a career in the mainframe space.

They're used to intuitive interfaces that they don't need a manual for and that they can dig into. They eventually get into the depths of it, but they need a nice entry point into it. They need something that, through just their generalized knowledge, they can get into. A green screen is the opposite of that. It's a heavy-lifting exercise in the front end.

To be very honest, it's very important that we bring a cool factor to the mainframe to make it a platform that's equally compelling to any other. When you do that, you create some interesting dynamics to getting the next generation excited about it. One is that there's a vacuum of talent in that space. So, you've got a career escalator within mainframe that is just not available to you on the distributed side, and we're trying to set the example.

Our first technology within Mainframe 2.0, which I talked about, is called the Mainframe Software Manager. It's effectively InstallShield for the mainframe. We developed that with 20-somethings. In our Prague data center, we recruited 120 students out of school and they developed that in Java on a mainframe.

We're trying to set the example for what you can do in terms of bringing in college students, making them effective, and having them do new and creative things on a platform that, at least in recent history, they hadn't seen a lot of. Customers can gain confidence from the combination of CA redressing the platform and our showing a formula to bring in college students, rapidly make them effective, and have them actually deliver technology that changes the way this platform is managed forever. It changes a lot of people's thinking and gives confidence to our customers and management.

We're also going on the road. I'm speaking at many universities, talking to existing computer science students, as well as high school students who plan to go to those universities. I'm talking about making the mainframe a friendly platform for them, if you will, and about the career opportunities it offers them.

Just to give you a sense of amazement, we have 25-year-old people in Prague who have written code that, within the next 12 months, will be running at the top 1,000 companies on the face of the earth. There aren't a lot of jobs in life that present you that kind of opportunity. But, we've got to get those two dimensions right. We've got to show that the platform is friendly and that we have a formula to bring new college students in and make them effective, and then get the word out there, so that more and more students look at this as a career option.

Gardner: I'm just curious. When you speak to high school and college students, are there any particular skill sets that put them onto the right track for what they need for mainframes, or is it just mainstream computer science?

A need for urgency

O'Malley: It's mainstream computer science, but there's a need for a level of urgency to get things done. The product that we're coming out with in May, Mainframe Software Manager, was written from beginning to end in less than 12 months. One of the things that this project taught us was the capacity of these students to come out and connect with customers. There had been some atrophy in our capacity to communicate -- to understand customer needs, what the issues are -- and then to apply new paradigms.

Have no fear. We need almost a level of innocence, looking at things in a far different way, which the students can bring, and then working very hard in a systematic way, in conjunction with having transparency with customers, to never make a mistake. We can't go down a cul-de-sac with these kinds of activities. Developing the communication skills, the technical skills, and the discipline to master what I've just described -- those are the big things that we're looking for.

I'll be honest with you. With this younger crowd, there's a lot they don't know, but there is a new dimension that they bring and a level of innovation and creativity that we didn't have without them.

Gardner: They're not intimidated easily, right?

O'Malley: They're not intimidated, and they look at things differently. What others may say can never be done, shouldn't be done, or isn't necessary, they say, "That ain't right." A month later, they're doing something that almost creates shock and awe from customers. It's a wonderful thing for me to be part of and to witness.

Gardner: Let's look at some examples, if you have any, of organizations that have heard the cloud model's attributes, requirements, and benefits, wanted to get there quickly, and probably had some things in place. Are there examples of taking the mainframe model and elevating it to the cloud model in terms of how it's being utilized? Are there metrics of success as to how that works?

O'Malley: For a long time, the higher-end mainframe customers have aggressively used their big iron to do things in the way you've described. What's more interesting is that, recently, we're seeing smaller customers start to look at cloud -- more specifically, virtualization -- being pushed to the mainframe in unconventional ways.

We have an insurance company up in Minneapolis that ran SAP, which is a financial system that competes against Oracle, and they elected initially to run it in client-server fashion. They ran the database server under DB2 on their z/OS. They ran the application server on an Intel platform. They got to a point where they required an upgrade to that application.

Usually customers follow conventional wisdom. They do what they always did. They upgrade their hardware in place and they leave the application as it was. In this case, this company has a charter to sell insurance only in the state of Minnesota. As a result of that, when Target stores let people go because of the recession, it's not like they can go to Wisconsin and sell somebody else insurance to increase their overall revenue. Cost efficiency, cost per member, is not just an IT issue. It’s a CEO issue.

So, rather than just upgrading this application with all they have, they said, "Let's pause and take a hard look at this environment. Let's look at options and see if there are better things we could be doing to serve the business."

Ultimately, they decided on bringing the application server up to z/Linux, encasing all of SAP in a single server and effectively creating an internal cloud for SAP to handle the scalability requirements and drive down cost.

Some interesting things happen when you bring it up to the mainframe. There's no physical network at that point. It's all HiperSockets. So, it has drastically reduced the cost from a networking standpoint.

As you talked about earlier, z/OS effectively becomes a hub to the effort of management. The few people who did system programmer type function on the mainframe could now do it for what is a consolidated distributed environment, where they brought up 40 servers to the mainframe.

The thing that's also interesting is that, because of the maturity of virtualization on the mainframe, you can not only share the environment across the 40 SAP servers, but you can also share it with Web services and other applications. This is much, much more difficult to do on the distributed side with things like VMware.

Now, they've gotten nearly all of their distributed environment up to the mainframe. On that platform, things like disaster recovery, where it was extremely difficult to bring up the environment when they did their testing, now come up in 90 minutes. In fact, it takes half an hour to bring it up and an hour of certification and validation, and they're up and running.

They've seen effectively half the cost, with a greater level of security, resilience, and all the things that the mainframe offers. You saw things like that in the big banks and the big insurance companies that had the capacity, the people, and the skills to do it.

You seldom saw that on the smaller end. But, given the recession, the maturity of the platform, the innovation that's been brought to the mainframe, all the enhancements that have taken place over the last eight years, and the efforts that CA is making, people are looking at it differently. That is, I think, a perfect example of a cloud up and running and making a massive difference in supporting an organization's charter, which is to serve its customers at the lowest possible cost.

Gardner: I should think that that's not only going to pay back in the short term, but will improve over time as they need to do patches, administration, and upgrades. They'll have a smaller, or perhaps even singular, application set to apply those to, getting the benefits of what a SaaS provider can do, but now brought downstream to a smaller company that can deliver its own on-demand model.

O'Malley: Absolutely. The evil in IT is moving parts and too many of them. The more that you can reduce change and reduce the need to manage change, the more you're going to reduce your overall cost.

The recession eventually will end, and you're right. The people who have taken these steps to drive efficiency are going to be in a much better competitive position when we come out of this recession, not only to grow at the rate their customers do, but to do it in a more cost-effective fashion than their competitors.

Gardner: Well, we've covered a lot of territory in terms of understanding some of the issues, the attractiveness of cloud. We've talked about the fact that it's still immature, but that there are a number of elements in the requirements list for cloud that are in place and simply need to be applied. We've discussed some of the issues around age, expense, and skill sets that are being addressed.

I want to thank our guest today, Chris O'Malley. He is the executive vice president and general manager for CA's Mainframe Business Unit. I appreciate your time, Chris.

O'Malley: Dana, thank you very much.

Gardner: We've been learning about how mainframes can help enterprises reach cloud benefits faster, and how in many respects the mainframe is already the cloud. I want to thank the sponsor for this discussion, CA, for their underwriting of its production. This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: CA.

Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.