Monday, November 10, 2008

Solving IT Energy Use Issues Requires Holistic Approach to Efficiency Planning and Management

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on the critical and global problem of energy management for IT operations and data centers.

We will take a look at energy demand, supply, costs, and ways to develop a complete management perspective across the entire IT energy equation. The goal is to find innovative means to conservation so that existing facilities don't need to be expanded or replaced.

Good energy management is not as simple as just less hardware or better cooling, but it really requires an enterprise-by-enterprise examination of the "many sins" of energy and resources misuse.

In order to put into practice longer-term benefits, behaviors, and measurements, the whole picture needs to be taken into consideration. The goal, of course, is to promote a low-risk matching of energy supply and cost with the lowest IT energy demand possible.

To help us examine these important topics we’re joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group. Welcome to our podcast, Ian.

Ian Jagger: Thank you, happy to be here.

Gardner: We’re also joined by Andrew Fisher. He is the manager of technology strategy in the Industry Standard Services group at HP. Welcome, Andrew.

Andrew Fisher: Thank you, very much.

Gardner: Let's take a look first at the broad picture of larger trends in this whole energy equation. As I say, it’s not simple. There are a lot of moving parts, and there are a lot of megatrends and external factors involved as well.

I suppose the first thing to look at is capacity. I’d like to direct this to Ian. How critical is the situation now where large enterprises with vast data centers are actually facing an energy crisis?

Jagger: I think it's quite critical, Dana. Data centers typically were not designed for the computing loads available to us today, and they have been caught out. Enterprise customers have to consider strategically what to do with their facilities and their ability to bring in enough power to supply the future capacity needs coming from their IT infrastructure.

Gardner: Now, at the most general level, is this a case where there is not enough electricity available or that the growth and demand of electricity is just growing so quickly, or both?

Jagger: I think it's both, and there is also a third level, which is how adequate is the cooling design within the data center itself. So, it is a question of how much power is available, of how much can be drawn into the data center, what is the capacity of the data center, and as I said, how that is cooled.

Gardner: We are also, of course, involving green concerns. There are issues around carbon and pollution, and mandates around these issues. We are also faced with regulatory issues and compliance that are of a separate nature, and many organizations are behaving more like service bureaus, where they have service level agreements.

So there is not too much wiggle room in terms of what needs to be adhered to from compliance and/or service levels. What are the variables that companies need to first start focusing on in order to better execute their management of energy?

Fisher: That's a good question. One of the most important things to understand is how they have allocated power within that data center. There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.

Gardner: This does vary from region to region, and HP being a global company, perhaps we should also take a look at the fact that in the United States, for example, there are limitations from the grid. The capacity of moving energy, even if it can be generated, is an issue, and in the U.K., apparently in the London area at least, there’s been somewhat of a lockdown in terms of use restrictions around the Olympics.

Ian, perhaps you could fill us in a little bit on some of the regional impacts and how this is supercritical perhaps in some areas more than others.

Jagger: I think you have just got it with the example you have used. It does vary region to region, depending on the capacity of the grid, the ability to distribute it along the grid and how that impacts customers geographically. It's not just about power distribution and generation, but it's also about the nascent situation with respect to compliance.

In Europe, we are now seeing countries, particularly the U.K., take the lead in terms of carbon reduction. Legislation is coming online, kicking in from 2010, with compliance requirements from 2009, under which the top 5,000 or so companies -- those that use a given volume or value of energy -- have to justify that usage by purchasing carbon credits, which are set against them.

Each of those companies -- and this includes HP U.K. -- needs to establish what its energy usage is and show a roadmap for how it can reduce that year over year toward the legislation that's in play there. It's only a matter of time before that's applied in the U.S. too.

Gardner: Now, we recognize that this is a large problem. Many components -- I have heard the phrase “many sins” -- are involved. I wonder if either of you, or perhaps both, could fill us in a little bit about what are the types of past behaviors, approaches, mentalities, and philosophies about energy that need to be reexamined in order to get closer to where we need to go.

Jagger: I think the contrast between the silos of facilities and real estate on one side and IT on the other is rooted in the contradiction between cost and availability. You mentioned service levels earlier. From an IT perspective, that’s service level agreements to the business in terms of availability, the uptime of equipment. But, from the real estate perspective, the facility perspective, it's about cost control and CAPEX and OPEX with respect to the facility itself.

They have tended to operate in independent silos, but now the general problem we have, which is overriding both of those departments, is the cost of energy. Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Gardner: How about it, Andy? What sort of sins unfortunately have people overlooked as a result of lower energy cost in the past, but that really can't be overlooked now?

Fisher: First of all, it's a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.

To complicate it even further, there are a lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve. So, there is rarely a single silver bullet to solve this complex problem.

You need to take a complete end-to-end approach that involves everything from analysis of your operational processes and behavioral issues -- how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things -- to optimizing the efficiency of the power delivery and making sure that you are getting the best performance per watt out of your IT equipment itself. Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs.

One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money. So we have at HP a wide range of capabilities, including our EYP Mission Critical Facilities Services to help you analyze those operational issues as well as structural ones, and make recommendations, in addition to products that are more efficient as well.

Gardner: You raise a couple of interesting points. It's hard to fix something that you can't measure. What are the basic measurement guidelines for energy use?

I have heard of Data Center Infrastructure Efficiency (DCiE). There is also Power Usage Effectiveness (PUE). How does a large organization start to get a handle on this? As has been mentioned, it was a siloed problem in the past; now it needs to be tackled head-on.

Jagger: You have touched on the principal benchmarks used across the industry there -- the PUE and the infrastructure efficiency ratio, which is the inverse of the PUE. Put very simply, the PUE is the total power coming into the data center divided by the amount of power required for computing purposes. In other words, how efficiently does the data center deliver the overall power required for computing?

In other words, if you need one kilowatt for computing, and your PUE is two-and-a-half, then you need to be bringing 2.5 kilowatts to the wall to be able to run those computers.
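Jagger's arithmetic can be sketched in a few lines of Python; the one-kilowatt IT load is simply the illustrative figure used above:

```python
# Illustrative figures only, matching the example in the discussion.
it_load_kw = 1.0   # power drawn by the computing equipment itself
pue = 2.5          # PUE = total facility power / IT equipment power

total_facility_kw = it_load_kw * pue           # power brought "to the wall"
overhead_kw = total_facility_kw - it_load_kw   # cooling, UPS losses, lighting

dcie = 1 / pue     # DCiE is the inverse metric, expressed as a fraction

print(total_facility_kw)   # 2.5
print(overhead_kw)         # 1.5
print(round(dcie, 2))      # 0.4
```

A PUE of 2.5 thus means 1.5 kilowatts of overhead for every kilowatt of useful computing, which is why the same figure can be read either way as PUE or DCiE.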

These metrics are not perfect, and there are industry bodies looking to drive greater precision out of them. For example, PUE is a Green Grid rating that is generally used, but the Green Grid itself is looking to migrate to the inverse ratio, the Data Center Infrastructure Efficiency, and use that going forward as it develops the next level.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.

The best way of saving energy is, of course, to turn the computers off in the first place. Leaving underutilized computers running is no way to save energy.

Gardner: That dovetails, of course, with a number of other initiatives we have underway, such as virtualization, application modernization, winnowing out apps that aren't being used very much. Service-oriented architecture (SOA) encourages reuse and making sure that common services are supported efficiently.

There is also data center unification and modernization of hardware. All these things come together and ultimately increase utilization, which then changes the energy equation.

The question is how do we make these things work in concert? How is there some coordination between getting the right mix on energy along with some of these other initiatives? Why don't we start with Ian on that?

Jagger: They feed off each other. If you look at virtualizing the environment, then the facility design, or the cooling design, for that environment will be different. In a virtualized environment, suddenly you are designing for around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria. You’re using one-and-a-half to three-and-a-half times the wattage in comparison. That, in turn, requires stricter floor management.

But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well. I guess Andy would have some thoughts there.

Fisher: There is a wide range of opportunities. Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So, simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.

Gardner: Once these organizations start hitting the wall on energy, it behooves them to look at some of these other initiatives, rather than just saying, “Wow, we need another data center at 10, 20, maybe 100 million dollars.” Is that more the philosophy here -- be smart not big?

Fisher: Absolutely. There is a substantial opportunity to extend the life of your data center, and I recommend that you give HP a call and talk to us here. We have a wide range of things that we can help with.

Ian can talk to the services here in a second, but from a product perspective, we’re bringing to market new capabilities in terms of efficiency of the platforms to help you reduce that total energy consumption of the IT equipment itself. We’re also working on unique ways of reclaiming existing capacity. Instead of having to build another 50 or 100-million-dollar data center, you can live longer in the data center that you have.

Gardner: I suppose one of the fundamental shifts recently with the cost of energy going up considerably is that the return on investment (ROI) equation shifts as well. If I were selling systems I need to know, given the harsh economic climate, that I have a good ROI investment story -- that if you invest $10, you can save $15 over X amount of time. The energy factor now plays a much larger role in that.

Perhaps, Andy, you could tell us a little bit about how the cost of energy, instead of an afterthought, is now a forethought when it comes to deciding whether these modernization efforts are worthwhile.

Fisher: We look at it both from an OPEX, or your monthly cost of electricity -- and that’s rising rapidly, as the cost of energy goes up -- as well as from a CAPEX perspective, with your investment in your data center.

The first thing is to optimize your CAPEX investment, the money you have already sunk into your data center. You want to make sure that from an investment perspective you don't have to lay out another huge chunk of money to build another data center. So, number one, we want to optimize on the CAPEX side and make sure that you are using what you have most effectively.

But, from an operational cost perspective, it's really about reducing your total energy consumption. You can approach that initially from optimizing the energy use of your IT equipment itself, because that is core to the PUE calculation that we talked about.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.

Otherwise, there are opportunities… We’ve introduced products that help you optimize your cooling, which typically can be up to 50 percent or more of your total energy budget. So by making sure that you fine tune your cooling to meet your actual demand of your IT you can make substantial reductions on your monthly electric bill.

Gardner: Now, how does the Adaptive Infrastructure relate to this as well? It seems that would also be a factor in some of these equations?

Fisher: We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.

Gardner: Let's go back to Ian. It seems that, as with many areas like manufacturing or application development, the history has been that you build it and then you throw it over the wall and someone has to put it into production or build it.

I expect that maybe data centers have had a similar effect when it comes to energy. We set up requirements. We build based on performance requirements. And then, oh, by the way, energy issues come as an afterthought.

Is that true, and is that the outmoded method? Are we now, in a sense, building for energy conservation from the get-go? Has it become more of a city- or town-planner mentality, rather than simply an architect approach? What's the mindset shift that's taking place?

Jagger: That's a good question. I think you have to address it at all the levels you talked about. At the company level or the enterprise level, you are absolutely right. That has been the mentality or the approach, we need a data center, and we base it where we are. Nothing else matters. Base it adjacent to us.

Energy costs or supply have not been a consideration. Now they are. That's on the basis that you don't have any other complexities coming at you. But, if you are just looking at the strategy for your data centers in terms of business growth and your capacity, storage, and availability requirements that you have going forward, and you do the math, you can understand the size of the data center you need and how that works with respect to virtualization strategies and so on.

On top of that, we have the latest complexities, where you simply don't have the forward view on things. In just the last few days we’ve seen, for example, Wells Fargo buying Wachovia. I’m not sure how many data centers are within those two organizations, but you can bet they are in the scores. Suddenly, we have real estate and IT managers who are scratching their heads thinking, “How on earth do we bring all this together?” There are different approaches now being taken at the enterprise level.

At the architects’ level, it would be irresponsible for an architect today not to build energy efficiency into a greenfield building, or any building, not just a data center. It’s pretty much been established that it just makes sense to build energy efficiency into a new building's design, because your operating costs will outweigh the capital expenditure on that building rather quickly.

I’m not sure how a company like HP can influence at the planning level, but where we can influence is at the industry level and at the governmental level. We have experts within the company who sit on think tanks and governance boards. We advise bodies like the EPA. We sit with the leading organizations in energy building design, and discuss how governance with respect to green building design can be built and can be moved forward within the market.

That's how we can start to influence at the industry level in terms of having industry standards created, because if the industry doesn't create them itself, then governmental bodies will do it.

Gardner: It also seems that because it's so difficult to predict all the variables, that a need for modularity has emerged in the data center design, so that the end result can be amended and adjusted without all the other parts being interconnected and brittle. It’s similar to software, where you would want to have modularity in software, so you gain flexibility and it’s not too brittle. Can you explain more deeply how that relates to best energy management practice?

Jagger: The approach that we at HP are now taking is to move toward a new model, which we called the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

What has gone on in the past, and today, is that as an enterprise you may have a requirement for a Tier 4 level of structure with respect to the data center, which puts out 100 watts per square foot, for example. Let’s say, for the sake of argument, that's a 100,000-square-foot data center, but you don't need all of that data center infrastructure at a Tier 4 topology.

If you look at how you’re going to structure your virtualization program, you may only need 50 percent of it at Tier 4 for high density computing, and the rest of it can be at a Tier 2 level.

If that were the situation, you would be saving roughly 25 percent of your capital costs on building that data center. Just doing simple math, if you are looking at 100,000 square feet, that's in the region of $40 to $50 million. So, there are some clear consequences of moving to a hybrid tiered or a modular model.
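The simple math Jagger refers to can be sketched as follows. The per-square-foot build costs here are hypothetical round numbers chosen only to illustrate the shape of the calculation, not HP pricing; what matters is the roughly two-to-one cost gap between tiers:

```python
# All dollar figures are hypothetical illustrations, not actual pricing.
area_sqft = 100_000
tier4_cost_per_sqft = 2_000   # assumed build cost for Tier 4 space
tier2_cost_per_sqft = 1_000   # assumed build cost for Tier 2 space

# Option 1: build the entire floor to Tier 4.
all_tier4 = area_sqft * tier4_cost_per_sqft

# Option 2 (hybrid): only half the floor at Tier 4 for high-density
# computing, the rest at Tier 2.
hybrid = (area_sqft * 0.5 * tier4_cost_per_sqft
          + area_sqft * 0.5 * tier2_cost_per_sqft)

savings = all_tier4 - hybrid
savings_pct = savings / all_tier4 * 100

print(f"${savings / 1e6:.0f}M saved ({savings_pct:.0f}% of capital cost)")
```

Under these assumed costs the hybrid design saves $50 million, or 25 percent of the all-Tier-4 capital outlay, which lands in the $40 to $50 million region cited above.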

Gardner: Are there some examples out there that you can give us? It would be great if you could name some companies, or at least give us use-case scenarios where organizations have adjusted, adopted some of these practices, implemented some of these standards, used common measurement practices, and have resisted having to spend $40 million on CAPEX, but also perhaps utilizing their existing resources even better.

Jagger: I think HP is the biggest example. We are the biggest example of designing modularity into our own data centers.

Beyond HP, you could look at supercomputing centers, high density computing -- the Internet service providers, the Googles of this world, and Microsoft themselves. The companies that require high resilience, high density, and supercomputing typically are moving in this direction. We are pioneering this with our in-house capabilities. We are at the leading edge of this level of innovation.

Gardner: Let's take a look forward a little bit. What can we expect? Obviously, this makes more sense over time. Green issues are going to become more prevalent. Carbon is going to become more regulated. Costs are going to become prohibitive for waste, and the amount of data moving around increases all the time.

Perhaps you can explain the roadmap, the future, some of the concepts around optimizing data centers -- without pre-announcing things, but at least, give us a sense of what's coming.

Fisher: How about if I talk to that one first. One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

In the HP implementation, it's a very simple kind of layout. You just have a single row of 50U racks. I believe there’s something like 22 of them in this 40-foot container. There’s a single hot aisle and a single cold aisle, with overhead cooling that takes the hot exhaust air from the back, cools it, and delivers it to the front.

Using the HP POD you can install any standard equipment into the 19-inch racks and build out a very efficient data center that has a very low PUE or a leading PUE, from a cooling perspective. So that's yet another option in the HP side.

From the product side of HP here, one of the biggest things we’re seeing is that power and cooling capacity is allocated by facilities in a very conservative manner. It's hard to understand exactly how much energy is required for each individual server or blade enclosure. So, there’s typically quite a bit of a conservative reserve that is allocated on top of what's probably actually being consumed.

In fact, if it's in the purview of the facilities team to allocate that power, they would treat it as any piece of electrical equipment and they would just look at what the max power rating or requirement is for the piece of equipment. What we’re seeing is that this can actually overstate the power requirement by up to three times what is actually needed.

So, there’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.
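The scale of that reclaimable reserve is easy to see with some back-of-the-envelope figures. The server counts and wattages below are hypothetical; the three-to-one gap between nameplate and measured draw echoes the ratio Fisher cites:

```python
# Hypothetical figures; the 3x nameplate-to-actual ratio mirrors the
# overstatement described above.
nameplate_watts = 750   # max power rating on the server's label
measured_watts = 250    # what each server actually draws under real load
servers = 400

allocated_kw = servers * nameplate_watts / 1000   # what facilities reserved
actual_kw = servers * measured_watts / 1000       # what is really consumed

reclaimable_kw = allocated_kw - actual_kw
print(allocated_kw, actual_kw, reclaimable_kw)    # 300.0 100.0 200.0
```

In this sketch, two-thirds of the allocated power budget is sitting idle as conservative reserve: capacity that could instead host additional racks of servers.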

Very soon, you’re going to be hearing some exciting news from HP about how we’re going to provide the opportunity for fine-tuned control of exactly how much power the servers in the IT racks are going to actually use.

Gardner: So, not only are we moving toward modularity at a number of levels, we’re bringing more intelligence to bear on the problem?

Fisher: Yes. A key to addressing this problem is to have accurate measurement and the ability to have predictability and control of the actual power consumption of the core IT equipment that the whole infrastructure is supporting.

Gardner: Alright. How about a roadmap, from a strategic point of view, of methodologies and best practices. Ian, what new innovations can we expect along those lines?

Jagger: In all this complexity, it's a relatively simple path to follow. It all starts with discovery -- where are we today? Given what we know about business direction, where do we need to get to? What do we need to be capable of from a business technology perspective that incorporates a facility as a holistic or a hybrid view of those departments combined? What is it that they need to produce to support the business going forward?

Then, you have a gap. The next question is how do we fill that gap, how do we get there? Various strategies can accrue from that, depending on what your needs are.

We would look at that with customers, and we would sit down with them and ask them some pretty basic questions. Do you need to be where you are today? If you are in Phoenix, does the data center need to be in Phoenix, or could it be in Washington state? It’s cooler there, and you therefore don't have the energy costs that you would in Phoenix. So, let's have a look at that.

What is your position from a corporate social-responsibility perspective with respect to the environment? How visible are you in addressing that in comparison to your industry peers? What are the pressures on you to do that? So, let's have a look at alternate energy sources with respect to your data center.

For example, we have just announced our San Diego facility, which is now powered by solar panels. We are involved quite heavily right now in Iceland, providing geothermal technologies for data centers. So, a question there would be, can you be in Iceland? One issue there would be the question of latency. There are several questions that you would ask in terms of direction and how to get there.

Having answered those, you would move into planning and design phases and we would address those at that point too. We would build into the operation of any given new sites, or retrofitted site, the processes with respect to service management across the facility and IT structures. Service management is now not only about IT, but it’s about the facility as well, and how that is brought together in one motion.

So, it's pretty much a simple lifecycle approach within a complex field, and that will get you there. Along the process, we would be able to give the orders of magnitude of cost and typical ROI based on the strategies that you are looking to undertake.

Gardner: It certainly sounds like being efficient and getting this larger management capability over energy and facilities and resources is becoming a core competency and not an option. Is that fair to say?

Jagger: Yes. I think the spin on that is, going back to the example I just used of Wells Fargo and Wachovia, who do you turn to who can help you with that? You don't face that every day of your life, either within facilities or within IT, and you need help. You need to reach out for where the help is.

Traditionally, in our industry, as we have been discussing, it has tended to be siloed into real estate and into IT. What’s now required is the holistic view of infrastructure. I mean the physical infrastructure and the IT infrastructure. Customers need to reach out to firms that they feel comfortable reaching out to.

I think it was Andy who actually conducted this survey -- so correct me if I’m wrong, Andy. We recently undertook a survey of enterprise customers in each of our worldwide regions. The finding was that the more the customers themselves had issues they needed to address with respect to the environment and energy, the more likely they were to come to HP as their vendor of choice.

Fisher: That's correct.

Gardner: Well, clearly if you don't have the holistic view you are going to have to learn how to get one, right?

Fisher: Right.

Gardner: Ian, let me direct this to you. I suppose there is some thought around environmental benefits and green IT, in which people believe that this is an additional cost or an expense. It seems to me, though, from what we have been discussing, that moving towards good environmental practices is actually moving towards good energy management practices too.

Jagger: That's absolutely right. It is not a choice of one or the other. Now, the business outcomes that come from energy management are also environmental outcomes, but there are apparent barriers to implementing environmental solutions, which, as you just said, are actually energy management solutions. Primarily they revolve around the lack of identifiable ROI, or the payback period, around any green improvement, and then the measurement of that improvement itself.

More recently, we’ve been able to show customers the typical examples of how they can move through that environmental curve or that energy management curve going back to the industry standard benchmarks of PUE.

By showing them what a rough order of magnitude cost would be to move them grade by grade through the ranking system of energy efficiency, we show them what that cost would be, what the return would be, as a result of that in terms of carbon savings, in terms of dollar savings, and what the payback period would be based again on those dollar savings.

So, we can have a very strategic, yet tactical, view on how to approach this. A customer can take a larger view in terms of how far they want to go with their environmental approach and balance that with their energy-management approach.

There is obviously a curve here. The larger the investment in improving energy management, then the greater the return. At some point, that return slows down, because of the amounts of actual investment you have put in. So, there is a curve there, and we can show you how to get to any point along that curve.

Gardner: Excellent! We have been discussing the large global problem around energy management and how it has become more critical for IT operations -- energy not as an afterthought, but really the forethought and an overriding stratagem for how to conduct business in IT.

I want to thank our guests today. We have been joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in HP's Technology Solutions Group. Appreciate your input, Ian.

Jagger: You’re very welcome, Dana, I am happy to have taken part.

Gardner: Andrew Fisher, the manager of technology strategy in the Industry Standard Services group at HP. Thank you, Andy.

Fisher: You are welcome.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

For more information on energy-efficiency in the data center, read the whitepaper.

For more information about HP Energy Efficiency Services.

For more information on HP Thermal Logic technology.

For more information on HP Adaptive Infrastructure.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Thursday, November 06, 2008

Implementing ITIL Requires Log Management and Analytics to Help IT Operations Gain Efficiency and Accountability

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on how to run your IT department well by implementing proven standards and methods, and particularly leveraging the Information Technology Infrastructure Library (ITIL) prescriptions and guidelines.

We’ll talk with an expert on ITIL and why it’s making sense for more IT departments and operations around the world. We’ll also look into ways that IT leaders can gain visibility into systems and operations to produce the audit and performance data trail that helps implement and refine such frameworks as ITIL.

We’ll examine the use of systems log management and analytics in the context of ITIL and of managing IT operations with an eye to process efficiency, operational accountability, and systems behaviors, in the sense of knowing a lot about the trains, in order to help keep them running on time and at the lowest possible cost.

To help us understand these trends and findings we are joined by Sudha Iyer. She is the director of product management at LogLogic. Welcome to the show, Sudha.

Sudha Iyer: Thank you.

Gardner: We’re also joined by Sean McClean. He is a principal at KatalystNow in Orlando, Florida. It's a firm that handles mentoring, learning, and training around ITIL and tools used to implement ITIL. Welcome to the show, Sean.

Sean McClean: Thank you very much.

Gardner: Let's start by looking at ITIL in general for those folks who might not be familiar with it. Sean, how are people actually using it and implementing it nowadays?

McClean: ITIL has a long and interesting history. It's a series of concepts that have been around since the 1980s, although a lot of people will dispute exactly when it got started and how. Essentially, it started with the Central Computer and Telecommunications Agency (CCTA) of the British government.

What they were looking to do was create a set of frameworks that could be followed for IT. Throughout ITIL's history, it has been driven by a couple of key concepts. If you look at almost any other business or industry, accounting for example, it’s been around for years. There are certain common practices and principles that everyone agrees upon.

IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT.

In recent times, there has been a lot more focus on that, particularly in two general areas. One, ITIL has had multiple revisions. Initially, it was a drive to handle support and delivery. Now, we are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business. That has kind of been the constant focus of ITIL.

Gardner: So, it's really about maturity of IT as a function that becomes more akin to other major business types of functions or management functions.

McClean: Absolutely. I think it's interesting, because anyone in the IT field needs to remember that we are in a really exciting time and place. Number one, because technology revises itself on what seems like a daily basis. Number two, because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks of what we all agree upon is the best way to handle things.

As I said, in areas like accounting or sales, those things are consistent. They stay that way for eons, but this one is a new and changing environment for us.

Gardner: Are there any particular stumbling blocks that organizations have as they decide to implement ITIL? When you are doing training and mentoring, what are the speed bumps in their adoption pattern?

McClean: A couple of pieces are always a little confusing when people look at ITIL. Organizations assume that it's something you can simply purchase and plug into your organization. It doesn't quite work that way. As with any kind of framework, it's there to provide guidance and an overall common thread or a common language. But, the practicality of taking that common thread or common language and then incorporating it or interpreting it in your business is sometimes hard to get your head around.

It's interesting that we have the same kind of confusion when we just talk. I could say the word “chair,” and the picture in your head of what a chair is and the picture in my head of what a chair is are slightly different.

It's the same when we talk about adopting a framework such as ITIL that's fairly broad. When you apply it within a business, things like that business's governance and that business's auditing and compliance rules have to be considered and interpreted within the ITIL framework. A lot of times, people who are trying to adopt ITIL struggle with that.

If we are in the healthcare industry, we understand that we are talking about incidents or we understand that we are talking about problems. We understand that we are talking about certain things that are identified in the ITIL framework, but we have to align ourselves with rules within the Health Insurance Portability and Accountability Act (HIPAA). Or, if we are an accounting organization, we have to comply with a different set of rules. So it's that element that's interesting.

Gardner: Now, what's interesting to me about the relationship between ITIL and log and systems analytics is that ITIL is really coming from the top-down, and it’s organizational and methodological in nature, but you need information, you need hard data to understand what's going on and how things are working and operating and how to improve. That's where the log analytics comes in from the bottom-up.

Let's go to Sudha. Tell us how a company like LogLogic uses ITIL, and how these two come together -- the top-down and the bottom-up?

Iyer: Sure. That's actually where the rubber meets the road, so to speak. As we have already discussed, ITIL is general guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are these sets of policies with these guidelines. What organizations can do is set up their data-retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.

There is a tight fit there with what we provide as our log-management platform. LogLogic has been around for a number of years and is the leader in the log-management industry. Our platform allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information-security professional can look deep down into what's actually going on -- into storage capacity and planning for the future, how many more firewalls are required, or the usage pattern of a particular server in the organization.
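As a rough illustration of the kind of analysis Sudha describes -- collecting logs from many sources and summarizing usage per server -- here is a minimal sketch. The log format, host names, and field names are invented for illustration; this is not LogLogic's actual platform or format:

```python
import re
from collections import Counter

# Hypothetical syslog-like line: "<date> <time> <host> <facility>: <message>"
LINE = re.compile(r"^(?P<date>\S+) (?P<time>\S+) (?P<host>\S+) (?P<facility>[\w-]+): (?P<msg>.*)$")

def usage_by_host(lines):
    """Count log events per host -- a crude proxy for per-server usage patterns."""
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("host")] += 1
    return counts

sample = [
    "2008-11-06 22:14:03 web01 sshd: Accepted password for admin",
    "2008-11-06 22:15:11 web01 sshd: Failed password for root",
    "2008-11-06 22:16:42 db01 postgres: connection received",
]
print(usage_by_host(sample))  # Counter({'web01': 2, 'db01': 1})
```

A real log-management platform does far more (normalization, retention, indexed search), but the essence is assimilating heterogeneous event streams into queryable metrics like this.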

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in.

Gardner: Back to you, Sean. When companies are trying to move beyond baseline implementation and really start getting some economic benefits, which of course are quite important these days from their ITIL activities, what sort of tools have you seen companies using? To what degree do you need to dovetail your methodological and ITIL activities with the proper tools down in the actual systems?

McClean: When you're starting to talk about applying the actual process to the tools, that's the space that's most interesting to me. It's the element where you need some common thread that you can pull through all of those tools.

Today, in the industry, we have countless different tools that we use, and we need common threads that can pull across all of those different tools and say, “Well, these things are consistent and these things will apply as we move forward into these processes.” As Sudha pointed out, having an underlying log system is a great way to get that started.

The common thread in many cases across those pieces is maintaining the focus on the business. That's always where IT needs to be more conscious and to be constantly driving forward. Ultimately, where do these tools fit to follow the business, and how do these tools provide the services that ultimately support the business in doing the thing that we are trying to get done?

Does that address the question?

Gardner: I think so. Sudha, tell us about some instances where LogLogic has been used and ITIL has been the focus or the context of its use. Are there some general use-case findings? What have been some of the outcomes when these two bottom-up, top-down approaches come together?

Iyer: That's a great question. The bottom line is the customers, and we have a very large customer base. It turns out, according to some surveys we have done in our customer base, that the biggest driver for a framework such as ITIL is compliance. The importance of ITIL for compliance has been recognized, and that is the biggest impact.

As Sean mentioned earlier, it's not a package that you buy and plug into your network and there you go, you are compliant. It's a continuous process.

What some of our customers have figured out is that adopting our log management solutions allows them to create better control and visibility into what actually is going on on their network and their systems. From many angles, whether it's a security professional or an auditor, they're all looking at whether you know what's going on, whether you were able to mitigate anything untoward that's happening, and whether there is accountability. So, we get feedback in our surveys that control and visibility have been the top drivers for implementing such solutions.

Another item that Sean touched on, reducing IT cost and improving service quality, was the other driver. When they look at a log-management console, they can see how many admin accesses were denied, and that it happened between 10 p.m. and midnight. They quickly alert, get on the job, and try to mitigate the risk. This is where they have seen the biggest value and return on investment (ROI) on implementations of LogLogic.
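The kind of alert Sudha describes -- spotting denied admin accesses in a late-night window -- amounts to a simple filter over the event stream. This sketch uses an invented event format and field names, purely for illustration:

```python
from datetime import datetime, time

def denied_admin_events(events, start=time(22, 0), end=time(23, 59, 59)):
    """Return denied admin-access events falling in a time window (default 10 p.m. to midnight)."""
    hits = []
    for ts, user, action in events:
        when = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        if action == "DENIED" and user == "admin" and start <= when.time() <= end:
            hits.append((ts, user))
    return hits

events = [
    ("2008-11-06 22:30:00", "admin", "DENIED"),   # in window -> should alert
    ("2008-11-06 09:00:00", "admin", "DENIED"),   # outside the window
    ("2008-11-06 23:10:00", "alice", "ALLOWED"),  # not a denial
]
print(denied_admin_events(events))  # [('2008-11-06 22:30:00', 'admin')]
```

In a production system, the same logic would run as a standing query against collected logs and fire a notification rather than return a list.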

Gardner: Sean, the most recent version of ITIL, Version 3, focuses, as you were alluding to, on IT service management -- IT behaving like a service bureau, where it is responsible on almost a market-forces basis to its users, its constituents, in the enterprise. This increasingly involves service-level agreements (SLAs) and contracts, either explicit or implicit.

At the same time, it seems as if we’re engaging with the higher level of complexity in our data center's increased use of virtualization and the increased use of software-as-a-service (SaaS) type services.

What's the tension here between the need to provide services with high expectations and a contract agreement and, at the same time, this built-in complexity? Is there a role for tools like LogLogic to come into play there?

McClean: Absolutely. There is a great opportunity with regard to tools such as LogLogic from that direction. ITIL Version 2 focused simply on support and delivery, those two key areas. We are going to support the IT services and we are going to deliver along the lines of these services.

ITIL Version 2 started to talk a lot about alignment of IT with the business, because a lot of times IT drives ahead and does things without necessarily realizing what the business is and what the business is doing. An IT department focuses on email, but it's not necessarily looking at the fact that email is supporting whatever it is the business is trying to accomplish, or how well that service does so.

As we moved into ITIL Version 3, they started trying to go beyond simply saying it's an element of alignment and move the concept of IT into an area where it's a part of the business. Therefore, it's offering services within and outside of the business.

One of the key elements in the new manuals in ITIL V3 is the talk of service strategy, and it's a hot topic amongst the ITIL community: this push toward a strategic look at IT, and developing services as if you were your own business.

IT is looking and saying, "Well, we need to develop our IT services as a service that we would sell to the business, just as any other organization would." With that in mind, it's all driving toward how we can turn our assets into strategic assets. If we have a service and it's made up of an Exchange server, or we have a service and it's made up of three virtual machines, what can we do with those things to make them even more valuable to the business?

If I have an Exchange server, is there some way that I can parcel it out or farm it out to do something else that will also be valuable?

Now, with LogLogic's suite of tools, we're able to pull that log information about those assets. That's when you start being able to investigate how you can make the assets that exist more value-driven for the organization's business.

Gardner: Back to you, Sudha. Have you had customer engagements where you have seen that this notion of being a contract service provider puts a great deal of responsibility on them, that they need greater insight and, as Sean was saying, need to find even more ways to exploit their resources, provide higher level services, and increase utilization, even as complexity increases?

Iyer: I was just going to add to what Sean was describing. You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, "Okay, here is so much disk space that is unutilized. Or, it's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time," and so on. That's where the open API, the open platform that LogLogic provides, comes into play.

Today, IT is heavily into the service-oriented architecture (SOA) methodology. So, we say, "Do you have to actually have a console login to understand what's going on in your enterprise?" No. You are probably a storage administrator, or located in a very different location than the data center where a LogLogic solution is deployed, but you still want to analyze and predict how the storage capacity is going to be used over the next six months or a year.

The open API, the open LogLogic platform, is a great way for these other entities in an organization to leverage the LogLogic solution in place.
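The capacity-planning use Sudha mentions -- predicting storage use over the next six months from collected data -- can be illustrated with a naive linear projection over monthly samples. This is an invented sketch of the idea, not LogLogic's API, and the figures are hypothetical:

```python
def project_storage(samples, months_ahead=6):
    """Naive linear projection of storage use from equally spaced monthly samples (in GB)."""
    if len(samples) < 2:
        raise ValueError("need at least two monthly samples")
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)  # average growth per month
    return samples[-1] + slope * months_ahead

# Three months of observed usage: 100 GB, 120 GB, 140 GB -> roughly 20 GB/month growth.
print(project_storage([100, 120, 140]))  # 260.0 GB projected six months out
```

A real capacity planner would fit a trend over many more samples and account for seasonality, but the point stands: once the usage data is collected and queryable, the projection itself is straightforward.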

Gardner: Another thing that has impressed me with ITIL over the years is that it allows for sharing of information on best practices, not only inside of a single enterprise but across multiple ones and even across industries and wide global geographies.

In order to better learn from the industry's hard lessons or mistakes, you need to be able to share across common denominators, whether it's APIs, measurements, or standards. I wonder if the community-based aspect of log behaviors and system behaviors, and sharing them, also plays into that larger ITIL method of general industry best practices. Any thoughts along those lines, Sean?

McClean: It's really interesting that you hit on that piece, because globalization is one of the biggest drivers, I think, for getting ITIL moving. More and more businesses have started reaching outside of their national borders, whether we call them offshore resources, outsourced resources, or however you want to refer to them.

As we become more global, businesses are looking to leverage other areas. The more you do that, the larger you grow your business in trying to make it global, the more critical it is that you have a common ground.

Back to that illustration of the chair, when we communicate and we think we are talking about the same thing, we need some common point, and without it we can't really go forward at all. ITIL becomes more and more valuable the more and more we see this push towards globalization.

It’s the same with a common thread or shared log information for the same purposes. The more you can share that information and bring it across in a consistent manner, then the better you can start leveraging it. The more we are all talking about the same thing or the same chair, when we are referring to something, the better we can leverage it, share information, and start to generate new ideas around it.

Gardner: Sudha, anything to add to that in terms of community and the fact that many of these systems are outputting the same logs? It's making that information available in the proper context that becomes the value-add.

Iyer: That's right. Let's say you are Organization A and you have vendor relationships and customer relationships outside your enterprise. So, you’ve got federated services. You’ve got different kinds of applications that you share between these two different constituents -- vendors and customers.

You probably already have an SLA with these entities, and you want to make sure you are delivering on these operations. You will want to make sure there is enough uptime. You want to grow towards a common future where your technologies are not far behind, and sharing this information and making sure that what you have today is very critical. That's where there is actual value.

Gardner: Let's get into some examples. I know it's difficult to get companies to talk about sensitive systems in their IT practices. So perhaps we could keep it at the level of use-case scenarios.

Let's go to Sean first. Do you have any examples of companies that have taken ITIL to the level of implementation with tools like log analytics, and do you have some anecdotes or metrics of what some of the experiences have been?

McClean: I wish I had metrics. Metrics are the one thing that seems to be very hard to come up with in this area. I can think of a couple of instances where organizations were rolling out ITIL implementations. In implementations where I am engaged, specifically in mentoring, one of the things I try to get them to do is to dial into the community and talk to other people who are also implementing the same types of processes and practices.

There’s one particular organization out in the Dallas-Fort Worth, Texas area. When they started getting into the community, even though they were using different tools, the underlying principles that they were trying to get to were the same.

In that case they were able to start sharing information across two companies in a manner that was saying, “We do these same things with regard to handling incidents or problems and share information, regardless of the tool being set up.”

Now, in that case I don't have specific examples of them using LogLogic, but what invariably came out in that set of discussions was that what we need underneath is the ability to get proactive and start preventing these incidents before they happen. Then, we need metrics and some kind of reporting system, where we can start checking issues before they occur and getting the team on board to fix them before they happen. That's where they started getting into log-like tools and looking at using log data for that purpose.

Iyer: That corroborates one of the surveys we developed and conducted last quarter. Organizations reported that the biggest challenge for implementing ITIL was twofold.

The first was the process of implementation, the skill set that they needed. They wanted to make sure there was a baseline, and measuring the quality of improvement was the biggest impediment.

The second one was the result of the process improvement. You get your implementation of the ITIL process itself, and what did it get you? Where were you before, and where did you end up after the implementation?

I guess when you were asking for metrics, you were looking for those concrete numbers, and that's been a challenge, because you need to know what you need to measure, but you don't know that because you are not skilled enough in the ITIL practices. Then, you learn from the community, from the best-of-breed case studies on the Web sites and so forth, and you go your merry way, and then the baseline numbers for the very first time get collected from the log tools.

Gardner: I imagine that it's much better to get early and rapid insights from the systems than to wait for the SLAs to be broken, for user surveys to come back, and say, “We really don't think the IT department is carrying its weight.” Or, even worse, to get outside customers or partners coming back with complaints about performance or other issues. It really is about early insights and getting intervention that seems to really dovetail well with what ITIL is all about.

McClean: I absolutely agree with that. Early on in my career within ITIL, I had a debate with a practitioner on the other side of the pond. One thing we debated was SLAs. I had indicated that it's critical to get the business engaged in the SLA immediately.

His first answer was no, it doesn't have to happen that way. I was flabbergasted. You provide a service to an organization without an SLA first? I thought “This can't be. This doesn't make sense. You have to get the business involved.”

When we talked through it and got down to real cases, it turned out that he wasn't saying the SLA didn't need to be negotiated with the business. What he meant was that we need to get data and reports about the services we are delivering before we go to the customer, the customer, in this case, being internal.

His point was that we need to get data and information about the service we are delivering, so that when we have the discussion with a business about the service levels we provide, they have a baseline to offer. I think that's to Sudha's point as well.
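The baseline Sean describes -- measuring delivered service levels from logged data before negotiating an SLA -- can start as something as simple as computing availability from logged outage minutes. The figures here are invented for illustration:

```python
def availability(total_minutes, outage_minutes):
    """Baseline availability percentage from a list of logged outage durations (in minutes)."""
    return 100.0 * (total_minutes - sum(outage_minutes)) / total_minutes

# One 30-day month with two logged outages of 45 and 15 minutes.
month = 30 * 24 * 60
print(round(availability(month, [45, 15]), 3))  # 99.861
```

Walking into an SLA negotiation with "we delivered 99.861 percent availability last month, measured from the logs" is exactly the kind of baseline that practitioner was arguing for.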

Iyer: That's right. Actually, it goes back to one of the opening discussions we had here about aligning IT to the business goals. ITIL helps organizations make the business owners think about what they need. They do not assume that the IT services are just going to be there; it's not an afterthought. It's a part of that collective, working toward the common success.

Gardner: Let's wrap up our discussion with some predictions or look into the future of ITIL. Sean, do you have any sense of where the next directions for ITIL will be, and how important is it for enterprises that might not be involved with it now to get involved, so that they can be in a better position to take advantage of the next chapters?

McClean: The last point is the most critical. People who are not engaged or involved in ITIL yet will find they are starting to drop out of a common language, one that enables you to do just about everything else you do with regard to IT in your business.

If you don't speak the language and the vendors that provide the services do, then you have a hard time understanding what it is the vendors are offering. If you don't speak the language and you are trying to get information shared, then you have a hard time moving forward in that sense.

It's absolutely critical for businesses and enterprises to start understanding the need for adopting it. I don't want to paint it as if everybody needs to get on board ITIL, but you need to get into it and be aware of it, so that you can help drive its future directions.

As you pointed out earlier, Dana, it's a common framework, but it's also commonly contributed to. It's very much an open framework, so if a new way of doing things comes up, is shared, and makes sense, it will probably be the next thing that's adopted. It's just like our English language, where new terms and phrases are developed all the time. It's very important for people to get on board.

In terms of the next big front, you have this broad framework that says, "Here are common practices, best practices, and IT practices." As the industry matures, I think we will see a lot of steps in the near future where people are looking and talking more about, "How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?"

There has been a little bit of that and certainly we have ITIL certification processes in all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.

Gardner: Sudha, it certainly seems like a very auspicious pairing, the values that LogLogic provides and the type of organizations that would be embracing ITIL. Do you see ITIL as an important go-to market or a channel for you, and is there in fact a natural pairing between ITIL-minded organizations and some of the value that you provide?

Iyer: Actually, LogLogic believes that ITIL is one of those strong frameworks that IT organizations should be adopting. To that effect, we have been delivering ITIL-related reporting, since we first launched the Compliance Suite. It has been an important component of our support for the IT organization to improve their productivity.

In today’s climate, it's very hard to predict how the IT spending will be affected. The more we can do to get visibility into their existing infrastructure networks and so on, the better off it is for the customer and for ourselves as a company.

Gardner: We’ve been discussing how enterprises have been embracing ITIL and improving the way that they produce services for their users. We’ve been learning more about visibility and the role that log analytics and systems information plays in that process.

Helping us have been our panelists. Sudha Iyer is the director of product management at LogLogic. Thanks very much, Sudha.

Iyer: Thank you, it's a pleasure, to be sure.

Gardner: Sean McClean, principal at KatalystNow, which mentors and helps organizations train and prepare for ITIL and its benefits. It’s based in Orlando, Florida. Thanks very much, Sean.

McClean: Thank you. It's been a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Friday, October 31, 2008

BriefingsDirect Analysts Take Microsoft's Pulse: Will the Software Giant Peak in Next Few Years?

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 32, on the outlook for Microsoft in the face of the economic downturn and new directions in the IT market, recorded October 24, 2008.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition Podcast, Volume 32.

This periodic discussion and dissection of IT infrastructure-related news and events with a panel of industry analysts and guests comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS visual orchestration system.

I am your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of October 20, 2008, is the IT elephant in the room ... Microsoft. The software titan holds its Professional Developers Conference (PDC) beginning October 27 in Los Angeles. We're expecting quite a bit of news from the event, and this also gives us a chance to examine the state of Microsoft and its place and role in the enterprise IT dominion.

We're going to dig into Microsoft, its mission, how well it's doing, and how well we're expecting it to do over the next couple of years. We're joined by this week's panel to help us dig through this.

I’d like to welcome first Jim Kobielus, senior analyst at Forrester Research. Hi, Jim.

Jim Kobielus: Hi, Dana. Hi, everybody.

Gardner: Tony Baer, senior analyst at Ovum. Hi, Tony.

Tony Baer: Hey, Dana, good to be here again.

Gardner: Dave Linthicum, independent consultant with the Linthicum Group, will be joining us in a little bit.

Next, Brad Shimmin, principal analyst at Current Analysis. Howdy, Brad.

Brad Shimmin: Hi, Dana, how are you?

Gardner: Great, thank you. Making his debut on our show, Mike Meehan, a senior analyst at Current Analysis as well, and former editor-in-chief at SearchSOA.com. Welcome, Mike.

Mike Meehan: Great to be here, Dana.

Gardner: And, last, Joe McKendrick, independent analyst and prolific blogger on SOA and business intelligence topics. Howdy, Joe?

Joe McKendrick: Pleasure to be here, Dana, thank you.

Gardner: Alright, let’s dig into the freshest news this week. Microsoft just yesterday announced its financial results for the quarter ending September 30. We saw 9 percent revenue growth, which includes 20 percent revenue growth for their business software, and overall 2 percent net income growth.

It's not quite as robust as similar recent reports from IBM, Oracle, and HP. Indeed, the business software unit at Microsoft did better than the Windows operating-system unit, which has of course been its long-time cash cow.

I guess we’ll take this over to Tony. Tony, is there anything that we can read into Microsoft’s financial results that give us some indication of how well the company is doing?

Baer: Actually, I’ve been giving this some thought in terms of the results from some of the others lately -- for example, IBM and Oracle, mostly up, and SAP down.

My sense with Microsoft is that the Windows unit has been very much slowed down by the very slow uptake of Vista, and especially by the tendency of corporate customers, if and when they get new machines, to downgrade to Windows XP. So, that certainly has created something of a drag there.

The other part of this -- and this is actually one part that does surprise me a little bit -- is that Microsoft has been putting a lot more emphasis especially around business software, and specifically Oslo. You'll see a lot of this in the sessions and announcements next week at PDC. It's too early to impact the financial results, but it's indicative of a general direction on Microsoft's part. It has become more of an enterprise computing player.

What does surprise me a little bit is that in a company of Microsoft’s size it would have that much material impact.

Gardner: What's a little surprising to me is that even with 9 percent revenue growth and 20 percent revenue in the Business Software Unit, which includes Office, that only translated into 2 percent income or earnings.

Is Microsoft at a disadvantage, compared to other enterprise vendors, because of its exposure to the consumer market, Web advertising market, and the cyclical nature of an operating system upgrade like Windows?

Baer: I’m not sure if it’s at a disadvantage with regard to the consumer market per se. I hate to use an extreme example like this, but take a look at some of the very toughest economic times that we’ve had. Let’s go back to the Depression, which of course we all remember from our childhood -- or at least we would, if we were all reincarnated. During the ‘30s, when nobody had any money, people went out for cheap, real thrills. In that case, it was a trip down to the movie theater.

My sense is that, if you already have an Xbox 360, what's the big deal about getting another game? That’s a much cheaper thrill than going out and buying some more expensive piece of consumer electronics hardware.

I don't think that the exposure to the consumer side is such an issue. I think it's more a matter that certain parts of Microsoft’s business have matured and that some of the newer areas, which would be the enterprise side, and would also be say the Web-designer side, where they are going head-to-head with Adobe, are still much too early on the maturity curve to have a material impact.

Gardner: Alright, Mike Meehan, what do you think? Is Microsoft in a good, medium, or a bad position going into an economic downturn, given what we’re seeing and given their exposure across such a wide variety of different products and services?

Meehan: You’re generally never in a bad position when you’re diversified. That’s the one thing Microsoft has going for it. It has its hooks in a lot of different ponds.

I tend to think that they are better off in the consumer market than they are in the enterprise market. My view is that .NET has lost to Java, just as an enterprise technology. It’s a niche. It’s an avenue where Microsoft is going to have a presence.

People are going to use Visual Studio. They can build out Oslo and they can try to keep people in with as much service orientation as Microsoft can give you in their package, but they are not going to be on the same par as IBM, Oracle, or even SAP long term, in terms of being able to give you enterprise applications and application development tools.

They are a sidelight to that. Their business is more in the operating system and in the Xbox. Kids like playing games, and social computing, those game-oriented things, are going to be the areas where Microsoft is going to see its greatest profits down the road.

Gardner: So, you’re saying that Microsoft’s future is waning when it comes to its share of market, profits, and growth on the business side, and that its virtuous growth machine -- the tension between its tools and its platform -- is not going to continue? Is fighting against organizations like Google and Apple in the consumer space going to be Microsoft’s growth future?

Meehan: I think they are capped on the business side. There's only so much of that pie they are ever going to get right at this point.

Gardner: Anybody out there have a concurring view to that? It seems that the vast majority of Microsoft’s revenues and profits still come from the business sector.

Kobielus: I think that there’s some validity to the viewpoint that Microsoft's growth potential is capped on the business side, when you consider packaged applications and software- and application-development tools, in the sense that the entire product niche of the service-oriented architecture (SOA) universe is rapidly maturing.

The vendors in this space -- the SOA vendors, the business-intelligence (BI) vendors, the master data management (MDM) vendors -- are going to realize revenue growth and profitability. Those who survive this economic downturn and thrive in the next uptick will be those who very much focus on providing verticalized and customized applications on a consulting or professional services basis.

In that regard, Microsoft is a bit behind the eight ball. They don’t really have the strength on the consulting, professional services, and verticalization side, that an SAP, Oracle, or an IBM can bring to the table.

Microsoft, if they want to continue to grow in the whole platform and application space and in the whole SOA universe, needs to put a greater focus on consulting services.

Gardner: That's interesting. Now, here we have Microsoft, as I say the elephant in the room, the largest software company in the world, in many respects one of the most successful companies in the history of business, behind the eight ball. How could it be behind the eight ball, when it has $40 billion in cash in the bank, and an army of global developers and engineers? Yet, I think there's something to this.

Let’s drill down for a second. Gartner, the largest analyst and research firm, came out with a Top 10 Strategic Technology Areas list for 2009. These are the 10 areas Gartner thinks are going to be the most strategic for IT people.

Number 1, virtualization. I think it's safe to say that Microsoft is catching up on virtualization.

Number 2, cloud computing. We’ll soon get detail on Microsoft’s cloud computing, but they’re clearly behind the eight ball if you compare them to say Amazon or Google or Salesforce.com.

Number 3, servers beyond blades. Well, that’s a hardware story, and Microsoft isn’t in the hardware business.

Number 4, Web-oriented architecture, or the use of Web development, primarily for new applications. Microsoft’s in that, but that’s a problem, because there isn't always a tie-in to their platform. It’s really a Web- and browser-based business, which has been somewhat troublesome for Microsoft, given its software-plus-services focus.

Number 5, mashups. Same story there. Microsoft does have tools and approaches, but it doesn’t necessarily feed their cash cow of selling more operating systems or upgrades to operating systems.

Number 6, specialized systems. I’m not exactly sure what that means, but I don’t think Microsoft is so verticalized that this is going to be a growth area for them.

Number 7, social software and social networking. We haven’t seen Microsoft dominate here. In fact, they tried to buy their way into this with Yahoo and failed.

Number 8, unified communications. Microsoft has been big there. That’s a potential growth area for them.

Number 9, business intelligence, another big growth arena.

Then, Number 10 from Gartner’s list, Green IT. Green IT, of course, means consolidation, more highly utilized servers, not hundreds of Microsoft Exchange Servers running at 20 percent utilization. So, I would posit that Microsoft is behind the eight ball on Green IT as well.

Does anybody out there want to react to this issue of Microsoft in catch-up mode?

McKendrick: When did Bill Gates start Microsoft? What year was that?

Gardner: 1977.

McKendrick: It was actually 1975. That was the worst downturn in our generation, as far as the economy goes. He, and eventually Steve Ballmer, got the company going. What year was MS-DOS launched for licensing? When did that begin to catch on?

Baer: 1980, 1981.

McKendrick: Yeah, the other downturn, the other worst economic downturn in our generation. So in other words, in Microsoft’s history it seems they’ve had their crucial turning points, at times when the rest of the economy was in a funk.

Windows was in the early 1990s, another recessionary period.

I was speaking with Brian Loesgen from Neudesic a couple of weeks ago. It was in the midst of the first wave of financial panic in the economy. He put it this way. Microsoft has its own economy. No matter what happens to the economy at large, Microsoft has its own economy going, and just seems to get through all this.

What’s driven Microsoft from day one, and continues to do so, is that Microsoft is the software company for Joe the Plumber. That’s their constituency, not necessarily Joe the Developer. They cater to Joe the Developer, Joe the CIO, and Joe the Analyst certainly likes to check in on what they are doing. It's this whole idea of disruptive technology. They have always targeted the under-served and un-served parts of the marketplace and moved up from there.

Gardner: So we have two narratives. We have Microsoft is too big to fail, has done well regardless of economics in the past, and is independent of larger economic trends because of its "Joe the Plumber" appeal. We also have this narrative that they are playing catch-up.

McKendrick: The base of Microsoft, these companies that are using Microsoft technology, don’t necessarily get virtualization or cloud computing. They just want a solution installed on their premises and want it to work.

Gardner: Dave Linthicum, are you out there now?

Dave Linthicum: Yeah, I am out there now. How are you doing Dana? I was actually crying over my 401(k) portfolio, so I got in late on the call.

Gardner: Well, I can see why that would choke you up. Now, what's your position on these dual narratives: Microsoft, too big to fail, has done always well in the past -- or Microsoft behind the eight ball on virtualization, cloud computing, and some of the other major growth areas of the next couple of years?

Linthicum: I think they are behind the eight ball. A lot of the strategy I’ve seen coming out of Microsoft over the last few years, especially as it relates to cloud computing, SOA, and virtualization, has been inherently flawed. They get into very proprietary things very quickly. It really comes down to how are they going to sell an additional million desktop operating systems.

Ultimately, they just don’t get where this whole area is going. If you think about Joe’s point, going back in history, not as far, but to the whole Internet trend, it turned out to be an explosion back in the middle ‘90s.

They missed the boat on that completely. They were off doing their own MSN network and working on that kind of stuff, and they really were catching up in the end. They had a pretty good offering and they took a large part of the market because they own the desktop and all those things going on.

Now, we’re heading into an area where they may not be as influential as they think they should be. They may be not only behind the eight ball, but lots of other organizations that are better at doing cloud computing, virtualization, and things like that, and have a good track record there, are going to end up owning a lot of the space.

Microsoft isn’t going to go away, but I think they’re going to find that their market has changed around them. The desktop isn't as significant as it once was. People aren’t going to want to upgrade Office every year. They’re not going to want to upgrade their desktop operating systems every year. Apple Macs are making big inroads into their market space, and it’s going to be a very tough fight for them. I think they’re going to be a lot smaller company in five years than they are today.

Gardner: Let’s take that notion to Mike Meehan. Is Microsoft going to be the same, smaller, or bigger in five years?

Meehan: I wouldn’t say smaller, only because they got maybe as large as they were going to get in the earlier part of this decade. Dave is absolutely right in that the one area that Microsoft never really conquered that it needed to conquer, given its strength in the desktop, is the handheld. If they are not going to be there with the handheld long-term, that’s a major growth area that they are going to miss out on. That’s where a lot of the business is going to shift to.

I don’t spend all my day on a handheld, but I live in Boston. I can ride the T and I can see a lot of people who do use handhelds. If you want to be there, if you want to be in the cloud services, that’s where a lot of people are going to be getting consumer cloud services from. It’s going to be right off those handhelds, and Microsoft is just not there.

On the SOA side, as I said before, Microsoft is just trying to be as service-oriented as they can for users who are trying to be not SOA-driven, but "As Service-Oriented As Possible."

In fact, make that an acronym, ASOAP. There are going to be a number of users who are not going to go fully into SOA, because of the enterprise architecture it requires. It’s too hard to do, too hard to maintain. They’re never going to quite figure that out. They are just going to try to be tactical and as service-oriented as possible. Microsoft will try to serve them and hold that part of their business.

What’s the next big thing they’re going to do? Joe referred to Microsoft having come up with that in previous downturns. I don’t see where they have got that right yet, and so I think that leads to them being smaller long-term.

Baer: I think the biggest deficiency in this go-around, compared to the Internet about a dozen years ago, is that they don’t have a figure like Bill Gates to crystallize turning the company around.

That was an amazing case study back around 1995, where Microsoft was caught by surprise by the Internet. Gates basically convened a weekend-long retreat, or something like that. I’m not sure how long it was, but it was pretty short.

At that time, the company was small enough -- and I use the term “small” in a relative sense -- that the company could turn around. More importantly, in someone like Gates, they had someone with the type of vision that could crystallize everyone to start thinking on the same page. I don’t think they have that same kind of figure now.

Gardner: That's right. It was the first week of December, 1995 that Microsoft came out and announced that the Internet was a big deal, and within two years they were the top browser company in the world, and have remained there ever since. So they have demonstrated an ability to move quickly.

Let’s go to Brad Shimmin. Brad, you are going to go to PDC. If there’s any venue where Microsoft can talk to Joe the Plumber and Joe the Developer, and convince the world that its vision of the future is the right way to go, it’s at the PDC.

Do you think that Microsoft is going to have an opportunity to change this perception of it being behind the eight ball in any appreciable way at the PDC?

Shimmin: I do, and simply because they don’t have to. It goes back to a number of points that have been made here: to be successful, Microsoft doesn’t need to convince the world. It just needs to convince the people who attend the PDC. They have such an expansive and well-established channel, with all the little plumber-developers running around building software with their code, that just as 40 is the new 30, Microsoft is really kind of the new Apple, in a way.

They don’t need to be Oracle to succeed, they really need to have control over their environment and provide the best sort of tooling, management, deployment, and execution software that they can for those people who have signed on to the Microsoft bandwagon and are taking the ride with them.

That’s what it’s all about for them at these shows. In general, it’s the same way. They don’t need to be the next Oracle to remain successful in the business space.

As Mike said, they’re kind of capped out in many ways relative to the consumer market, but, gosh, they have shown that with things like SharePoint, for example, Microsoft is able to virally infest an organization successfully with their software without having to even lift a finger.

They’ll continue to do that, because they have this Visual Basic mentality. I hate to say it, but they have that mentality of “Let’s make it as simple as possible” for the people that are doing ASOAP, as Mike said, that don’t need to go all the way, but really just need to get the job done. I think they’ll be successful at that.

Kobielus: I just want to elaborate on what Brad said and then bring it back to the question of will Microsoft be larger, smaller, or the same size in five years time. I think they will be larger, and they will be larger for the simple reason that they do own the desktop, but the desktop is becoming less relevant.

But now, what’s new is that they do own the browser, in terms of predominant market share or installed base. They do own the spreadsheet. They do own the portal. As Brad indicated, SharePoint is everywhere.

One of the issues that many of our customers at Forrester have hit on -- at the CIO, CTO level -- is that SharePoint is everywhere. How do they manage SharePoint? It's a fait accompli, and they have to somehow deal with it. It’s the de-facto standard portal for a large swath of the corporate world.

Microsoft, to a great degree, owns the mid-market database with SQL Server. So owning so many important components of the SOA stack, in terms of predominant market share, means that Microsoft has great clout to go in any number of directions.

One direction in which they’re clearly going in a very forceful way that brings all this together is in BI and online analytical processing (OLAP).

The announcements they made a few weeks ago at the BI conference show where Microsoft clearly is heading. They very much want to become and remain a predominant BI vendor in the long run.

What that means is a number of things. First and foremost, innovating at the desktop, within SharePoint and in Excel, to enable in-memory, deeply dimensional, user-driven modeling. The goal is to begin to dissolve the OLAP cube and enable users to develop their own advanced analytics, build them out, and grow that knowledge base in a collaborative environment that’s very much hinged on SharePoint -- the collaborative features, version management, library check-in and check-out, and so forth.

In five years time, Microsoft will be one of the predominant BI players. It already is, but it will become more important as one of the main BI platforms out there.

I don’t imagine Microsoft will become as verticalized a BI player as, say, a SAS Institute, but Microsoft, as several other analysts on this call have mentioned, has a phenomenal partner ecosystem, and it is providing an ever-more powerful platform for those vendors, professional services firms, and customers to build out those analytics. So, they will be bigger.

Gardner: Okay. So, Microsoft has its installed base. It has its devotees, people who are making their living based on its products. It’s a huge channel. You see in a number of key IT areas a deep advantage in terms of their installed base, but that begs the question of whether things remain fundamentally the same or whether we’re going through a period of transformation.

Let’s go back to Brad. Based on what you know about PDC announcements, how is Microsoft going to pull off both retaining its installed base strengths, and also ushering people into higher productivity and lower cost, which are going to become essential?

Many Microsoft products are not the lower cost alternatives in the market, particularly from an architectural standpoint. Does anything come out in your understanding of the PDC announcements that will help solidify its base, but also substantially reduce total cost?

Shimmin: I do. It’s kind of funny, because a lot of the stuff they are going to be announcing, or demoing I should say, at PDC, lean toward some of the things we have been dinging them on.

For example, they are making Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) form the heart of their ASOAP model, if you will, but those have been very much geared toward the bitheads who are working in Visual Studio to develop them.

What they’re trying to do is move those more toward an Oslo perspective of compilation and composition, so they’re giving them a much better workflow capability. Previously you had to code that by hand, but now they are building it in, which goes back to their entire approach to tooling in general. They try to take you as far as they can, so that you don’t have to make as many decisions or intellectual efforts to make your software work.

They’re doing the same thing not just with .NET but also with their Windows Server, which I found to be the most curious part of what they are doing at PDC.

They have had Windows Server sort of unofficially as their application container, but really it’s not. BizTalk has been their application container for everything SOA. They’re moving toward Oslo, with the Dublin release, making Windows Server more of a first-class citizen for hosting composite applications as a container.

Gardner: Oslo is their next generation development and deployment framework, which is highly focused on services and business-process level integration.

Shimmin: It is. It’s nice, because they are actually going to have a registry-repository. Right now you literally have to partner and rely on standards for anything like that with them, but they are going to build their own on top of SQL Server, which I think is a smart move, by the way.

But they will have that, along with development tooling that’s going to be hooked in directly to .NET and Windows Server. They’re going to make BizTalk less of a pure B2B integration product and more of an enterprise service bus (ESB), which is the last thing they would ever have told you they wanted BizTalk to be. But they’re going to make it more of that in the future, and make Windows Server more of what, in a traditional Java shop, would be your app server.

Gardner: So we have an ESB function set. We have a registry-repository function set. Microsoft is coming not on the leading edge of these technologies. They’re clearly five or seven years behind some other entrants in the marketplace. But, on the total cost perspective, I think what I am hearing from you is that if you go all Microsoft all the time, there are going to be efficiencies, productivity, and cost savings. Is that the mantra? Is that the vision?

Shimmin: That’s exactly right, Dana. That’s what they’re banking on, and that’s why I think they are the next Apple, in a way, because they are downtrodden, compared to some of the other big guns we’re talking about with Oracle, SAP, and IBM inside the middleware space. But that doesn’t matter, because they have a loyal following, which, if you guys have ever attended these shows of theirs, you’d see is just as rabid as Mac fans in many ways.

They’re going to do their best job to make their lives as easy as possible, so that they remain loyal subjects. That’s a key to success. That’s how you succeed in keeping your customers.

Gardner: Dave Linthicum, Microsoft is continuing to make offers that their installed loyal base can’t refuse. But the total cost of ownership (TCO) equation comes in a little bit later. That is to say, if you have bought into the Microsoft-oriented architecture vision, and you’ve spent a lot of money with Microsoft in doing so, you will be able to do all of these things better in the future. What’s wrong with that vision?

Linthicum: Ultimately, people are looking for open solutions that are a lot more scalable than the stuff that Microsoft has to offer. To the point that was just made, there are a bunch of huge Microsoft fans who will buy anything that they sell; that’s the way those shops are. But the number of companies that are doing that right now is shrinking.

People are looking for open, scalable, enterprise-ready solutions. They understand that Microsoft is going to own the desktop, at least for the time being, and they are going to keep them there. But, as far as back-office things and some of the things that Microsoft has put up as huge enterprise-class solutions go, people are going to opt for other things right now.

It's just a buying pattern. It may be a perception issue or a technological issue. I think it’s a matter of openness or their insistence that everything be proprietary and come back to them.

I heard the previous comment that looking at all Microsoft all the time will provide the best bang for the buck. I think people are very suspicious of that.

If you look back in history, Microsoft Transaction Server (MTS) and all of these other things that Microsoft has built over time to get into enterprise-scale computing, haven’t worked very well. Either it was perceptions or openness. I reviewed MTS when I was at PC Magazine and I found it to be a pretty good product, but it just had no uptake into the market space.

I think their current efforts are going to run into the same issues. You’re not always going to have people who are going to buy it. It’s part of the bundles that they’re offering to the enterprise, the enterprise license agreements that they are selling in, but it's going to be a very hard path for them, I think.

Gardner: Mike Meehan, virtualization is obviously a big topic these days. VMware came out with results that showed these things are selling like hot cakes. VMware itself is going to be under pressure in competitive offerings in the marketplace.

Is virtualization at the hardware level, infrastructure level, applications level, and then ultimately at the desktop level -- where we have virtual desktop infrastructure (VDI) -- a game changer in terms of Microsoft being able to pull this off? ... “If you do it all with us you have a better economic story.” How does virtualization change Microsoft’s strategy, if at all?

Meehan: I don't know that it does, in that you have to be so integrated with the company to take advantage of that, that I am not really sure that Microsoft is in the right position to do that.

For example, four years ago, Sun Microsystems started beating that drum that they were going to take these virtual environments, put it together with their software environments, and have this soup-to-nuts computing that was going to be five times more powerful and so much more efficient.

It just never happened on their end. It's hard to execute. It’s hard for Microsoft to align itself with what anybody else is doing. Whatever VMware is doing, I find it a little difficult to believe that Microsoft is willing to be the tail that’s wagged by any other dog.

To a certain extent, Microsoft will try to plug in to that in its own way. What its own way is, and where exactly it plugs in, though, are unknowns to Microsoft itself, and it's going to want to own something in there. I don’t even know what it wants to own in terms of virtualization.

Gardner: It seems it wants to own the hypervisor. It’s going to make the Hyper-V hypervisor part and parcel with other infrastructure, and, I would imagine, at a price that people can’t refuse. They’ll also continue to sell Windows licenses for all those virtualized instances of an operating system. That’s still Windows. That’s still good revenue.

Does anyone else have a sense of whether virtualization, as a general trend, knocks down Microsoft’s ability to do it all and well?

McKendrick: VMware announced that operating system, what’s it called, the VMware VDOS, do I have that correct?

Gardner: KVS. Is it their Hypervisor?

McKendrick: No, they are actually calling it an operating system.

Dana Gardner: That's right, their cloud-based infrastructure operating system.

McKendrick: Exactly. That’s the direction organizations are going. Cloud computing, SOA, virtualization, all those things are going to be internal clouds, private clouds, maintained within enterprises.

When you think about an operating system, what is an operating system? That’s virtualization, right? An operating system virtualizes the resources underneath it -- the server, the hardware, and the storage. A virtualized operating system, like the one VMware is talking about, is probably the next evolution of operating systems in general.

Gardner: That’s right. A disk operating system virtualizes the disk.

McKendrick: Right. That’s what an operating system is, virtualization. People don’t think about it that way.

Gardner: So, your point is that Microsoft is in the position to take its advantages and strengths and move that up yet another abstraction to this private-cloud infrastructure level.

McKendrick: I think so. Steve Ballmer kind of responded to the VMware announcement by saying that Microsoft has something cooking in that regard too, some kind of virtualized operating system. I don’t know if it will be separate from what Windows will be in the future. An operating system is a cloud management system, when you really get down to it, and it’s the next natural evolution for operating systems. That’s what Microsoft is good at.

Gardner: We’ve heard quite a bit on this cloud operating system from Red Hat, Citrix, VMware, IBM, and HP talked it up a little bit. No one’s really come out with a lot of detail, but clearly this seems to be of interest to some of the major vendors.

Let’s go back to Dave Linthicum. What is the nature of this operating system for the cloud, and does it have the same winner-take-all advantage for a vendor that the operating system on the desktop and departmental server had?

Linthicum: I think it does in virtualization. Once one vendor gets that right, people understand it, there are good standards around it, there are good use cases around it, and there’s a good business case around it, that particular vendor is going to own that space.

I’m not sure it’s going to be Microsoft. They’re very good about building operating systems, but judging by my Vista crashes, which happen once a day, they are not that good.

Also, there are lots of guys out there who understand the virtualization space and the patterns for use there. The technology they’re going to use, the enabling standards, are going to be very different than what you are going to use on a desktop or even a small enterprise departmental kind of problem domain.

Ultimately, a large player is going to step into this game and get a large share of this marketplace pretty quickly, because the cost and ease of moving to that particular vendor is very low.

I can decide this morning that I want to use a particular virtualization vendor, sign up with them, and start putting my assets out in that world in a very short time, versus buying hardware and software to install on my own systems, along with the other things that have to be leveraged.

These virtualization operating systems that are enterprise bound or even in a gray area with the cloud are going to come from somebody else besides Microsoft. That’s just my own personal opinion, based on what they are doing.

Kobielus: I think that Microsoft stands a chance of becoming the predominant cloud OS vendor. Let me just level-set on what I mean by that. At the heart of any cloud or virtualized cloud operating system is a virtualized database environment. Database virtualization is a real hot topic, and it means many things to many vendors and to many analysts.

Fundamentally, like any other virtualization approach, it simply involves abstracting the internal implementation from the external calling interface, using a variety of approaches.

It’s not all together yet, but Microsoft is coming along with a fairly interesting database virtualization story that will play out in releases over the next several years.

For one thing, of course, they bought DATAllegro a few months back, and now Microsoft is building a shared-nothing, massively parallel database -- a data warehousing environment that can potentially scale up to thousands of nodes and many petabytes of data. It’s grid at its very heart. So, a grid environment is virtualization -- database virtualization -- on one level.

Also, Microsoft has a very interesting project going on that will probably see the light of day, in terms of roll-out, in the whole SQL Server vNext timeframe in 2011. It’s called Project Velocity, which is very much about virtualizing data persistence across both disk-based storage and in-memory cache, across a distributed virtualized fabric.

There's also a bit of virtualization going on in the front end of their BI stack, in terms of using in-memory approaches more deeply in the applications, and so on.

Of course, Microsoft has got the whole SQL Server Data Services, software-as-a-service (SaaS) initiative ongoing, and they will continue to ramp that up in coming years. I see all this coming together as the heart of a database virtualization environment.

Then, one other thing you need for a fuller virtualized OS in this environment is something called in-database analytics, where you can run compute-intensive algorithms right inside the database and take advantage of all the parallelization.
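As a small, hedged illustration of the idea -- using Python's standard `sqlite3` module as a stand-in for a real warehouse engine, with an invented `log_value` function and `readings` table -- you can register application code with the database so the computation runs during the scan rather than after pulling rows out:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("a", 1.0), ("a", 4.0), ("b", 9.0)])

# Register the function with the engine, so it executes inside the
# database's own row processing instead of in a client-side loop.
conn.create_function("log_value", 1, math.log)

rows = conn.execute(
    "SELECT sensor, SUM(log_value(value)) FROM readings "
    "GROUP BY sensor ORDER BY sensor"
).fetchall()
print(rows)
```

In a parallel warehouse the same principle applies at scale: each node runs the function over its local partition, which is where the parallelization benefit comes from.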

Microsoft doesn’t have a strong story there yet, but I think that in the next year or so they will roll out a much more interesting story that tracks with what's going on elsewhere, such as the vendors in the data warehousing arena that have aligned themselves around the framework called MapReduce. A lot of that will come together on the Microsoft side over the next few years, and I think they will be a power in cloud OSs.
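For readers unfamiliar with the framework being referenced, the MapReduce model boils down to three phases -- map, shuffle, reduce. A minimal single-process sketch (the canonical word-count example, not any vendor's implementation) looks like this:

```python
from collections import defaultdict
from itertools import chain


def map_phase(doc):
    # Map: emit (key, value) pairs independently per input record.
    return [(word, 1) for word in doc.split()]


def shuffle(pairs):
    # Shuffle: group all values by key, as the framework would
    # when routing pairs to reducer nodes.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_phase(groups):
    # Reduce: aggregate each key's values; reducers can run in parallel.
    return {key: sum(values) for key, values in groups.items()}


docs = ["parallel data processing", "data warehousing at scale", "data"]
mapped = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(mapped))
print(counts["data"])  # prints 3
```

Because each map task and each reduce task is independent, the same program parallelizes naturally across thousands of nodes, which is why the data warehousing vendors aligned around it.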

Gardner: So, what I am hearing from you is that virtualization, grid, and cloud can help Microsoft in its database and data services story, particularly up against the likes of Oracle and IBM.

Kobielus: Yes, yes, yes.

Gardner: Okay. There's another difference here, though, with cloud and private cloud, and that is that Joe the Plumber and Joe the Developer aren't going to be deciding the architecture for this cloud.

Also, moving toward cloud infrastructure is a significant, multimillion-dollar decision process. It involves creating new data centers and tens of millions of dollars in facilities, infrastructure, manpower, and energy investments, things that will impact the company for five, 10, or 15 years.

It seems to me that there are only going to be a handful, perhaps fewer than 25, true third-party cloud providers, and that the types of organizations where a private cloud makes sense are going to be the Global 2000, maybe down to the Global 500, which would be interested in investing and have the cost savings and scale that would make cloud computing make sense.

So, in a sense, this move toward private-cloud and public-cloud infrastructure really does not benefit Microsoft’s traditional market and channel distribution and penetration. We’re really talking about perhaps as few as 2,500 total customers across the world who would be buying this. Given that that’s the economic landscape, doesn't this impact Microsoft's ability, or even interest, in approaching this market?

Baer: A couple of things. I agree with you in terms of the private cloud. I don't think that's really a winner of a market for Microsoft, because it will require the customer to put in significant capital investment from the top down. The thing is, those types of customers have not traditionally been Microsoft's strength.

I've had a couple of thoughts as this session has progressed. One, how does the cloud really impact Microsoft and its prospects, and will Microsoft be able to compete in a more open world?

I have a couple of answers to that. You still have a certain, very stubborn level of mid-size businesses that are Microsoft shops. You go to these PDC conferences, which unfortunately I won’t be at next week, and you see these armies of people who have been loyal ever since Visual Basic 0.5. Microsoft has built a huge developer base, which has translated into an incredible base among small businesses.

So, on one hand, I don’t think that Microsoft is going to lose its grip on its Joe the Plumber small and mid-size business (SMB) base. On the other hand, in terms of the emergence of clouds -- forgetting about private clouds at the moment -- on public clouds I’m not sure. Microsoft has a software-plus-services strategy, the idea of which is to make the cloud as invisible as possible. That has a nice value proposition for its traditional market base.

On the other hand, when you start seeing the proliferation of these third-party clouds, which are coming in at very much commodity prices -- the Amazons of the world, and so on -- I’m getting the sense that these public clouds are going to become so commoditized that no single player is going to dominate.

I think that Microsoft will be able to retain a very loyal niche in the SMB market, but I don’t think that, when it gets to the cloud, it's going to dominate.

Shimmin: I just want to add to what Tony was saying. Yesterday, Amazon announced that EC2 now supports Windows Server and Microsoft SQL Server.

Obviously, this is a public cloud but, in my mind, the fact that Microsoft has virtualization is a necessity for them to move forward. I don’t think it’s something they're going to build a direct business on, as VMware has. For them, it’s simply a necessity, so that they can run in places like EC2.

The most important thing is, as Tony was just saying with Visual Basic, it all comes back to where you develop your application. Whatever you code in, the tool you’re using is going to dictate where you push that final application out. Whether it’s to your local server or to a cloud is irrelevant to you.

Whether you’re saving money going to a public cloud, for example, or you have your own investment internally doesn’t matter. The point is that Microsoft, to succeed, needs to have its application container. What I was saying is that Windows Server becomes, in effect, a WebSphere Application Server in the cloud, and it seems like they are heading in that direction. So, I think they’re going to be able to ride this virtualization wave.

Gardner: Perhaps it will allow Joe the Developer to have it his way. That is to say, develop in what you like and what you know, and target the Microsoft middleware functional set, as well as the containers the tools are integrated with and aligned to, but perhaps host all of that in a cloud.

Now, if Amazon is going to do it, then Microsoft is probably going to want to do it too -- and they more than likely will. It’s almost certain that Microsoft will have its own cloud. You use their tools, and perhaps their tools are in the cloud as well. So there's platform-as-a-service value for Microsoft.

That’s all well and good, and it certainly would cut total cost and demonstrate the value of doing it on Microsoft. However, their ability to charge for those services is going to be up against other commodity-level platform-as-a-service and cloud-service offerings. Microsoft’s ability to take money from each of these accounts, each of these developers, and each of these departments would be severely crippled under that circumstance.

It raises the question: In five years will Microsoft, on a revenue basis, be bigger, the same, or smaller?

Let's wrap up our discussion today by going around and asking that very question of each of our participants. Let's start with you, Brad. Brad Shimmin, Microsoft five years from now: bigger, the same, or smaller, revenue-wise?

Shimmin: I think they will be smaller revenue-wise, but they will be making more money from their infrastructure and their business applications than they have in the past.

Gardner: Good. Dave Linthicum, same question.

Linthicum: I already said they are going to be smaller. I think they're going to turn more into a cash-cow company. They’re going to have hooks into some of these new trends, but where they’re going to find their business model, and the culture within the company, are going to be the biggest factors preventing them from expanding their revenue.

Gardner: So, you see it as a smaller revenue and a smaller profit.

Linthicum: Smaller revenue, smaller profit, and smaller impact on the marketplace.

Gardner: Michael Meehan?

Meehan: Just because I think the economy will grow over the next five years -- almost because it has to -- I’m going to say they are going to be bigger in revenue but they will have smaller impact on the marketplace.

Gardner: Tony Baer?

Baer: I agree with Mike. The economy will grow and, more importantly, world markets will grow, and they just will not be the single biggest frog in the pond.

Gardner: Jim Kobielus?

Kobielus: I think they will be bigger, and their growth will be in packaged applications, analytics, BI, and performance management.

Gardner: Joe McKendrick.

McKendrick: I agree with what Mike originally said. They will be bigger, because the whole pie will be a lot larger in the next few years. Let’s face it, many competitors that have taken on Microsoft have had their heads handed to them on a plate over the years. Don’t underestimate the folks in Redmond.

Gardner: Very good. I’ll throw my two cents in. I think their revenues will be smaller, though not appreciably so, but their margins will continue to erode. That’s going to force them to pick and choose businesses more carefully, and to decide what they want to be when they grow up, rather than try to be everything to everybody.

Well, thanks everyone. This has been a good and fun discussion about Microsoft and their PDC. I want to thank all of our guests for joining.

I also want to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition Podcast Series, Active Endpoints, maker of the ActiveVOS Visual Orchestration System. I am your host and moderator Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to Volume 32 of our series. Thanks and come back next time.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Transcript of BriefingsDirect podcast on the outlook for Microsoft. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.