Wednesday, November 12, 2008

Webinar: IDC Research Shows SOA Adoption Deepens in Enterprises Based on Key Implementation Practices

Transcript of Oct. 14, 2008 webinar on SOA research and how companies are implementing SOA more strategically based on essential adoption best practices.

Listen to the podcast. Download the podcast. Access the Webinar. Learn more. Sponsor: Hewlett-Packard.

Download the IDC report "A Study in Critical Success Factors for SOA."

Introduction: Hello, and welcome to a special BriefingsDirect presentation, a podcast created from a recent Hewlett-Packard (HP) webinar on Service Oriented Architecture (SOA) adoption trends. The webinar examines recent findings and analysis from original IDC surveys and research into actual enterprise SOA use and the reported business outcomes.

We'll hear from executives at both IDC and HP on how SOA is rapidly increasing in its importance and value for developers, architects and IT strategists. The presentation is followed by a question and answer session from the live webinar audience.

Please now welcome the webinar moderator, BriefingsDirect's Dana Gardner.

Dana Gardner: Hello, and welcome to our live webcast, “Expanding SOA Adoption to Mainstream IT,” brought to you by HP and InfoWorld. I’m your moderator Dana Gardner, principal analyst at Interarbor Solutions.

Today we’re going to examine fresh research from IDC on SOA adoption patterns and what users of SOA identify as success factors. In other words, for those doing SOA right, what it is that led them there, and what is it that they’re doing that others can learn from, in terms of best practices and insight?

We’ll hear from Sandy Rogers, program director for SOA, Web services, and integration research at IDC. We’ll also hear from Kelly Emo, SOA product marketing manager for HP Software.

Now let's dig into SOA use today, and the adoption patterns that show how and why SOA is moving into mainstream IT. We’re seeing a move to strategic SOA value that supports business goals and not just SOA that supports IT goals, such as the benefits around code reuse or development agility.

We’re starting to see a great deal of movement to the strategic value of SOA, and that means moving toward the aspects that create the need for governance, develop SOA benefits across larger business processes, and start to show the paybacks in terms of actual business outcomes.

What are the factors that determine the success of SOA, generating that strategic and business level payback? Let's now go to Sandy Rogers at IDC, and learn more about her research into success in a SOA world.

Sandra Rogers: Thank you, Dana. Hello, everyone. First of all, what we want to do is take a look at what has brought us to this period of time -- where we now need to create enterprise-level systems.

Right now, we’re seeing a lot of systems where their foundations are basically breaking, and we’re dealing with a mixture of different generations, different types of systems, different ways that they were developed, different technologies, and different ways that they are and continue to be procured. Enterprises are challenged to address new and changing business requirements, and that volatility of business change is increasing at a very rapid rate.

Organizations are looking for much more consistency across enterprise activities and views, and are really finding a lot of competitive differentiation in being able to manage their processes more effectively. That requires the ability to stand across different types of systems and to respond, whether in a reactive mode or a proactive mode, to opportunities.

The types of individuals who are being served by these systems are different, and that’s because of the extended value chain, new types of workers entering the workforce, and many different business models that require either some type of self-service capability, or even more of a high-touch personalized type of engagement and experience with systems.

What we’re finding is that, as we go to this generation, SOA, in and of itself, is spawning the ability to address new types of models, such as event-based processing, model-based processing, cloud computing, and appliances. We’re really, as a foundation, looking to make a strategic move. With that kind of structure, it's also about balancing freedom.

So, moving on, what we see -- and this is a poll that was recently run by IDC this summer, primarily with mid- and large-sized organizations -- is that if they haven’t already adopted SOA, they are planning on it, and at greater levels of engagement. So, even where it is not going to be "the" standard for most or all systems, it is considered important and will be used for all new projects, or it is the preferred approach.

The issue is not necessarily deciding if they should go toward SOA. What we're finding is that for most organizations this is the way that they are going to move, and the question is just navigating how to best do that for the best value and for better success.

According to the same poll, what we show is the top IT objectives and challenges for SOA. We also asked for business objectives and IT objectives. What's different from past surveys that we've run is that the flexibility and agility to respond to changing business needs is actually number one now. In previous responses, that had been in the top three, but not necessarily the first one.

Most interesting are the top challenges in implementing SOA. All of our past studies reinforced that skills, availability of skills, and training in SOA continue to be a number one challenge. What’s really noticeable now is that setting up an SOA governance structure has become the second most-indicated challenge. These were the top 3 of 18 options.

In the past you may have seen security or other technical elements, interoperability, or maturity of standards. What this is telling us is that we have reached another stage of maturity, and that in order to move forward organizations will need to think about SOA as an overall program, and how it impacts both technology and people dimensions within the organization.

We find that when we ask this from a business objectives and challenges view, the business is looking at more efficient processes at greater levels of service and customer service throughout their entire environment. Some of their top challenges involve gaining agreement on what processes and services should exist, how to define those, and how to agree upon those, and also rallying individuals around support of budgeting and funding for SOA. This all points to an overall need to step up the ability to address this as a managed business discipline, versus a technology discipline.

Defining SOA Success


We wanted to look at how SOA's success is actually defined, even though SOA can have varying definitions amongst individuals, and what factors and practices have the most impact in those organizations that are successful. We wanted to see what tactics and technologies are being leveraged, how they are being leveraged, and how they are being introduced and expanded within the enterprise.

Then, we wanted to see what other words of advice experienced leaders want to impart to others, as we are seeing a next wave of adopters that may be a little bit new to SOA, versus those that had come before.

So, with this study, we did primary research, mostly with U.S.- and European-based companies who had successfully implemented enterprise SOA programs. Most of them had anywhere from two to two-and-a-half years, up to over eight years of experience. Some of these companies actually had started their SOA endeavors at the turn of the century.

They’re senior level individuals with enterprise perspective and they’re primarily from the IT ranks. They also have had certain levels of experience that might range from CIO to enterprise architect, to quality manager. So, we got a broad-brush view, but most of these individuals were actually charged with, or were a core part of who was driving their SOA initiative in their organization.

This was based on a semi-structured interview format. While we wanted to capture some basic information about the overall IT environment structure, the SOA initiatives in particular, the technology, topology, the business goals, and drivers of the organization, we really wanted to have that broad-brush view to present a context.

We also did this in a semi-structured way, in order not to skew the results and to unearth varying elements that may have influenced their success, despite what these individuals brought to the table. And, there was representation across various-sized companies and industry sectors.

A few of the overarching trends reinforced what we have been seeing in some of our studies. We are indeed moving from project- and application-level SOA to more of a system and enterprise scale. And, the short- and long-term impact of SOA needs to be better understood and addressed. Enterprises need to manage the expectations of the individuals in the organizations as to how their roles will be impacted, and what kind of benefits they may get on a short term basis, versus that long term view and accumulation, and they need to try to balance strategy with tactics.

While technologies are key enablers, most of the study participants focused on organizational and program dynamics as being key contributors to success. Through technology, they are able to influence the impact of the activities that they are introducing into the overall SOA program.

That success can be defined in multiple dimensions, but rising to the top, we found that, in part because of their roles, the pervasiveness of SOA adoption in the enterprise was a key determinant of how they were looking at it, whether their programs were gaining traction in the right ways, and were being successful. They were achieving clear business results, ones that could be measured.

When we gathered all of this information, we had many different tactics and activities, and some of them started to become repetitive in our research. We created a framework of varying components, and elements that impacted success. Then, we aggregated these into seven key domains, as we call them.

Domains of SOA interest

While different elements can impact each different domain, and vice versa, it was interesting trying to think of a way to present this. All the different domains impact one another. Therefore, if you’re able to handle trust, you’re able to influence organizational change management effectiveness. If you’re able to address business alignment, then you’ll have much more success in understanding the impact on architecture, and vice versa.

With this, it's much more of a network of different activities and components. The technologies interact on the network's foundation. When you really think about it, services basically form a network topology. SOA puts a wrapper around this environment, and tries to give it a foundation and a framework within which it will function effectively.

The seven domains are: Business Alignment, Organizational Change Management, Communication, Trust, Scale and Sustainability, Architecture, and Governance.

Now, we’re going to briefly go over these seven domains, and give some key trends that we discovered about how different activities and tactics can make each of these areas much more effective.

With business alignment, we discovered that with these organizations and these individuals, SOA is truly understood as a business discipline. Now, many of them did admit that, within their organization, they still have a way to go to educate, and to incorporate SOA as more of a business view. It's often seen as more of an IT agenda, but that is starting to change. But, they themselves, in the way they have approached the issues, and the way that they are thinking about their program, see it as a business discipline.

That alignment of business to what is being incorporated in the program, services, and processes that are being created means the involvement of key individuals who have a keen understanding of the business. Sometimes, it might mean involving someone at a high enough level within a business division who can see how activities within that business division may be impacted or may impact other divisions, yet have enough understanding of some of the details around what transpires, what incorporates the business, what defines the business -- the elements and the processes -- and get them involved in the overall initiative.

These individuals also can influence others. They not only influence outcomes of what is actually being created, but they can actually influence the acceptance rate, the understanding of services of SOA within their network of individuals, and within the activities that they are involved in.

Many of these individuals can help with reflecting on what the current state of the environment is. That can actually help define what future state might need to be created. Think about the overall framework. I believe that you can get access to this and that HP is making it available to you for more detail. I know it's difficult to see here on the screen.

One of the factors highlighted here is indeed getting those individuals really involved, and helping them with determining a key business taxonomy and vocabulary. It's very important that everyone get on the same page about how they are going to communicate with one another, and define that within an aligned business vocabulary appropriate to what is being represented.

If we look at the next domain, organizational change management, one of the key factors we found is that many of these individuals did what they could do to ease the transition to ensure that individuals may not have overly complex requirements posed to them at the onset. They tried to figure out what the existing modus operandi was, incorporate what they could into that, and move the organization along. The key is to disseminate that knowledge, and give tools and different technologies that can help with that change, without imparting too much complexity, which disrupts, and/or impacts the goals that they are trying to achieve on a day-to-day basis.

This is very important, versus past initiatives that we have seen, where organizations put together a road map, expected the individuals to follow certain protocols, but really didn't think about how that would impact everything else that these individuals were involved in.

It also gets them to think outside the box across the enterprise. A way to do that is to show how different services are being created and what it looks like, giving them an overall understanding of the program, and driving cooperation through examples, and helping with their understanding by presenting them with concrete examples in their context. Then, they can start to envision what other types of services and processes it could be impacting, and who to get also involved if they want to build out this network impact.

One thing we found is that it's important to have that overall view of the program, of the enterprise, and to have key individuals as part of the central architecture team, the center of excellence, or whatever, work side-by-side with individuals on the different divisional development teams, the different business analysts involved. They can start to understand and impact the outcomes of initial projects.

Then through a train-the-trainer approach, they can get more individuals involved in utilizing what could be at their disposal. As time goes on and these organizations start automating more of the capabilities in the foundation for SOA, all of these different parties understand how to engage with the systems, take advantage of it, and then can start envisioning how to utilize them to their own success.

Successful communication proves essential

The next major domain is around communication. It was very important to these leaders that they were seen as business leaders, as well as IT leaders. They were also evangelists, and politicians. They actually went out and did one-on-one discussions with key stakeholders. They would have discussions about what policies, protocols, and standards they were thinking about incorporating.

By the time those decisions needed to be agreed upon to gain that buy-in, a lot of that lobbying had happened behind the scenes, and there were a lot of lunch-and-learn type of sessions. That was really making those connections, and showing individuals how they could not only impact the success of the overall initiative, but also how they could actually gain some things from cooperating and drive that network effect.

In order to do that, a lot more visibility needed to be presented in a variety of different forms that these institutions used, and accessibility to this information is key. Trying not to have too many middlemen, and trying to automate it so individuals could find what they needed, was very important.

Many of them had designed their own dashboards and wikis, different ways to present information on the overall program. They were thinking about the details, introducing things like registries and repositories from a more technical dimension. Engaging in more of a collaborative atmosphere really drove a lot and also allowed individuals to communicate with one another, and not always through the central team. This was very important.

Some smaller initiatives may be able to run manually through a central group, but as soon as an initiative started to extend, that's where they found value in allowing different conversations and negotiations to happen directly -- a potential service consumer finding various services and knowing how to engage with the service provider. Having that as automated as possible will not disrupt their daily routine. It will make it easier and will also capture all the necessary information to go forward, without a lot of rework and a lot of hand-holding. Again, it was very important that that kind of visibility and accessibility be there.
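The kind of self-service visibility described here can be sketched with a toy in-memory registry. All of the fields, names, and endpoints below are hypothetical, for illustration only; real SOA registries and repositories define their own metadata models and APIs.

```python
# Toy in-memory service registry illustrating self-service discovery:
# consumers search by capability and read provider/contract metadata
# without going through a central team. Fields are illustrative only.

registry = [
    {"name": "CustomerLookup", "capability": "customer-data",
     "endpoint": "https://example.internal/customer",
     "provider": "crm-team", "status": "production"},
    {"name": "CreditCheck", "capability": "credit-scoring",
     "endpoint": "https://example.internal/credit",
     "provider": "risk-team", "status": "pilot"},
]

def find_services(capability, status="production"):
    """Return services matching a capability, filtered by lifecycle status."""
    return [s for s in registry
            if s["capability"] == capability and s["status"] == status]

# A would-be consumer discovers who owns a capability and where it lives.
for svc in find_services("customer-data"):
    print(f"{svc['name']} -> {svc['endpoint']} (owned by {svc['provider']})")
```

The design point is the one made above: once discovery and contact metadata are queryable, consumers and providers can negotiate directly, and the central team stops being a bottleneck.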

That dovetails with the next domain, which is really about building trust, enabling cooperation, and building in a sense of security that these services are going to run and be available for someone to consume. Developers can now think about that loose coupling and drive more of a SOA architecture in what they’re developing. That sense of security is beyond just traditional security. It's performance. It's ensuring that the availability of that service is going to be there under load. There are lots of different concerns.

We found that many companies standardized security in the form of services embedded within the framework, foundation, and infrastructure that they had created for the SOA program. That, in and of itself, started alleviating the pressure on individuals to know exactly what security protocols to cover. They ensured that identity rights could be protected, and mitigated their risks of inappropriate use of services.

They understood that this secure environment is important, and that the reputation of a service matters. That means validating that standards and policies were upheld and integrating what they could regarding testing and validation in the course of the lifecycle of developing their services. By the time a service was posted and available, that service could be depended on and there were no concerns about it. If this was being done manually or ad hoc, people found that they had different experiences.

One organization we interviewed found that over 50 percent of the services that were in production didn’t adhere to policies. As soon as they automated that and brought it up to 100 percent adherence, the network effect started to take over, regarding the consumption, use, and even the development of services.

The transparency and visibility into past behaviors and the history of the service is very important to individuals who are going to take that risk and take that step that they are not going to develop something on their own, but are going to reuse and capitalize on what is built out as this network of resources and services.

Architecture over technology


Of course, we can't talk about SOA without mentioning architecture, and what we kept hearing over and over again was that architecture should come first over technology. Many of these companies had to come up with a reference architecture and had to prototype the technology and the reference architecture, so that it would be able to address reality.

In the past, I’ve heard, some of these enterprise architecture teams actually did, beyond a proof of concept, prototype the varying scenarios that they were likely to encounter in rolling out the architecture to more entities, more systems, and more system types.

So, setting the standards, setting the reference architecture, defining messages, schemas, and protocols -- more so than mandating the specific use of certain technologies -- was a real learning experience for these organizations. Large organizations, especially, may not be able to influence every technology that will be used or can't envision every single technology that will be introduced to the environment in the future.

What they needed to do was focus on all the architectural dimensions, best practices, and standards, and define those, so that when they needed to test whether technology could interact appropriately, they had much of this defined. That allowed that level of flexibility to flow through to the different divisions and teams. Thinking about architecture beyond technology, about the process of how you are dealing with setting up architecture, how you are engaging with architecture, and seeing it through its different stages and lifecycle are very important.

Updating process and SOA technologies, vis-à-vis the overall IT environment, started to surface as a key concern in what needed to transpire. You can't treat SOA as a silo. You need to think about this as an overall environment that incorporates many different options. One of the major reasons people are moving to SOA is to take advantage of a heterogeneous mix of resources, whether internal or external to an organization.

Thinking through those dynamics leads to the next domain, which is scale and sustainability, to prepare for that viral network effect and to automate and test for higher volumes of demands early on. We found some organizations that had either built some of their own technologies, or had sort of incorporated things that were available very early on in the decade. When it came time to scale out, and there was more pressure imposed on these systems, they really couldn’t handle the demand.

What we learned from them is to really make sure that you test that early. Even though you may not have the volume now, you may some day, and even one service could get hit by an unknown amount of consumption. Being able to prepare for that, and prepare your architecture, as well as your policies and your governance processes, to be more distributed and federated to prepare for more of a federated type of environment, means that these policies and technologies can scale out.

We found that it's a learning experience to get the right definition around a service, the right fit, and the right granularity. That's something that comes with experience, and you might need to go through a couple of iterations of services before people understand how best to keep volatility down on services, put the right level of abstracted tiers in place for processes and rules, and plan and test that you have the right levels. Then, you can scale out.

That may mean atomic services at some levels and more broad-grained services at others, and seeing how that impacts the infrastructure, and testing that out in all the different scenarios and dependencies that could exist.
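The atomic-versus-coarse-grained trade-off can be illustrated with a small sketch. Every service and field name here is hypothetical; the point is only the pattern of composing fine-grained operations behind a coarser, process-oriented interface.

```python
# Illustrative contrast between fine-grained (atomic) operations and a
# coarser-grained service composed from them. All names are hypothetical.

# Atomic services: small and reusable, but chatty if consumers must
# orchestrate them individually.
def get_customer(customer_id):
    return {"id": customer_id, "name": "Acme Corp"}

def get_open_orders(customer_id):
    return [{"order_id": 1, "total": 250.0}, {"order_id": 2, "total": 100.0}]

def get_credit_limit(customer_id):
    return 1000.0

# Coarse-grained service: one call that composes the atomic ones,
# keeping volatility behind a stable, process-oriented interface.
def get_account_summary(customer_id):
    orders = get_open_orders(customer_id)
    order_total = sum(o["total"] for o in orders)
    return {
        "customer": get_customer(customer_id),
        "open_order_total": order_total,
        "credit_remaining": get_credit_limit(customer_id) - order_total,
    }

summary = get_account_summary(42)
print(summary["open_order_total"])   # 350.0
print(summary["credit_remaining"])   # 650.0
```

If the atomic services later change, only the coarse-grained composition needs rework, which is one way to keep volatility down as the speaker describes.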

Governance helps assure ongoing success


Last, but not least, the final domain, one that impacts everyone, was the issue of bringing in proper governance. Thinking about it in terms of balancing control with empowerment and driving consistency throughout the environment is very important. People needed to plan the processes and ramp up the speed of development and deployment into production, and having that level of consistency gave them huge amounts of business value, savings, efficiencies, and opportunities in the market to differentiate and compete.

Making moves early to automate was a learning experience for a lot of these organizations that didn't do this. They found that they could have not only expanded their programs more effectively, but could have mitigated their risks. They could have avoided a lot of rework, had they automated governance processes earlier on, integrating governance into the overall flow, as we stated before, and thinking about it not just as “these are the things that need to be met and this is the information that needs to be captured.” It's really thinking about the processes.

What happens is that there are exceptions to deal with, review committees that should be involved, and the right roles and responsibilities to determine, and sometimes that may mean amending what you have.

In other instances, it was creating anew. We found in other studies that a lot of organizations did not have strong governance. SOA almost forces these companies to do what they should have been doing all along around incorporating the right procedures around governance, and making that a non-intrusive approach.

If you make it too complex, no one is going to follow it. If you make it a mandated activity, without a lot of tools to help facilitate, it becomes a chore to do that non-intrusively. Also speaking to non-intrusive runtime governance, many of these organizations found that you really should have a centralized foundation of runtime governance incorporated into the fabric of the SOA architecture, and technology infrastructure.

With all the right monitoring and management around services, and the particular portions of this network that are integrated together, you can gain that overall perspective and drive what’s necessary to move forward, both from a business goal perspective and from an IT topology perspective.
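A minimal sketch of the non-intrusive runtime governance just described might wrap each service call with policy enforcement and monitoring, so individual teams don't reimplement it. The decorator, roles, and service below are all hypothetical, not any vendor's actual mechanism.

```python
# Hypothetical sketch of non-intrusive runtime governance: a wrapper
# that enforces an authorization policy and records basic monitoring
# data around every service call. Names are illustrative only.

import time

call_log = []  # central monitoring store (in-memory for this sketch)

def governed(service_name, allowed_roles):
    """Decorator applying an authorization policy plus call monitoring."""
    def wrap(fn):
        def inner(caller_role, *args, **kwargs):
            if caller_role not in allowed_roles:
                call_log.append((service_name, caller_role, "DENIED", 0.0))
                raise PermissionError(f"{caller_role} may not call {service_name}")
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            call_log.append((service_name, caller_role, "OK",
                             time.perf_counter() - start))
            return result
        return inner
    return wrap

@governed("getQuote", allowed_roles={"sales", "partner"})
def get_quote(product_id):
    return {"product": product_id, "price": 99.0}

print(get_quote("sales", "widget-7")["price"])  # 99.0
```

Because the policy and monitoring live in the shared wrapper rather than in each service, teams get the centralized foundation the speaker advocates without changing their daily routine.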

To quickly wrap up, here are some additional words of advice from the field. We found that enforcing policies, and not putting off governance till later on, was very important, as was putting more effort into business modeling, which many of these organizations are doing now. They said that they wished they had done a little bit more of that when thinking about the services that were created, and focused on preparing the architecture for much more process change and innovation.

So, with that, I’d like to hand this off to Kelly Emo to speak on HP's offerings in the SOA space.

Kelly Emo: Thank you, Sandy. And good morning and afternoon, everyone. I’m just going to wrap up the webinar with about 10 minutes or so. I’m going to hit four main points that I think will dovetail nicely with what you heard today from Sandy Rogers, some great insights coming from customers who have “been there, done that,” and have been working with SOA for a while. It’s real advice that we all can learn from.

I’m going to dive now into a little bit more context around why, if you’re doing SOA, now is absolutely the right time to be thinking about and working on a governance program. I’m going to share a few key governance-specific best practices that we've also gleaned from our customers who have been down the road of their SOA journey.

We’ll talk a little bit about the value of using an automated SOA governance platform, to help automate those manual activities and get you there faster. And, I’m going to wrap up with one customer success story, a customer who is almost complete with their SOA journey -- about 70 percent there -- and who sees significant business benefits through their investments in not only SOA, but SOA governance, and SOA management.

You heard from IDC the seven critical SOA success factors that came from this in-depth analysis of customers. The point that I want to reiterate here that was so powerful in this discussion is the idea that the seven domains are linked. By putting energy and effort in any one of them, you are setting yourself up for more success across the board.

What we are going to do now is drill down into that domain of governance. You’ll see as we talk through some of the key capabilities for SOA moving to the enterprise from a governance perspective, how it will help establish other success factors, like building out trust, or facilitating communication across IT silos, for example.

I’m just going to touch on this briefly. It's interesting here. HP has used this graphic for quite a while in talking about the need for governance -- having a governance program that helps bring together the different IT stakeholders that play a part in the successful delivery and realization of business services, services that return the results that the business expects and that behave as expected, so that they can be easily consumed and reused for even more responsiveness and agility.

This graphic rings true even more now with the kind of pressures that our businesses and IT are under to realize the results of their investment in SOA faster with existing resources. What we’re seeing again and again with customers who have been implementing SOA and going down to the path of scaling it out, is that they have to invest in processes and best practices to not only deliver services, but to ensure that the services are of the highest quality, that they can be managed over time, and that policies are consistently applied, so that we can handle events like change and new consumption in a way that delivers the result that we expect.

We see many of our customers now crossing the enterprise scalability divide with their SOA, looking to incorporate SOA into their mainstream IT organizations, and they’re seeing the benefits of that initial investment in governance help them make that leap.

So why invest in SOA governance now? It's an interesting question I’ve been getting a lot lately. “Hey, you know, we’re under a lot of economic pressure, budgets are tight, there's fewer resources to do the same work.” This sounds counter-intuitive, absolutely, but this is the right time to make that investment in SOA governance, because the benefits are going to pay off significantly.

SOA governance is all about helping IT get to the expected business benefits of their SOA. You can think of SOA governance, in essence, as IT's navigation system to get to the end goal of SOA. What it's going to help IT do, as they look to scale SOA out, is to more broadly foster trust across those distributed domains. It's going to help become a catalyst for communication and collaboration, and it's going to help jump-start that non-expert staff.

You may recall that Sandy mentioned one of the biggest challenges with SOA is building out the education and expertise among the staff. If governance can assist in shrinking that learning curve, and enable IT organizations to understand the unique attributes of SOA and what processes need to be applied to successfully realize their SOA goal, that will help accelerate the transformation that needs to occur from the SOA perspective.

The thing that's key about governance is that it helps integrate those silos of IT. It helps integrate the folks who are responsible for designing services with those who actually have to develop the back end implementations and with those who are doing the testing of performance and functionality. Ultimately, it integrates them with the organizations that are responsible for both deploying the services and the policies and integration logic that will support accessing those services.

So, governance becomes the catalyst for integrating these silos and facilitating communication. One of the keys, one of the best practices we are seeing across these customers, is that they approach governance from a lifecycle perspective. They are not just thinking of one aspect, but they are actually thinking of all the different collaboration points, all the different key decisions that need to be made, as a service goes from initial concept, into its design, into the development organizations that are responsible for delivering the implementations, out into the QA organizations that are responsible for defining the requirements for testing aspects of those services.

This includes the functional test, the performance test, the security test, and then out into the operational teams. These teams will be responsible for deploying services into the network, and understanding the implications on the stacks, and data sources, services access, and those that are responsible for deploying policies, such as authentication policies, authorization policies, the protocol mediations, and all the way back into the change process.

On ramps to governance

I'm not saying that an organization has to automate and create a complete governance infrastructure for all aspects of the lifecycle on day one. Certainly, there’s going to be the starting point that's going to make the most sense, based on the organization's maturity. Maybe the first thing to bite off is automating how organizations can get visibility of services and putting some automated policy checks in place on the design side for testing supportive standards and interoperability.

By keeping a perspective on lifecycle governance, your organization can be primed and ready to handle SOA, as it scales, as more and more services go into production, and more and more services are deemed to be ready for consumption and reuse into new composite applications.

The key is to keep a service lifecycle governance perspective in mind as you go about your governance program, and automation is key. Sandy touched on this, and my intent here is not to talk through all aspects of this slide, but just to show you that there are a number of different aspects of governance that can be automated. If they are, there will be a significant efficiency payoff downstream.

Automating policy compliance can bring a huge payoff. Sandy mentioned an example where a customer went into their governance program assuming that people were doing the right thing and found initially that fewer than 50 percent of the services being built were actually in compliance with the design policies that had been established to meet their corporate and IT objectives.

Automation ensures that those issues are caught quickly and that the collaboration and processes can be put in place to change that behavior. Ultimately, it feeds the network effect that Sandy talked about, so that services are not only in compliance, but compliance becomes something to brag about.
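As an illustrative sketch of what such automated compliance checking might look like (the policy rules and service records below are hypothetical, not HP Systinet's actual API), design policies can be expressed as simple predicates and run against every registered service:

```python
# Illustrative sketch: measuring design-policy compliance across services.
# The policies and service metadata are hypothetical, not a real registry API.

def check_compliance(service, policies):
    """Return the names of the design policies this service violates."""
    return [name for name, rule in policies.items() if not rule(service)]

def compliance_rate(services, policies):
    """Fraction of services that violate no design policy."""
    compliant = [s for s in services if not check_compliance(s, policies)]
    return len(compliant) / len(services)

# Example design policies, like those a governance program might automate
policies = {
    "wsdl-documented": lambda s: s.get("has_wsdl", False),
    "uses-approved-schema": lambda s: s.get("schema") in {"customer-v2", "order-v3"},
}

services = [
    {"name": "getCustomer", "has_wsdl": True, "schema": "customer-v2"},
    {"name": "legacyOrder", "has_wsdl": False, "schema": "order-v1"},
]

print(compliance_rate(services, policies))  # 0.5 -- half in compliance
```

A real registry would evaluate far richer policies (WS-I profiles, naming standards, security requirements), but the payoff is the same: a compliance number management can see, instead of an assumption that everyone is doing the right thing.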

This next slide makes a point that I think is important. I talk to a lot of folks about SOA, and a service lifecycle is very different from a traditional application lifecycle. It builds on the concept of an app lifecycle, where you have a plan, build, test, deploy, and change cycle, but you also have the impact of consumption. Those consuming the services have a lifecycle of their own.

It involves planning what services they are going to consume in building out the composite application, locating those services, engaging in a contractual relationship with the service provider, and ultimately testing and delivering that composite application. Then, if there is a change on either side of the equation, a change to the underlying service or a change to the composite application, that, in turn, impacts both the service and the composite application from an expectations perspective and, potentially, from a performance and quality perspective.

The SOA lifecycle can actually be thought of as concentric circles. It's an iterative, living, breathing thing, and that's why service lifecycle governance is so key: to keep all these dependencies and this symbiotic relationship moving smoothly forward in terms of meeting business objectives over time, not just at the initial deployment, but as both services and consumption patterns change. And governance is a powerful tool to manage this complexity downstream.

So what I am going to show you next are a couple of key things you can do with the governance program to help you manage and scale. The first is really thinking about this idea of the service manager. I talked about the lifecycles of providing a service and consuming a service. A service manager is either a real or virtual person who is responsible for ensuring that the service is delivering against the goals of the business, not just at its initial deployment, but as different people are consuming it.

When I say a “virtual person,” I realize how funny that sounds. I don't literally mean a virtual person. What I mean is that it could be the service provider, or could be a committee of service providers, key service consumers, and maybe a line-of-business owner.

Manage the services like a service

What we are finding more and more now is that organizations are actually investing in a role known as the service manager, someone who oversees the implications of delivering a service over time, as well as those who are consuming it. I see this as a best practice that can be supported by SOA governance, which helps empower service managers by giving them a foundation to set up policies and gain visibility into how a service is meeting its objectives and who is consuming it.

Sandy talked a little bit about this as well. This is another best practice we are seeing among mature customers that are being successful, and it shows in the way they deploy SOA governance. That's bringing together the robust and well-developed processes around SOA governance and quality assurance, and creating a collaborative environment between those who are responsible for managing the entire testing process for new applications and services and those who are involved with the initial planning, the design of services, and the governance of services.

You can actually get a dialog going between your enterprise architecture and planning teams, your development teams, and your testing teams about expectations and requirements right upfront, as the concept of the service is being fleshed out.

Get that communication occurring, so that everybody knows what key aspects we are going to test, how we are going to deliver it, what the expectations are, and what the quality policies are that will drive governance decisions downstream.

A really simple example comes from a customer who automated a very simple policy: if a service has any critical or serious defects, we won't allow it to be pushed into the staging environment, but rather we'll flag it, bring it back into the testing process, and have a discussion around how we're going to mitigate those defects. That doesn't sound like a lot of work, but as the number of services scaled up, it was important to automate that collaboration and ensure that things didn't fall through the cracks.

Ultimately, it's about driving collaboration between the enterprise architecture and development team and the quality assurance team through automated governance, connected to quality. The same thing can be said about operations, by aligning and integrating the processes and knowledge that’s gleaned through the planning, and design processes, and the governance processes about how services are expected to behave once they go into deployment.

With the information that flows to the operations team, what we avoid is throwing a service over the wall and leaving operations to figure it out. As a result, the SOA and operational aspects of a service that ultimately get realized in production align with the original expectations that were established.

You get the service behavior that was originally intended, and as your SOA scales and you get more and more services out there, this becomes essential. It keeps that line of communication seamless and flowing between the planning side and the delivery side, so that you get the behavior that meets the needs of the service consumers, and ultimately the business. And again, automated governance can help with this.

HP just recently announced the third generation of our automated governance platform, HP SOA Systinet. We're positioning this as IT's navigation system for SOA. It's designed at the core to guide our customers through their SOA journey by automating the governance aspects of the service lifecycle, ensuring that the policies that are defined and automated on the design, planning, and build side, as well as on the testing and run side, map to the goals of the business and IT.

There are a number of functions inside Systinet designed to empower that concept of the service manager, supporting both parts of the lifecycle, providing a service and consuming services, and supporting collaboration between those responsible for providing services and those who want to engage in a contractual relationship with them. Ultimately, the idea behind Systinet 3.0 is to support IT's objective of scaling out SOA across the enterprise to realize the business value of SOA investments faster, which is so important in today's environment.

Success stories from the field

Let's talk about a customer who has actually done this. This is the success story of a major European telecom provider. They've implemented approximately 70 percent of their SOA objectives, and right from the get-go, they made an investment in SOA governance. What they have seen over the last three years is an ROI of 327 percent, and it has benefited them in four main dimensions.

First, they've been able to reduce the amount of downtime in the provisioning and delivery of new mobile subscriber services. A lot of this has to do with the fact that the services being delivered have been designed in compliance with policies and have been tested and confirmed to deliver the behavior that was expected in terms of how they execute an operation.

The customer has also been able to increase customer retention, which has really resulted from two things: faster delivery of new services and reduced downtime.

They've been able to reduce the time to market for new services, because of tighter communication flows and assured interoperability and compliance.

And, they have seen an overall IT cost reduction of a significant amount of money, almost a million dollars, thanks to their investment in SOA governance from the get-go.

Investing in SOA governance, as I mentioned at the very beginning of my presentation, may require a little investment upfront, but it can have a significant and powerful payoff downstream, when you move SOA out into the mainstream.

With that, I'm just about out of time, and I want to make sure that we have some time for Q and A for both Sandy and myself. So, here is a pointer to where you can learn more, as both Dana and Sandy mentioned upfront. The IDC research is available at www.hp.com/go/soa, so you can download the reports and dig into the specific detail behind what was shared with you today.

So, I’m going to turn it back to Dana and let him lead Q and A.

Gardner: Thank you, Kelly. We've had a few questions come in, and I'm going to direct the first one -- from me, actually -- to Sandy. I wanted to find out whether there were any real surprises, any unexpected results, when you went out into the field to uncover SOA practices. Was there anything that caught you by surprise?

Rogers: What was interesting was that, when it comes to providing metrics around the SOA experience, we have a long way to go. A lot of organizations knew about different approaches to SOA management and monitoring, and to understanding dimensions of the environment, well before they made those investments.

Then, they still followed the same pattern we have seen in past generations of technology. When those were first being introduced into the marketplace, people said they weren't going to make the same sort of mistakes. It seems that SOA is very much like any other initiative.

What they are doing now is a really fast catch-up, and they're finding a lot of immediate value from doing that. Part of that had to do with gaining buy-in and justifying the investments. What most of these organizations are discovering is that the last mile is dealing with that funding hurdle, showing that kind of value. What they're realizing is that if they had had these kinds of capabilities, they would have been able to measure that value much earlier.

It's more a word of advice that we were getting than a real surprise, but it's something that, as an industry watcher, you just sort of see. Also, a lot of these organizations are indeed starting to tackle Web 2.0 and mashups for other kinds of dimensions within their organizations. They really see it as an overarching trend. They don't see these as separate technology initiatives, and that's actually a pleasant view, and a surprise to me as an analyst. That sort of holistic view is starting to take hold.

Gardner: Here's another question directed at Sandy. You mentioned that automating governance is an important element. What best practices have you seen for convincing management in these enterprises to start an automated governance program?

Rogers: I was mentioning that just a moment ago. The kind of visibility that you are able to give to management presents information on what services are being incorporated. If those services are designed well, you can actually track them to key performance indicators (KPIs) and business measures and understand how this can be justified and funded. That kind of visibility is very important.

The other thing with automating governance, and I think Kelly referred to this, is that you do not need to do it all at once. It really targets protecting the environment: being able to automate as much as possible, having standard services and schemas automatically populated in the tools, and having that shared-metadata concept start to expand. So, whether you're creating services from one tool or another, that metadata is being captured. As time proceeds, you act on that metadata in the form of the kinds of policies you need, and the threshold you can measure and achieve rises as part of an overall process.

Gardner: This is a question for Kelly. The European company you mentioned, the one with the 320 percent-plus return on investment -- did they start their SOA with SOA management or governance? What best practices have you learned about when to start managing SOA?

Emo: It's a very good question. In that particular scenario, they actually started first with management, because they were having downtime issues. What they found right away, as they started to instrument their SOA environment and understand where the issues were taking place, was that part of the problem was inconsistency in terms of what the operational team understood they were supposed to do when they provisioned a service from a load-balancing, security, and overall performance perspective.

They found very rapidly that if they put governance in place to capture those expectations back on the design side, and then communicate those expectations to operations, they were able to close that gap. They are now at a point where they're automating production deployment of services using templates that come out of the governance side. So, they're seeing additional time savings in terms of how quickly they can provision new mobile subscriber services to their customer base.

Gardner: Okay, I have another. I think it's directed to Sandy. SOA is not happening in a vacuum. There are other major undertakings in IT departments and across enterprises: virtualization, cloud computing, and data center consolidation have all been quite prominent lately. The question is, how does SOA help or align with these other goals that IT is tackling?

Rogers: When it comes to being able to support an effort like consolidation, the whole idea behind what a lot of people are doing with SOA is to consolidate on core functional and information elements, and then share those with the rest of the enterprise, across different applications and systems.

So, it dovetails very nicely with consolidation. What a lot of organizations might do is try to consolidate first and then think about enabling with services. However, being able to expose varying parts of different systems and have that visibility into the core services and how they are being used can actually facilitate both consolidation and modernization efforts.

When it comes to initiatives like cloud computing and virtualization, it's really about thinking through the overall architecture: what kinds of interfaces you have, what kinds of services need to be supported in a virtualized environment, being able to componentize and modularize them, and allowing for the necessary interoperability.

When you think about virtualizing systems, and when you think about that overarching idea of on-demand cloud computing, the first step is interoperability. We found that a lot of on-demand providers who didn't go down the Web services and service-oriented route had to go back and re-architect their solutions. It's important to have that kind of interoperability facilitated on a standardized basis to enable those kinds of activities to proceed.

Gardner: I believe we’ve run out of time. I certainly want to thank our guests. We’ve been speaking with Sandra Rogers of IDC, and Kelly Emo of HP Software. This webcast is sponsored by HP Software and includes commissioned research from IDC.

You can get more information at www.hp.com/go/soa.

This is your moderator, Dana Gardner, principal analyst at Interarbor Solutions. Thank you all for joining the webcast.

You’ve been listening to a sponsored BriefingsDirect Podcast on SOA adoption patterns based on original research from IDC and HP. Thanks for listening and come back next time.


Monday, November 10, 2008

Solving IT Energy Use Issues Requires Holistic Approach to Efficiency Planning and Management

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on the critical and global problem of energy management for IT operations and data centers.

We will take a look at energy demand, supply, costs, and ways to develop a complete management perspective across the entire IT energy equation. The goal is to find innovative means to conservation so that existing facilities don't need to be expanded or replaced.

Good energy management is not as simple as just less hardware or better cooling, but it really requires an enterprise-by-enterprise examination of the "many sins" of energy and resources misuse.

In order to put into practice longer-term benefits, behaviors, and measurements, the whole picture needs to be taken into consideration. The goal, of course, is to promote a low-risk matching of energy supply and cost with the lowest IT energy demand possible.

To help us examine these important topics we’re joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group. Welcome to our podcast, Ian.

Ian Jagger: Thank you, happy to be here.

Gardner: We’re also joined by Andrew Fisher. He is the manager of technology strategy in the Industry Standard Services group at HP. Welcome, Andrew.

Andrew Fisher: Thank you, very much.

Gardner: Let's take a look first at the broad picture of larger trends in this whole energy equation. As I say, it's not simple. There are a lot of moving parts, and there are a lot of megatrends and external factors involved as well.

I suppose the first thing to look at is capacity. I’d like to direct this to Ian. How critical is the situation now where large enterprises with vast data centers are actually facing an energy crisis?

Jagger: I think it's quite critical, Dana. Data centers typically were not designed for the computing loads that are available to us today, and they have been caught out. Enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring in enough power to supply the future capacity needs of their IT infrastructure.

Gardner: Now, at the most general level, is this a case where there is not enough electricity available or that the growth and demand of electricity is just growing so quickly, or both?

Jagger: I think it's both, and there is also a third level, which is how adequate the cooling design is within the data center itself. So, it is a question of how much power is available, how much can be drawn into the data center, what the capacity of the data center is, and, as I said, how it is cooled.

Gardner: There are also, of course, green concerns. There are issues around carbon and pollution, and mandates around these issues. We are also faced with regulatory and compliance issues of a separate nature, and many organizations are behaving more like service bureaus, where they have service-level agreements.

So there is not too much wiggle room in terms of what needs to be adhered to from compliance and/or service levels. What are the variables that companies need to first start focusing on in order to better execute their management of energy?

Fisher: That's a good question. One of the most important things to understand is how they have allocated power within that data center. There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.

Gardner: This does vary from region to region, and HP being a global company, perhaps we should also take a look at the fact that in the United States, for example, there are limitations from the grid. The capacity of moving energy, even if it can be generated, is an issue, and in the U.K., apparently in the London area at least, there’s been somewhat of a lockdown in terms of use restrictions around the Olympics.

Ian, perhaps you could fill us in a little bit on some of the regional impacts and how this is supercritical perhaps in some areas more than others.

Jagger: I think you have just got it with the example you used. It does vary region to region, depending on the capacity of the grid, the ability to distribute along the grid, and how that impacts customers geographically. It's not just about power distribution and generation; it's also about the nascent situation with respect to compliance.

In Europe, we are now seeing countries, particularly the U.K., that have taken the lead in terms of carbon reduction. Legislation is coming online, kicking in from 2010, with compliance requirements from 2009, where the top 5,000 or so companies -- companies that use a given volume or value of energy -- have to justify that usage in terms of purchasing carbon credits, which are set against them.

Each of those companies -- and this includes HP U.K. -- needs to establish what its energy usage is and show a roadmap for how it can reduce that year over year toward the legislation that's in play there. It's only a matter of time before that's applied in the U.S. too.

Gardner: Now, we recognize that this is a large problem. Many components -- I have heard the phrase “many sins” -- are involved. I wonder if either of you, or perhaps both, could fill us in a little bit about what are the types of past behaviors, approaches, mentalities, and philosophies about energy that need to be reexamined in order to get closer to where we need to go.

Jagger: I think the contrast among the silos of facilities, real estate, and IT is rooted in the tension between cost and availability. You mentioned service levels earlier. From an IT perspective, that means service-level agreements to the business in terms of availability, the uptime of equipment. But from the real estate and facility perspective, it's about cost control, and CAPEX and OPEX with respect to the facility itself.

They have tended to operate in independent silos, but now there is a general problem overriding both of those departments: the cost of energy. The cost of energy is now typically approaching 10 percent of IT budgets, and that's significant. It has become a common problem for both departments to address. If they don't address it themselves, then I am sure a CEO or a CFO will help them along that path.

Gardner: How about it, Andy? What sort of sins unfortunately have people overlooked as a result of lower energy cost in the past, but that really can't be overlooked now?

Fisher: First of all, it's a complex system. When you look at the total process of delivering energy -- from where it comes in from the utility feed, through distributing it throughout the data center with UPS or backup power capability, through the actual IT equipment itself, and finally to the cooling on the back end that removes the heat from the data center -- there are a thousand points of opportunity to improve overall efficiency.

To complicate it even further, there are a lot of organizational or behavioral issues, as Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve. So, there is rarely a single silver bullet for this complex problem.

You need a complete end-to-end solution that involves everything from analysis of your operational processes and behavioral issues, to how you are configuring your data center -- whether you have hot-aisle/cold-aisle configurations, and so on -- to optimizing the efficiency of power delivery and making sure that you are getting the best performance per watt out of the IT equipment itself. Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs.

One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money. At HP, we have a wide range of capabilities, including our EYP Mission Critical Facilities services, to help you analyze those operational issues as well as structural ones and make recommendations, in addition to products that are more efficient as well.

Gardner: You raise a couple of interesting points. It's hard to fix something that you can't measure. What are the basic measurement guidelines for energy use?

I have heard of Data Center Infrastructure Efficiency (DCiE). There is also Power Usage Effectiveness (PUE). How does a large organization start to get a handle on this? As you say, it's been a siloed problem in the past; now it needs to be tackled head-on.

Jagger: You have touched on the principal benchmarks used across the industry there -- PUE and the infrastructure efficiency ratio, which is the inverse of PUE. Put very simply, PUE is the total power coming into the data center divided by the amount of power required for computing purposes. So, how efficient is the data center in the share of overall power that actually goes to computing?

In other words, if you need one kilowatt for computing, and your PUE is two-and-a-half, then you need to be bringing 2.5 kilowatts to the wall to be able to run those computers.
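That arithmetic is just a ratio, and can be sketched directly from Ian's example (function names here are illustrative):

```python
# PUE and its inverse (DCiE), as described above.

def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT (computing) power."""
    return total_facility_kw / it_kw

def facility_power_needed(it_kw, pue_value):
    """Power you must bring 'to the wall' to run a given IT load at a given PUE."""
    return it_kw * pue_value

def dcie(total_facility_kw, it_kw):
    """Infrastructure efficiency ratio (DCiE): the inverse of PUE."""
    return it_kw / total_facility_kw

print(facility_power_needed(1.0, 2.5))  # 2.5 kW, as in Ian's example
print(pue(2.5, 1.0))                    # 2.5
print(dcie(2.5, 1.0))                   # 0.4
```

A lower PUE (closer to 1.0) or a higher DCiE (closer to 1.0 from below) means less power is lost to cooling and distribution overhead.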

They are not perfect, and there are industry bodies looking to drive greater precision out of this. For example, PUE is a Green Grid rating system that is generally used, but the Green Grid itself is looking to migrate to the inverse ratio, the data center infrastructure efficiency ratio, and use that going forward as it develops the next level.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But typically we can get beyond that, look at annualized values of energy usage, and then take measurements from that point.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy.

Gardner: That dovetails, of course, with a number of other initiatives we have underway, such as virtualization, application modernization, winnowing out apps that aren't being used very much. Service-oriented architecture (SOA) encourages reuse and making sure that common services are supported efficiently.

There is also data center unification and modernization of hardware. All these things come together and ultimately increase utilization, which then changes the energy equation.

The question is how do we make these things work in concert? How is there some coordination between getting the right mix on energy along with some of these other initiatives? Why don't we start with Ian on that?

Jagger: They feed off each other. If you look at virtualizing the environment, then the facility design, or the cooling design, for that environment would be different. Once you are in a virtualized environment, suddenly you are designing for something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria. You're using four to eight times the wattage in comparison, and that, in turn, requires stricter floor management.

But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well. I guess Andy would have some thoughts there.

Fisher: There is a wide range of opportunities. Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So, simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.

Gardner: Once these organizations start hitting the wall on energy, it behooves them to look at some of these other initiatives, rather than just saying, “Wow, we need another data center at 10, 20, maybe 100 million dollars.” Is that more the philosophy here -- be smart not big?

Fisher: Absolutely. There is a substantial opportunity to extend the life of your data center, and I recommend that you give HP a call and talk to us here. We have a wide range of things that we can help with.

Ian can talk to the services here in a second, but from a product perspective, we’re bringing to market new capabilities in terms of efficiency of the platforms to help you reduce that total energy consumption of the IT equipment itself. We’re also working on unique ways of reclaiming existing capacity. Instead of having to build another 50 or 100-million-dollar data center, you can live longer in the data center that you have.

Gardner: I suppose one of the fundamental shifts recently, with the cost of energy going up considerably, is that the return on investment (ROI) equation shifts as well. If I were selling systems, I'd need to know, given the harsh economic climate, that I have a good ROI story -- that if you invest $10, you can save $15 over X amount of time. The energy factor now plays a much larger role in that.

Perhaps, Andy, you could tell us a little bit about how the cost of energy, instead of being an afterthought, is now a forethought when it comes to deciding whether these modernization efforts are worthwhile.

Fisher: We look at it both from an OPEX, or your monthly cost of electricity -- and that’s rising rapidly, as the cost of energy goes up -- as well as from a CAPEX perspective, with your investment in your data center.

The first thing is to optimize your CAPEX investment, the money you have already sunk into your data center. You want to make sure that from an investment perspective you don't have to lay out another huge chunk of money to build another data center. So, number one, we want to optimize on the CAPEX side and make sure that you are using what you have most effectively.

But, from an operational cost perspective, it's really about reducing your total energy consumption. You can approach that initially from optimizing the energy use of your IT equipment itself, because that is core to the PUE calculation that we talked about.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.

There are other opportunities as well. We’ve introduced products that help you optimize your cooling, which typically accounts for 50 percent or more of your total energy budget. By making sure that you fine-tune your cooling to meet the actual demand of your IT, you can make substantial reductions in your monthly electric bill.
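The “multiplying effect” described above follows directly from the PUE (Power Usage Effectiveness) metric mentioned earlier, which is total facility power divided by IT equipment power. A minimal sketch, with all wattage figures and the PUE value assumed purely for illustration:

```python
# PUE = total facility power / IT equipment power.
# Every watt removed from the IT load also removes the facility overhead
# (cooling, power delivery) needed to support it -- the multiplying effect.

def total_facility_power(it_watts, pue):
    """Total power drawn by the facility for a given IT load and PUE."""
    return it_watts * pue

pue = 2.0  # assumed: one watt of overhead for every watt of IT load
before = total_facility_power(500_000, pue)  # 500 kW IT load (assumed)
after = total_facility_power(400_000, pue)   # 100 kW of IT load eliminated

# Eliminating 100 kW of IT load saves 200 kW at the meter (at PUE 2.0),
# because the cooling and delivery overhead disappears along with it.
print(before - after)  # 200000.0
```

At a lower (better) PUE the multiplier shrinks, which is why the transcript treats IT-side efficiency and cooling efficiency as complementary levers.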

Gardner: Now, how does the Adaptive Infrastructure relate to this as well? It seems that would also be a factor in some of these equations?

Fisher: We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is an embodiment of the Adaptive Infrastructure at work, making our customers' IT assets more efficient.

Gardner: Let's go back to Ian. It seems that, as with many areas like manufacturing or application development, the history has been that you build it and then you throw it over the wall and someone has to put it into production or build it.

I expect that maybe data centers have had a similar effect when it comes to energy. We set up requirements. We build based on performance requirements. And then, oh, by the way, energy issues come as an afterthought.

Is that true and is that the outmoded method, and are we now, in a sense, building for energy conservation from the get-go? Has it become more of a city- or town-planner mentality, rather than simply an architect approach? What's the mindset shift that's taking place?

Jagger: That's a good question. I think you have to address it at all the levels you talked about. At the company level or the enterprise level, you are absolutely right. That has been the mentality or the approach: “We need a data center, and we base it where we are. Nothing else matters. Base it adjacent to us.”

Energy costs or supply have not been a consideration. Now they are. That's on the basis that you don't have any other complexities coming at you. But, if you are just looking at the strategy for your data centers in terms of business growth and your capacity, storage, and availability requirements that you have going forward, and you do the math, you can understand the size of the data center you need and how that works with respect to virtualization strategies and so on.

On top of that, we have the latest complexities, where you simply don't have the forward view on things. In just the last few days we’ve seen, for example, Wells Fargo buying Wachovia. I’m not sure how many data centers are within those two organizations, but you can bet they are in the scores. Suddenly, we have real estate and IT managers who are scratching their heads thinking, “How on earth do we bring all this together?” There are different approaches now being taken at the enterprise level.

At the architects’ level, it would be irresponsible for an architect today not to build energy efficiency into a greenfield building or any building, not just a data center. It’s pretty much been established that it just makes sense to build energy efficiency into a new building you are designing, because your operating costs will far outweigh the capital expenditure on those buildings rather quickly.

I’m not sure how a company like HP can influence at the planning level, but where we can influence is at the industry level and at the governmental level. We have experts within the company who sit on think tanks and governance boards. We advise bodies like the EPA. We sit with the leading organizations in energy building design, and discuss how governance with respect to green building design can be built and can be moved forward within the market.

That's how we can start to influence at the industry level in terms of having industry standards created, because if the industry doesn't create them itself, then governmental bodies will do it.

Gardner: It also seems that because it's so difficult to predict all the variables, that a need for modularity has emerged in the data center design, so that the end result can be amended and adjusted without all the other parts being interconnected and brittle. It’s similar to software, where you would want to have modularity in software, so you gain flexibility and it’s not too brittle. Can you explain more deeply how that relates to best energy management practice?

Jagger: The approach that we at HP are now taking is to move toward a new model, which we call the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

What has gone on in the past and today is that as an enterprise you may have a requirement for a Tier 4 level of structure, with respect to the data center, which is putting out at 100 watts per square foot, for example. Let’s say, for the sake of argument, that's a 100,000-square-foot data center, but you don't need all that data center infrastructure at a Tier 4 topology.

If you look at how you’re going to structure your virtualization program, you may only need 50 percent of it at Tier 4 for high density computing, and the rest of it can be at a Tier 2 level.

If that were the situation, you would be saving roughly 25 percent of your capital costs on building that data center. Doing simple math, if you are looking at 100,000 square feet, that's in the region of $40 to $50 million. So, there are some clear consequences of moving to a hybrid-tiered or modular model.
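As a rough check of the arithmetic behind that claim, here is a sketch with assumed per-square-foot build costs; the transcript gives only the 100,000-square-foot size, the 50/50 tier split, and the roughly 25 percent / $40-50 million ballpark, so the cost figures below are hypothetical:

```python
# Hybrid-tier capex sketch. The $/sq ft construction costs are assumptions
# chosen to land near the figures quoted in the transcript.

AREA_SQFT = 100_000
COST_TIER4 = 2_000   # assumed $/sq ft for Tier 4 construction
COST_TIER2 = 1_100   # assumed $/sq ft for Tier 2 construction

all_tier4 = AREA_SQFT * COST_TIER4                                  # $200M
hybrid = AREA_SQFT * 0.5 * COST_TIER4 + AREA_SQFT * 0.5 * COST_TIER2
savings = all_tier4 - hybrid

print(savings)               # 45000000.0 -- inside the $40-50M range
print(savings / all_tier4)   # 0.225 -- roughly a quarter of the capex
```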

Gardner: Are there some examples out there that you can give us? It would be great if you could name some companies, or at least give us use-case scenarios where organizations have adjusted, adopted some of these practices, implemented some of these standards, used common measurement practices, and have resisted having to spend $40 million on CAPEX, but also perhaps utilizing their existing resources even better.

Jagger: I think HP is the biggest example. We are the biggest example of designing modularity into our own data centers.

Beyond HP, you could look at supercomputing centers, high density computing -- the Internet service providers, the Googles of this world, and Microsoft themselves. The companies that require high levels of resilience, high density, and supercomputing typically are moving in this direction. We are pioneering this with our in-house capabilities. We are at the leading edge of this level of innovation.

Gardner: Let's take a look forward a little bit. What can we expect? Obviously, this makes more sense over time. Green issues are going to become more prevalent. Carbon is going to become more regulated. Costs are going to become prohibitive for waste, and the amount of data moving around increases all the time.

Perhaps you can explain the roadmap, the future, some of the concepts around optimizing data centers -- without pre-announcing things, but at least, give us a sense of what's coming.

Fisher: How about if I talk to that one first. One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

In the HP implementation, it's a very simple layout. You just have a single row of 50U racks -- I believe there’s something like 22 of them in this 40-foot container. There’s a single hot aisle and a single cold aisle, with overhead cooling that takes the hot exhaust air from the back, cools it, and delivers it to the front.

Using the HP POD you can install any standard equipment into the 19-inch racks and build out a very efficient data center with a leading PUE from a cooling perspective. So that's yet another option on the HP side.

From the product side of HP here, one of the biggest things we’re seeing is that power and cooling capacity is allocated by facilities in a very conservative manner. It's hard to understand exactly how much energy is required for each individual server or blade enclosure. So, there’s typically quite a bit of a conservative reserve that is allocated on top of what's probably actually being consumed.

In fact, if it's in the purview of the facilities team to allocate that power, they would treat it as any piece of electrical equipment and they would just look at what the max power rating or requirement is for the piece of equipment. What we’re seeing is that this can actually overstate the power requirement by up to three times what is actually needed.

So, there’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.
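The reclaimable-capacity idea above can be put in numbers. Andy notes that budgeting from nameplate (max) ratings can overstate actual draw by up to three times; the server counts and wattages below are assumptions for illustration only:

```python
# Stranded-capacity sketch: compare power budgeted from faceplate ratings
# with measured actual draw, then see how much headroom can be reclaimed.

nameplate_watts = 750   # faceplate rating per server (assumed)
measured_watts = 250    # actual measured draw per server (assumed ~3x lower)
servers = 400

budgeted = servers * nameplate_watts   # what facilities reserved
actual = servers * measured_watts      # what the racks really draw

reclaimable = budgeted - actual
extra_servers = reclaimable // measured_watts  # more servers, same envelope
print(reclaimable, extra_servers)  # 200000 800
```

Under these assumptions, two-thirds of the reserved power is stranded, which is why measurement-based allocation can defer building a new data center.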

Very soon, you’re going to be hearing some exciting news from HP about how we’re going to provide the opportunity for fine-tuned control of exactly how much power the servers in the IT racks are going to actually use.

Gardner: So, not only are we moving toward modularity at a number of levels, we’re bringing more intelligence to bear on the problem?

Fisher: Yes. A key to addressing this problem is to have accurate measurement and the ability to have predictability and control of the actual power consumption of the core IT equipment that the whole infrastructure is supporting.

Gardner: Alright. How about a roadmap, from a strategic point of view, of methodologies and best practices? Ian, what new innovations can we expect along those lines?

Jagger: In all this complexity, it's a relatively simple path to follow. It all starts with discovery -- where are we today? Given what we know about business direction, where do we need to get to? What do we need to be capable of from a business technology perspective that incorporates a facility as a holistic or a hybrid view of those departments combined? What is it that they need to produce to support the business going forward?

Then, you have a gap. The next question is how do we fill that gap, how do we get there? Various strategies can accrue from that, depending on what your needs are.

We would look at that with customers, and we would sit down with them and ask them some pretty basic questions. Do you need to be where you are today? If you are in Phoenix, does the data center need to be in Phoenix, or could it be in Washington state? It’s cooler there, and you don't have the energy costs that you would in Phoenix. So, let's have a look at that.

What is your position from a corporate social-responsibility perspective with respect to the environment? How visible are you in addressing that in comparison to your industry peers? What are the pressures on you to do that? So, let's have a look at alternate energy sources with respect to your data center.

For example, we have just announced our San Diego facility, which is now powered by solar panels. We are involved quite heavily right now in Iceland, providing geothermal technologies for data centers. So, a question there would be, can you be in Iceland? One issue there would be the question of latency. There are several questions that you would ask in terms of direction and how to get there.

Having answered those, you would move into planning and design phases and we would address those at that point too. We would build into the operation of any given new sites, or retrofitted site, the processes with respect to service management across the facility and IT structures. Service management is now not only about IT, but it’s about the facility as well, and how that is brought together in one motion.

So, it's pretty much a simple lifecycle approach within a complex field, and that will get you there. Along the process, we would be able to give the orders of magnitude of cost and typical ROI based on the strategies that you are looking to undertake.

Gardner: It certainly sounds like being efficient and getting this larger management capability over energy and facilities and resources is becoming a core competency and not an option. Is that fair to say?

Jagger: Yes. I think the spin on that is, going back to the example I just used of Wells Fargo and Wachovia, who do you turn to who can help you with that? You don't face that every day of your life, either within facilities or within IT, and you need help. You need to reach out for where the help is.

Traditionally, in our industry, as we have been discussing, it has tended to be siloed into real estate and into IT. What’s now required is the holistic view of infrastructure. I mean the physical infrastructure and the IT infrastructure. Customers need to reach out to firms that they feel comfortable reaching out to.

I think it was Andy who actually conducted this survey -- so correct me if I’m wrong, Andy. We recently undertook a survey of enterprise customers in each of our worldwide regions. The finding was that the more issues customers had to address with respect to the environment and energy, the more likely they were to come to HP as their vendor of choice.

Fisher: That's correct.

Gardner: Well, clearly if you don't have the holistic view you are going to have to learn how to get one, right?

Fisher: Right.

Gardner: Ian, let me direct this to you. I suppose there is some thought around environmental benefits and green IT, in which people believe that this is an additional cost or an expense. It seems to me, though, from what we have been discussing, that moving towards good environmental practices is actually moving towards good energy management practices too.

Jagger: That's absolutely right. It is not a choice of one or the other. Now, the business outcomes that come from energy management are also environmental outcomes, but there are apparent barriers to implementing environmental solutions, which, as you just said, are actually energy management solutions. Primarily, they revolve around the lack of an identifiable ROI or payback period for any green improvement, and then the measurement of that improvement itself.

More recently, we’ve been able to show customers the typical examples of how they can move through that environmental curve or that energy management curve going back to the industry standard benchmarks of PUE.

By showing them a rough order-of-magnitude cost to move them grade by grade through the energy-efficiency ranking system, we show them what the return would be in terms of carbon savings and dollar savings, and what the payback period would be based on those dollar savings.

So, we can have a very strategic, yet tactical, view on how to approach this. A customer can take a larger view in terms of how far they want to go with their environmental approach and balance that with their energy-management approach.

There is obviously a curve here. The larger the investment in improving energy management, the greater the return. At some point, that return slows down relative to the amount of investment you have put in. So, there is a curve there, and we can show you how to get to any point along it.
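The grade-by-grade payback calculation Ian describes can be sketched simply. All costs and savings below are hypothetical; they are chosen only to show the diminishing-returns curve, with cheap early fixes paying back fast and later retrofits taking longer:

```python
# Payback-period sketch for staged energy-efficiency improvements.
# (cost $, annual energy savings $) -- all figures are assumptions.
grades = [
    (100_000, 80_000),   # grade 1: cheap fixes (airflow, blanking panels)
    (250_000, 120_000),  # grade 2: cooling fine-tuning
    (600_000, 150_000),  # grade 3: major retrofit
]

for i, (cost, annual_savings) in enumerate(grades, start=1):
    payback_years = cost / annual_savings
    print(f"grade {i}: payback {payback_years:.1f} years")
```

A customer can then pick the point on the curve where the payback period still fits their investment horizon, balancing the environmental and energy-management goals discussed above.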

Gardner: Excellent! We have been discussing the large global problem around energy management and how it has become more critical for IT operations -- energy not as an afterthought, but really the forethought and an overriding stratagem for how to conduct business in IT.

I want to thank our guests today. We have been joined by Ian Jagger. He is the Worldwide Data Center Services marketing manager in HP's Technology Solutions Group. Appreciate your input, Ian.

Jagger: You’re very welcome, Dana. I am happy to have taken part.

Gardner: Andrew Fisher, the manager of technology strategy in the Industry Standard Services group at HP. Thank you, Andy.

Fisher: You are welcome.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

For more information on energy-efficiency in the data center, read the whitepaper.

For more information about HP Energy Efficiency Services.

For more information on HP Thermal Logic technology.

For more information on HP Adaptive Infrastructure.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast with HP’s Ian Jagger and Andrew Fisher on the role of energy efficiency in the data center. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.

Thursday, November 06, 2008

Implementing ITIL Requires Log Management and Analytics to Help IT Operations Gain Efficiency and Accountability

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on how to run your IT department well by implementing proven standards and methods, and particularly leveraging the Information Technology Infrastructure Library (ITIL) prescriptions and guidelines.

We’ll talk with an expert on ITIL and why it’s making sense for more IT departments and operations around the world. We’ll also look into ways that IT leaders can gain visibility into systems and operations to produce the audit and performance data trail that helps implement and refine such frameworks as ITIL.

We’ll examine the use of systems log management and analytics in the context of ITIL and of managing IT operations with an eye to process efficiency, operational accountability, and systems behaviors, in the sense of knowing a lot about the trains, in order to help keep them running on time and at the lowest possible cost.

To help us understand these trends and findings we are joined by Sudha Iyer. She is the director of product management at LogLogic. Welcome to the show, Sudha.

Sudha Iyer: Thank you.

Gardner: We’re also joined by Sean McClean. He is a principal at KatalystNow in Orlando, Florida. It's a firm that handles mentoring, learning, and training around ITIL and tools used to implement ITIL. Welcome to the show, Sean.

Sean McClean: Thank you very much.

Gardner: Let's start by looking at ITIL in general for those folks who might not be familiar with it. Sean, how are people actually using it and implementing it nowadays?

McClean: ITIL has a long and interesting history. It's a series of concepts that have been around since the 1980s, although a lot of people will dispute exactly when it got started and how. Essentially, it started with the Central Computer and Telecommunications Agency (CCTA) of the British government.

What they were looking to do was create a set of frameworks that could be followed for IT. Throughout ITIL's history, it has been driven by a couple of key concepts. If you look at almost any other business or industry, accounting for example, it’s been around for years. There are certain common practices and principles that everyone agrees upon.

IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT.

In recent times, there has been a lot more focus on that, particularly in two general areas. One, ITIL has had multiple revisions. Initially, it was a drive to handle support and delivery. Now, we are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business. That has kind of been the constant focus of ITIL.

Gardner: So, it's really about maturity of IT as a function that becomes more akin to other major business types of functions or management functions.

McClean: Absolutely. I think it's interesting, because anyone in the IT field needs to remember that we are in a really exciting time and place. Number one, because technology revises itself on what seems like a daily basis. Number two, because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks around what we all agree is the best way to handle things.

As I said, in areas like accounting or sales, those things are consistent. They stay that way for eons, but this one is a new and changing environment for us.

Gardner: Are there any particular stumbling blocks that organizations have as they decide to implement ITIL? When you are doing training and mentoring, what are the speed bumps in their adoption pattern?

McClean: A couple of pieces are always a little confusing when people look at ITIL. Organizations assume that it’s something you can simply purchase and plug into your organization. It doesn't quite work that way. As with any kind of framework, it’s there to provide guidance and an overall common thread or a common language. But, the practicality of taking that common thread or common language and then incorporating it or interpreting it in your business is sometimes hard to get your head around.

It's interesting that we have the same kind of confusion when we just talk. I could say the word “chair,” and the picture in your head of what a chair is and the picture in my head of what a chair is are slightly different.

It's the same when we talk about adopting a framework such as ITIL that's fairly broad. When you apply it within the business, things like that business's governance and that business's auditing and compliance rules have to be considered and interpreted within the ITIL framework. A lot of times, people who are trying to adopt ITIL struggle with that.

If we are in the healthcare industry, we understand that we are talking about incidents, or we understand that we are talking about problems. We understand that we are talking about certain things that are identified in the ITIL framework, but we have to align ourselves with rules within the Health Insurance Portability and Accountability Act (HIPAA). Or, if we are an accounting organization, we have to comply with a different set of rules. So it's that element that's interesting.

Gardner: Now, what's interesting to me about the relationship between ITIL and log and systems analytics is that ITIL is really coming from the top-down, and it’s organizational and methodological in nature, but you need information, you need hard data to understand what's going on and how things are working and operating and how to improve. That's where the log analytics comes in from the bottom-up.

Let's go to Sudha. Tell us how a company like LogLogic uses ITIL, and how these two come together -- the top-down and the bottom-up?

Iyer: Sure. That's actually where the rubber meets the road, so to speak. As we have already discussed, ITIL is general guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are sets of policies that follow from these guidelines. What organizations can do is set up their data-retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.

There is a tight fit there with what we provide as our log-management platform. LogLogic has been around for a number of years and is the leader in the log management industry. It allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on -- into their storage capacity and planning for the future, into how many more firewalls are required, or into the usage pattern of a particular server in the organization.

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in.

Gardner: Back to you, Sean. When companies are trying to move beyond baseline implementation and really start getting some economic benefits, which of course are quite important these days from their ITIL activities, what sort of tools have you seen companies using? To what degree do you need to dovetail your methodological and ITIL activities with the proper tools down in the actual systems?

McClean: When you’re starting to talk about applying the actual process to the tools, that's the space that's the most interesting to me. The key element is that you need some common thread that you can pull through all of them.

Today, in the industry, we have countless different tools that we use, and we need common threads that can pull across all of those different tools and say, “Well, these things are consistent and these things will apply as we move forward into these processes.” As Sudha pointed out, having an underlying log system is a great way to get that started.

The common thread in many cases across those pieces is maintaining the focus on the business. That's always where IT needs to be more conscious and to be constantly driving forward. Ultimately, where do these tools fit to follow business, and how did these tools provide the services that ultimately support the business to do the thing that we are trying to get done?

Does that address the question?

Gardner: I think so. Sudha, tell us about some instances where LogLogic has been used and ITIL has been the focus or the context of its use. Are there some findings general use case findings? What have been some of the outcomes when these two bottom-up, top-down approaches come together?

Iyer: That's a great question. The bottom line is the customers, and we have a very large customer base. It turns out, according to some surveys we have done in our customer base, that the biggest driver for a framework such as ITIL is compliance. The importance of ITIL for compliance has been recognized, and that is the biggest impact.

As Sean mentioned earlier, it's not a package that you buy and plug into your network and there you go, you are compliant. It's a continuous process.

What some of our customers have figured out is that adopting our log management solutions allows them to create better control and visibility into what actually is going on on their network and their systems. From many angles, whether it's a security professional or an auditor, they’re all looking at whether you know what's going on, whether you were able to mitigate anything untoward that's happening, and whether there is accountability. So, we get feedback in our surveys that control and visibility have been the top drivers for implementing such solutions.

Another item that Sean touched on, reducing IT cost and improving service quality, was the other driver. When they look at a log-management console and see how many admin access attempts were denied -- say, between 10 p.m. and midnight -- they quickly alert, get on the job, and try to mitigate the risk. This is where they have seen the biggest value and return on investment (ROI) on implementations of LogLogic.
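The kind of query Sudha describes can be illustrated with a small sketch. The log record format and field names below are invented for the example; a real LogLogic deployment would expose this kind of search through its own console or API:

```python
# Hypothetical example: find denied admin access events late at night
# (between 10 p.m. and midnight) in a batch of collected log records.
from datetime import datetime

logs = [
    {"time": "2008-10-14 22:17:03", "user": "admin", "result": "denied"},
    {"time": "2008-10-14 23:41:55", "user": "admin", "result": "denied"},
    {"time": "2008-10-14 09:02:10", "user": "admin", "result": "denied"},
    {"time": "2008-10-14 22:30:00", "user": "jdoe",  "result": "ok"},
]

def denied_admin_late_night(records):
    """Return denied admin events with a timestamp of 22:00 or later."""
    hits = []
    for rec in records:
        ts = datetime.strptime(rec["time"], "%Y-%m-%d %H:%M:%S")
        if rec["user"] == "admin" and rec["result"] == "denied" and ts.hour >= 22:
            hits.append(rec)
    return hits

print(len(denied_admin_late_night(logs)))  # 2
```

Surfacing that count on a console, and alerting on it, is what turns raw log collection into the accountability and mitigation loop described above.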

Gardner: Sean, the most recent version of ITIL, Version 3, focuses, as you were alluding to, on IT service management, with IT behaving like a service bureau, where it is responsible, almost on a market-forces basis, to its users and constituents in the enterprise. This increasingly involves service-level agreements (SLAs) and contracts, either explicit or implicit.

At the same time, it seems as if we’re engaging with the higher level of complexity in our data center's increased use of virtualization and the increased use of software-as-a-service (SaaS) type services.

What's the tension here between the need to provide services with high expectations and a contract agreement and, at the same time, this built-in complexity? Is there a role for tools like LogLogic to come into play there?

McClean: Absolutely. There is a great opportunity for tools such as LogLogic from that direction. ITIL Version 2 focused simply on support and delivery, those two key areas: we are going to support the IT services, and we are going to deliver along the lines of those services.

ITIL Version 2 started to talk a lot about alignment of IT with the business, because a lot of times IT drives ahead and does things without necessarily realizing what the business is doing. An IT department focuses on email, but it is not necessarily looking at the fact that email is supporting whatever the business is trying to accomplish, or how well that service does so.

As we moved into ITIL Version 3, they started trying to go beyond simply saying it's an element of alignment and move the concept of IT into an area where it's a part of the business, and therefore offering services within and outside of the business.

One of the key elements in the new ITIL V3 manuals is the discussion of service strategy, and it's a hot topic among the ITIL community -- this push toward a strategic look at IT and toward developing services as if you were your own business.

IT is looking and saying, “Well, we need to develop our IT services as a service that we would sell to the business, just as any other organization would.” With that in mind, it's all driving toward the question of how we can turn our assets into strategic assets. If we have a service and it's made up of an Exchange server, or we have a service and it's made up of three virtual machines, what can we do with those things to make them even more valuable to the business?

If I have an Exchange server, is there some way that I can parcel it out or farm it out to do something else that will also be valuable?

Now, with LogLogic's suite of tools we’re able to pull that log information about those assets. That's when you start being able to investigate how you can make the assets that exist more value driven for the organization's business.

Gardner: Back to you, Sudha. Have you had customer engagements where you have seen that this notion of being a contract service provider puts a great deal of responsibility on them, that they need greater insight and, as Sean was saying, need to find even more ways to exploit their resources, provide higher level services, and increase utilization, even as complexity increases?

Iyer: I was just going to add to what Sean was describing. You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized,” or, “It's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time.” That's where the open API, the open platform that LogLogic provides, comes into play.

Today, IT is heavily into the service-oriented architecture (SOA) methodology. So, we say, “Do you have to actually have a console login to understand what's going on in your enterprise?” No. You may be a storage administrator located far from the data center where a LogLogic solution is deployed, but you still want to analyze and predict how the storage capacity is going to be used over the next six months or a year.

The open API, the open LogLogic platform, is a great way for these other entities in an organization to leverage the LogLogic solution in place.
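As a rough illustration of that capacity-planning idea, the sketch below fits a simple linear trend to daily "used storage" figures, the kind of numbers one might pull from log-based reports, and projects when capacity runs out. The data, function names, and figures are all hypothetical; this is not LogLogic's actual API, just the arithmetic behind the prediction Sudha describes.

```python
# Hypothetical sketch: projecting storage growth from daily usage samples
# pulled out of log reports. All numbers and names are illustrative.
def project_capacity(samples, capacity_gb):
    """Fit a least-squares line to (day, used_gb) samples and estimate
    the day on which usage reaches capacity_gb."""
    n = len(samples)
    xs = [d for d, _ in samples]
    ys = [u for _, u in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope of the best-fit line: growth in GB per day.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    days_until_full = (capacity_gb - intercept) / slope
    return slope, days_until_full

# Example: usage sampled every 30 days (day index, GB used).
samples = [(0, 400), (30, 430), (60, 460), (90, 490)]
slope, full_day = project_capacity(samples, capacity_gb=700)
print(f"Growth: {slope:.1f} GB/day; capacity reached around day {full_day:.0f}")
# → Growth: 1.0 GB/day; capacity reached around day 300
```

A real deployment would feed this from whatever reporting interface the log platform exposes rather than hard-coded samples.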

Gardner: Another thing that has impressed me with ITIL over the years is that it allows for sharing of information on best practices, not only inside of a single enterprise but across multiple ones and even across industries and wide global geographies.

In order to better learn from the industry's hard lessons or mistakes, you need to be able to share across common denominators, whether it's APIs, measurements, or standards. I wonder if the community-based aspect of log behaviors and system behaviors, and sharing them, also plays into that larger ITIL method of general industry best practices. Any thoughts along those lines, Sean?

McClean: It's really interesting that you hit on that piece, because globalization, I think, is one of the biggest drivers getting ITIL moving. More and more businesses have started reaching outside of their national borders, whether we call those offshore resources, outsourced resources, or however you want to refer to them.

As we become more global, businesses are looking to leverage other areas. The more you do that, the larger you grow your business in trying to make it global, the more critical it is that you have a common ground.

Back to that illustration of the chair, when we communicate and we think we are talking about the same thing, we need some common point, and without it we can't really go forward at all. ITIL becomes more and more valuable the more and more we see this push towards globalization.

It’s the same with a common thread or shared log information for the same purposes. The more you can share that information and bring it across in a consistent manner, then the better you can start leveraging it. The more we are all talking about the same thing or the same chair, when we are referring to something, the better we can leverage it, share information, and start to generate new ideas around it.

Gardner: Sudha, anything to add to that in terms of community and the fact that many of these systems are outputting the same logs? It's making that information available in the proper context that becomes the value-add.

Iyer: That's right. Let's say you are Organization A and you have vendor relationships and customer relationships outside your enterprise. So, you’ve got federated services. You’ve got different kinds of applications that you share between these two different constituents -- vendors and customers.

You probably already have an SLA with these entities, and you want to make sure you are delivering on those operations. You want to make sure there is enough uptime. You want to grow toward a common future where your technologies are not far behind, and sharing this information about what you have today is very critical. That's where the actual value is.
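The uptime check Sudha mentions can be sketched in a few lines: compute availability over a reporting period from the outage intervals recorded in log data, and compare it against the SLA target. The outage figures and the 99.9 percent target here are made up for illustration.

```python
# Illustrative sketch (not a real LogLogic feature): checking service
# availability, derived from outage events in the logs, against an SLA.
def availability(outage_minutes, period_minutes):
    """Return availability as a percentage over the reporting period."""
    up = period_minutes - sum(outage_minutes)
    return 100.0 * up / period_minutes

period = 30 * 24 * 60          # one 30-day month, in minutes
outages = [25, 12]             # hypothetical downtime events from the logs
pct = availability(outages, period)
sla_target = 99.9
print(f"Availability {pct:.3f}% vs SLA {sla_target}% -> "
      f"{'met' if pct >= sla_target else 'breached'}")
# → Availability 99.914% vs SLA 99.9% -> met
```

The point of doing this continuously from log data, rather than after the fact, is exactly the early-warning posture discussed below.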

Gardner: Let's get into some examples. I know it's difficult to get companies to talk about sensitive systems in their IT practices. So perhaps we could keep it at the level of use-case scenarios.

Let's go to Sean first. Do you have any examples of companies that have taken ITIL to the level of implementation with tools like log analytics, and do you have some anecdotes or metrics of what some of the experiences have been?

McClean: I wish I had metrics. Metrics are the one thing that seems very hard to come up with in this area. I can think of a couple of instances where organizations were rolling out ITIL implementations. In implementations where I am engaged, specifically in mentoring, one of the things I try to get them to do is dial into the community and talk to other people who are implementing the same types of processes and practices.

There’s one particular organization out in the Dallas-Fort Worth, Texas area. When they started getting into the community, even though they were using different tools, the underlying principles that they were trying to get to were the same.

In that case they were able to start sharing information across two companies in a manner that was saying, “We do these same things with regard to handling incidents or problems and share information, regardless of the tool being set up.”

Now, in that case I don't have specific examples of them using LogLogic, but what invariably came out of those discussions was that what we need underneath is the ability to get proactive and start preventing incidents before they happen. Then we need metrics and some kind of reporting system, so we can start checking for issues before they occur and get the team on board to fix them before they happen. That's where they started getting into log tools and looking at using log data for that purpose.
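A minimal sketch of that proactive idea: watch the error rate in a rolling window of parsed log events and raise an alert once it crosses a threshold, before users ever report an incident. The event format, window size, and threshold are all assumptions for illustration, not any particular product's behavior.

```python
# Hedged sketch of proactive incident detection from a log-event stream.
from collections import deque

def rolling_error_alert(events, window=10, threshold=0.3):
    """Return the index at which the error fraction over the last
    `window` events first exceeds `threshold`, or None if it never does."""
    recent = deque(maxlen=window)
    for i, level in enumerate(events):
        recent.append(1 if level == "ERROR" else 0)
        if len(recent) == window and sum(recent) / window > threshold:
            return i
    return None

# Simulated severities parsed from logs: errors creep up gradually.
stream = ["INFO"] * 20 + ["ERROR", "INFO", "ERROR", "INFO"] * 3
idx = rolling_error_alert(stream, window=10, threshold=0.3)
print("Alert at event", idx)
# → Alert at event 26
```

In practice the thresholds would come from the baseline numbers the speakers describe collecting, rather than being guessed up front.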

Iyer: That corroborates one of the surveys we developed and conducted last quarter. Organizations reported that the biggest challenge for implementing ITIL was twofold.

The first was the process of implementation and the skill set it required. They wanted to make sure there was a baseline, and measuring the quality of improvement was the biggest impediment.

The second was measuring the result of the process improvement. You complete your implementation of the ITIL process itself, and what did it get you? Where were you before, and where did you end up after the implementation?

I guess when you were asking for metrics, you were looking for those concrete numbers, and that's been a challenge, because you need to know what you need to measure, but you don't know that because you are not skilled enough in the ITIL practices. Then, you learn from the community, from the best-of-breed case studies on the Web sites and so forth, and you go your merry way, and then the baseline numbers for the very first time get collected from the log tools.

Gardner: I imagine that it's much better to get early and rapid insights from the systems than to wait for the SLAs to be broken, for user surveys to come back, and say, “We really don't think the IT department is carrying its weight.” Or, even worse, to get outside customers or partners coming back with complaints about performance or other issues. It really is about early insights and getting intervention that seems to really dovetail well with what ITIL is all about.

McClean: I absolutely agree with that. Early in my career within ITIL, I had a debate with a practitioner on the other side of the pond. One thing we debated was SLAs. I had indicated that it's critical to get the business engaged in the SLA immediately.

His first answer was no, it doesn't have to happen that way. I was flabbergasted. You provide a service to an organization without an SLA first? I thought “This can't be. This doesn't make sense. You have to get the business involved.”

When we talked through it and got down to real cases, it turned out that he wasn't saying the SLA didn't need to be negotiated with the business. What he meant was that we need to get data and reports about the services we are delivering before we go to the customer, the customer in this case being internal.

His point was that we need to get data and information about the service we are delivering, so that when we have the discussion with the business about the service levels we provide, there is a baseline to work from. I think that's to Sudha's point as well.

Iyer: That's right. Actually, it goes back to one of the opening discussions we had here about aligning IT to the business goals. ITIL helps organizations get the business owners to think about what they need. They don't just assume the IT services are going to be there, and IT isn't an afterthought. It's part of the collective, working toward common success.

Gardner: Let's wrap up our discussion with some predictions or look into the future of ITIL. Sean, do you have any sense of where the next directions for ITIL will be, and how important is it for enterprises that might not be involved with it now to get involved, so that they can be in a better position to take advantage of the next chapters?

McClean: The last part is the most critical. People who are not yet engaged or involved in ITIL will find they are starting to drop out of a common language, and that language enables just about everything else you do with regard to IT in your business.

If you don't speak the language and the vendors that provide the services do, then you have a hard time understanding what it is those vendors are offering. If you don't speak the language and you are trying to share information, then you have a hard time moving forward in that sense.

It's absolutely critical for businesses and enterprises to start understanding the need for adoption. I don't want to paint it as if everybody needs to get on board with ITIL, but you need to get into it and be aware of it, so that you can help drive its future directions.

As you pointed out earlier, Dana, it's a common framework, but it's also commonly contributed to. It's very much an open framework, so if a new way of doing things comes up, is shared, and makes sense, that would probably be the next thing adopted. It's just like the English language, where new terms and phrases develop all the time. It's very important for people to get on board.

In terms of the next big front, you have this broad framework that says, “Here are common practices, best practices, and IT practices.” As the industry matures, I think we will see a lot of steps in the near future where people are looking at and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And how do I quantify a business with regard to adhering to that framework?”

There has been a little bit of that, and certainly we have ITIL certification processes for all of those, but I think we are going to see more drive to understand and formalize that in the coming years.

Gardner: Sudha, it certainly seems like a very auspicious pairing, the values that LogLogic provides and the type of organizations that would be embracing ITIL. Do you see ITIL as an important go-to market or a channel for you, and is there in fact a natural pairing between ITIL-minded organizations and some of the value that you provide?

Iyer: Actually, LogLogic believes that ITIL is one of those strong frameworks that IT organizations should be adopting. To that effect, we have been delivering ITIL-related reporting since we first launched the Compliance Suite. It has been an important component of our support for IT organizations looking to improve their productivity.

In today’s climate, it's very hard to predict how IT spending will be affected. The more we can do to give customers visibility into their existing infrastructure, networks, and so on, the better off it is for the customer and for ourselves as a company.

Gardner: We’ve been discussing how enterprises have been embracing ITIL and improving the way that they produce services for their users. We’ve been learning more about visibility and the role that log analytics and systems information plays in that process.

Helping us has been our panelist Sudha Iyer, director of product management at LogLogic. Thanks very much, Sudha.

Iyer: Thank you, it's a pleasure, to be sure.

Gardner: Sean McClean, principal at KatalystNow, which mentors and helps organizations train and prepare for ITIL and its benefits. It’s based in Orlando, Florida. Thanks very much, Sean.

McClean: Thank you. It’s been a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.