
Monday, December 12, 2011

Efficient Data Center Transformation Requires Consolidation and Standardization Across Critical IT Tasks

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure. You don’t need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.

One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding with perhaps even growing budgets.

With us now to explain how these solutions can drive successful data center transformation is our panel: Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP; and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.

Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that’s important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.

Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.

The other key benefit is that when you can manifest these quick wins in terms of a specific return on investment (ROI) business outcome, that translates very nicely and gets a lot of key attention, which I think has some downstream benefits that actually help out the team in multiple ways.

Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.

Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.

One of the benefits is that you can actually break down these projects in terms of some specific types of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.




Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.

Campbell: That’s exactly right. A virtuous cycle is well put. That really allows the team to get the green light to go to the next step in the blueprint they're trying to execute on. It gets a green light also in terms of additional dollars and, in some cases, additional headcount to add to the team as well.

What this does, and I like this term, the virtuous cycle, is not only allow you to attract key talent, but really allow you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.

TCO savings

Campbell: That’s right. A good example is one of our customers, McKesson, which in fact was taking one of these consolidated approaches with all of their development tools and saw significant total cost of ownership (TCO) savings. They saw considerable savings, both in dollars, over $12.9 million, and as a percentage of TCO, upwards of 50 percent.

When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.
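
The arithmetic behind a claim like that is simple to sanity-check. Here is a minimal sketch; only the $12.9 million savings and the roughly 50 percent figure come from the discussion, and the baseline TCO is assumed purely for illustration:

```python
# Hypothetical TCO check. Only the $12.9M savings and the ~50 percent figure
# come from the discussion; the baseline below is assumed for illustration.
baseline_tco = 25_000_000   # annual TCO before consolidation (assumed)
savings = 12_900_000        # dollar savings cited in the discussion

new_tco = baseline_tco - savings
savings_pct = savings / baseline_tco * 100

print(f"New TCO: ${new_tco:,}")            # $12,100,000
print(f"TCO savings: {savings_pct:.0f}%")  # ~52%, "upwards of 50 percent"
```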

Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.

Randy, why is it so important to know your baseline tracks and then measure them each and every step along the way?

Randy Lawton: Thank you, Dana. Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.

So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.

It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.
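
A baseline like that is ultimately structured inventory data. As a minimal sketch, assuming a hypothetical asset model (the categories follow the ones just listed; this is illustrative, not HP's actual discovery tooling):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Asset:
    name: str
    category: str        # "server", "storage", "network", "application"
    site: str            # which data center the asset lives in
    annual_cost: float   # fully loaded annual cost, in dollars

# Hypothetical discovery output; a real program would feed thousands of rows.
inventory = [
    Asset("web-01", "server", "Austin", 4_200.0),
    Asset("san-03", "storage", "Austin", 11_500.0),
    Asset("erp", "application", "Houston", 250_000.0),
]

def baseline(assets):
    """Summarize asset counts and cost per category -- the starting scorecard."""
    counts = Counter(a.category for a in assets)
    total_cost = sum(a.annual_cost for a in assets)
    return counts, total_cost

counts, total = baseline(inventory)
print(counts, f"${total:,.0f}")
```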

From that, we bridge into the second phase, which is architect and validate, where we begin to develop the strategies for a future-state design that includes the standardization and consolidation approaches, and from that begin to assemble the business case. In detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that must be tracked to show the progress of the application and infrastructure teams contributing to the program, in order to guarantee success and provide visibility to all the stakeholders, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.
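
Those phases map naturally onto a per-phase scorecard. A minimal sketch of that idea, with invented milestone counts (HP's actual scorecarding tools are not described in detail here, so this shows only the shape):

```python
# Phase names follow the methodology described above; milestone counts are invented.
phases = {
    "strategy and analysis":  {"done": 14, "total": 14},
    "architect and validate": {"done": 9,  "total": 12},
    "detailed design":        {"done": 3,  "total": 20},
    "implementation":         {"done": 0,  "total": 45},
}

for phase, m in phases.items():
    pct = 100 * m["done"] / m["total"]
    bar = "#" * int(pct / 10)   # crude progress bar for the war-room wall
    print(f"{phase:<24} {pct:5.1f}% {bar}")
```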

Gardner: As we heard from Duncan about why it’s important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.

Complex engagements

Lawton: That’s very true. In these complex engagements, it’s normally some time before there are quick-win achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years through 2008, we were building six new data centers so that we could consolidate 185 data centers worldwide. So there was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was going on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep very visible to management the movements and momentum of the program.



Gardner: Randy, I know that many organizations are diligent about the scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?

Lawton: It absolutely is, Dana. Often in these kinds of programs there are business activities and projects going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to be fit together at some level.

What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it’s typically from the infrastructure perspective where we pull together all of the application and infrastructure activities and sometimes the business metrics as well.

We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or that the business may be running separate metrics and then application teams and infrastructure are running the IT level metrics that all get rolled together into some consolidated reporting on some level.
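
A minimal sketch of that consolidation step, rolling business, application, and infrastructure metrics into one executive view (the metric names and values are hypothetical):

```python
# Hypothetical per-team metrics; in practice each team reports its own numbers,
# and the infrastructure side pulls them together, as described above.
business_metrics = {"business processes cut over": 7, "sites retired": 3}
application_metrics = {"apps remediated": 57, "apps tested": 40}
infrastructure_metrics = {"servers deployed": 310, "racks decommissioned": 25}

executive_scorecard = {
    "business": business_metrics,
    "applications": application_metrics,
    "infrastructure": infrastructure_metrics,
}

for area, metrics in executive_scorecard.items():
    for name, value in metrics.items():
        print(f"{area:>15} | {name:<28} {value}")
```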

Gardner: And that, of course, ensures that IT isn’t the odd man out, when it comes to being on time and in alignment with these other priorities. That sounds like a very nice addition to the way things may have been done five or 10 years ago.

Lawton: Absolutely.

Gardner: Any examples, Randy, either with organizations you could name, or use cases where you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?

Cloning applications

Lawton: A very notable example is one of our telecom customers we worked with during the last year and finished a program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery stakeholders in the program, nine different companies were represented. There were outsourced vendors on the application support side of the acquired company, outsourcers on the application side for the acquiring company, and outsourcers in the data centers that operated the infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies, all of which needed to be executed in less than 96 hours in order to meet the downtime window required by the acquiring company’s executive management.

It was the detailed scorecarding, and operating war rooms to keep those scorecards up to date in real time, that allowed us to accomplish that. There’s just no possible way we would have been able to do that otherwise.
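
To make the scale of that concrete, the core scheduling question in such a cutover is whether the longest dependency chain of tasks fits inside the window. A minimal sketch, with invented tasks and owners; only the 96-hour window and the multi-company structure come from the program described above:

```python
from functools import lru_cache

# Each task: owner company, duration in hours, prerequisite tasks.
# The tasks themselves are invented for illustration.
tasks = {
    "freeze source apps": ("acquired-side outsourcer", 4, []),
    "final data sync":    ("storage team", 24, ["freeze source apps"]),
    "cut over network":   ("network outsourcer", 6, ["freeze source apps"]),
    "validate apps":      ("acquiring-side outsourcer", 30,
                           ["final data sync", "cut over network"]),
    "go-live sign-off":   ("acquiring company", 2, ["validate apps"]),
}

@lru_cache(maxsize=None)
def finish_time(name):
    """Earliest finish of a task: its duration plus the latest prerequisite."""
    _owner, duration, prereqs = tasks[name]
    return duration + max((finish_time(p) for p in prereqs), default=0)

critical_path = max(finish_time(t) for t in tasks)
print(f"Critical path: {critical_path} hours "
      f"({'fits' if critical_path <= 96 else 'misses'} the 96-hour window)")
```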

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

I think HP was very helpful in working with the customer and bringing that perspective into the program very early on. There had been a failed attempt to operate this program prior to that, and with our assistance, and with these tools and capabilities, we were able to successfully achieve the objectives of the program.

Gardner: One thing that jumped out at me there was your use of the words real time. How important is it to capture this data and adjust it and update it in real-time, where there’s not a lot of latency? How has that become so important?

Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.

In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics. So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.

We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.

Gardner: So as is often the case, good information in, good results back.

Lawton: Absolutely.

Organizing infrastructure

Gardner: Let’s move now to our third panelist today. We're going to hear about why organizing facilities and infrastructure planning in conjunction with one another is so important.

Now to Larry Hinman. Larry, let’s go historical for a second. Has there usually been a completely separate direction for facilities planning and IT infrastructure? Why was that the case, and why is it so important to end that practice?

Larry Hinman: Hi, Dana. If you look over the last several years, everybody has data centers and everybody has IT. What we've seen over the last 10 or 15 years are things like the Internet, the criticality of IT, high density, and all this stuff that people are talking about these days. If you look at the way companies organized themselves several years ago, IT was a separate organization and facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to get IT groups and facilities organizations to talk and work with each other, there is still a gap in truly how to glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically model IT and data centers by looking at power requirements, even when they're attempting to glue the two together.

One of the things that we spotted a few years ago was that when companies do this, the risk of over provisioning or under provisioning is very high. We tried to figure out a way to back this up a few notches.

How can we remedy this problem and how can we bring some structure to this and bring some, what I would call, sanity to the whole equation, to be able to have something predictable over time? What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.

So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.

One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.
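
Those four profiles, and the "IT plus facilities equals the data center" framing, can be sketched as a simple top-down data model. This is only an illustration of the relationships; all field names and figures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BusinessAndRisk:
    availability_target: float   # e.g. 0.9999 for "four nines"
    growth_rate: float           # expected annual business growth

@dataclass
class ITArchitecture:
    application_count: int       # the application suite

@dataclass
class ITInfrastructure:
    server_count: int
    it_load_kw: float            # power drawn by the IT equipment

@dataclass
class Facilities:
    capacity_kw: float           # facility power capacity

@dataclass
class DataCenter:
    """IT plus facilities equals the data center."""
    business: BusinessAndRisk
    architecture: ITArchitecture
    infrastructure: ITInfrastructure
    facilities: Facilities

    def provisioning_gap(self) -> float:
        """Positive means headroom; negative means under-provisioned."""
        return self.facilities.capacity_kw - self.infrastructure.it_load_kw

dc = DataCenter(
    BusinessAndRisk(availability_target=0.9999, growth_rate=0.10),
    ITArchitecture(application_count=350),
    ITInfrastructure(server_count=1200, it_load_kw=750.0),
    Facilities(capacity_kw=1000.0),
)
print(f"Headroom: {dc.provisioning_gap():.0f} kW")
```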

Getting synchronized

Look at that, look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what the risk profiles and tolerances for risk are from an IT perspective and how to run the business, gluing that together with an IT infrastructure strategy, and then gluing all of that into a data center facility strategy.

What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, and all of the groovy stuff that people talk about these days, and have something that I could really manage. If you're called into the boss’s office, as I and others have been many times over my career, and asked what the data center is going to look like over the next five years, at least I would have some hope of trying to answer that question.

That is kind of the secret sauce here: the way we developed our framework was by breaking this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it requires culture change. It’s a threat to the very way the organization is structured from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together, collaboratively and collectively, to be able to start to glue this together.

Gardner: You mentioned earlier the issues around energy and the ongoing importance around the cost structure for that. I suppose it's not just fitting these together, but making them fit for purpose. That is to say, IT and facilities on an ongoing basis.

It’s not really something that you do and sit still, as would have been the case several years ago, or in the past generation of computing. This is something that's dynamic. So how do you allow a fit-for-purpose goal with data-center facilities to be something that you can maintain over time, even as your requirements change?

Hinman: You just hit a very important point. One of the big lessons learned for us over the years has been about this ability to provide this kind of modeling and predictability over time for clients and customers. We had to get out of this mode of doing it once and putting it on a shelf -- deploying a future-state data center framework and assuming that keeps the client pointed in the right direction.

The data, as you said, gets archived, and they pick it up every few years and do it again and again, finding out that a lot of times there's an "aha" moment during those periods, in the gaps between one pass and the next.

One thing that we have learned is to not only have this deliberate framework, broken into these four simplistic areas where we can manage all of this, but to redevelop and re-hone our tools and our focus a little bit, so that we could use this as a dynamic, ongoing process to get the client pointed in the right direction. Build a data center framework that truly is right-sized, integrated, aligned, and all that stuff, but then have something very dynamic that the client can manage over time.

That's what we've done. We've taken all of our modeling tools and integrated them with common databases, where now we can start to glue together even the operational piece -- data center infrastructure management (DCIM), architecture and infrastructure management, facilities management, and so on -- so the client can have this real-time, long-term, what we call 10-year view of the overall operation.

So now you do this: you get it pointed in the right direction, collect the data, complete the modeling, put it in the toolset, and you have something very dynamic that you can manage over time. That's what we've done, and that's where we've been heading with all of our tools and processes over the last two to three years.
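
A minimal sketch of that kind of 10-year view: project the IT load forward under an assumed growth rate and flag the year the facility runs out of headroom. All figures are invented; only the 10-year horizon comes from the discussion:

```python
# All figures are hypothetical; the 10-year horizon is from the discussion.
it_load_kw = 600.0     # current IT load
capacity_kw = 1200.0   # facility power capacity
growth = 0.12          # assumed annual growth in IT load

for year in range(1, 11):
    it_load_kw *= 1 + growth
    headroom = capacity_kw - it_load_kw
    flag = "  <-- capacity exceeded" if headroom < 0 else ""
    print(f"year {year:2}: load {it_load_kw:7.1f} kW, "
          f"headroom {headroom:7.1f} kW{flag}")
```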

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various what we call facility sourcing options, which PODs are certainly one of those these days, we've also been very careful to make sure that our framework is completely unbiased when it comes to a specific sourcing option.

What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers. It was all about space, then it became all about power, then about cooling, but we were still in this brick-and-mortar age, even as modularity and scalability were coming to drive everything.

With PODs coming on the scene, along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make our framework almost generic, so that we can complete all the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.

We find these days that the POD is actually a very nice fit for a lot of our clients, because it provides high-density server farms, it provides something they can implement very quickly, and it gets the power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take a strong hold with a lot of customers.
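
PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment, with 1.0 as the theoretical ideal. A quick worked example with illustrative figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures: a traditional room versus a containerized POD-style build.
print(pue(2000, 1000))  # 2.0  -- half the power goes to cooling and overhead
print(pue(1150, 1000))  # 1.15 -- most of the power reaches the IT load
```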

Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, and these methods even more productive, when we start to factor in the movement toward private cloud, the need to support more of a mobile tier of devices, and the fact that we're looking, of course, for even more savings on long-term energy and operating costs.

Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: Yes, Dana. In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also across separate companies and suppliers working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data driven and supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating cloud environments simplifies things from a customer perspective, but it does add some additional complexity to the infrastructure and operations of the organization as well. All of those complexities mean that even more attention needs to be paid to the details of the program and to where responsibilities lie among the stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure. So perhaps more large data centers to support more types of applications to even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.

It could be that, based on a specific client’s business requirements and IT strategy, it will require possibly a couple of large-scale core data centers and multiple remote sites, or it could just be a bunch of smaller facilities.

It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are, whether it’s high availability, synchronous, all the groovy stuff, and then coming up with a framework that matches all those requirements that it’s integrating.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to be able to have something that you can at least manage to control cost and control this whole framework and manage to a future-state business requirement, before you can even start to really deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.

Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on how quick and proven ways to attain productivity can significantly improve IT operations and efficiency.

This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.

I'd like to thank our guests, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB; Randy Lawton, Practice Principal in the Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Thursday, July 10, 2008

HP's Adaptive Infrastructure Head Duncan Campbell Discusses Data Center Efficiency and Energy Conservation Best Practices

Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference in Las Vegas, Nevada the week of June 16, 2008.

Listen to the podcast here. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to a special BriefingsDirect podcast recorded live at the Hewlett-Packard Software Universe Conference in Las Vegas. We are here in the week of June 16, 2008. This sponsored HP Software Universe live podcast is distributed by BriefingsDirect Network.

We are joined now by Duncan Campbell, the vice president in charge of the Adaptive Infrastructure program at HP. Welcome to the show, Duncan.

Duncan Campbell: Great. My pleasure to be here, Dana.

Gardner: You know, a lot has been said about data centers and how they are shifting, how people are trying to bring in more capability at higher scale to deal with more complexity, and, of course, cut costs and even labor resources. That is to say, automate whenever possible. Tell us a little bit about how you characterize or describe the data center situation and the challenges the companies are facing right now.

Campbell: It will be my pleasure. In fact, Dana, what we're seeing is almost a perfect storm happening here in the data center right now, and the next generation data center is, in fact, a hot topic. It's a hot topic not just because we're here in Las Vegas and it's 102 degrees, but, in fact, the fundamental design center of the data center is being challenged right now and it's really under siege by a number of different factors.

One of the things you talked about is cost, and another is energy efficiency. Another key element that we're seeing at this point really has to do with the fundamental challenge that customers, who are striving more and more toward automation, have less time. We have an excellent opportunity here to have conversations with customers and partners at Software Universe about the adaptive infrastructure, which is HP's program for the next-generation data center.

Gardner: What is different from this next generation data center, the one that we are working with, working toward even, and what was described as a very modern up-to-date data center five years ago?

Campbell: Good question. Fundamentally, what HP has is a strategy that allows IT managers to be much more engaged with lines of business, because we're allowing IT to participate in a dialogue about how IT can be thought of not just as a cost agenda, but as fundamental to driving the business.

That being the case, Dana, we have six fundamental technology enablers that we work with customers to select from in designing their next-generation data center, and these six technologies are really critical. The first has to do with the type of systems they choose, which more and more are becoming denser. These systems are drawing more power, so we need to work with customers on how best to design those solutions.

Second are key enablers around energy-efficiency technologies. The third is virtualization. The fourth is management, and then we also have security and, finally, automation. These are some of the key technologies that are part of the adaptive infrastructure.

Gardner: Now, it seems that the architects and the decision-makers, the specifiers in the operations units of large organizations, have their hands full these days, and, as you've mentioned, have energy issues to contend with. They are also dealing with consolidation in many cases and legacy modernization, bringing more of a services orientation to their applications for purposes of reuse and governance and extending across multiple business processes, assets and resources. So, in an efficiency drive it seems that there is a notion of having to fly the airplane and change the wings at the same time.

I also hear from a lot of enterprises that manual processes are not scaling. When it comes to test, bug, and change management, and issues around performance management, making a printout, sticking it up on the wall, and finding the time-stamps for incidents to uncover the root causes that way is not scaling. How does HP come in with products and services to help companies manage these multiple major trends that are impacting them?

Campbell: Well, I think you did nail some of the key needs, so we would agree entirely. It's not just about a rip-and-replace strategy. It's about dealing with those core issues that you spoke of: cost, energy, some other elements around quality of service to be more aligned with the line of business, and then speed.

So, to your point about how to get started, most customers understand the basic value proposition around the adaptive infrastructure, which is about a 24/7, lights-out computing environment. It's based on modular building blocks, using off-the-shelf software and comprehensive services.

One thing that we do that is unique is provide specific assessment services for our customers, and this is not just about product. It's really about understanding their needs, where they set the baseline of their specific needs by business. And it's not just about technology. It's about their governance, management, and organizational needs. Then, we design specific recommendations on how to proceed, given their specific environment.

Gardner: Because we're at a user conference and a technology forum, I am assuming that there is some news to be had here, or perhaps you can share some of that with us. Something about blade servers, I believe it was.

Campbell: Exactly, and so I hope you are holding onto your hat there. One of the things about the adaptive infrastructure, people are always looking for proof points. They say, "Yeah, great strategy. I understand the value proposition, Duncan, but it's all about the proof points in making it real."

Last year, we had our blade systems, which are really an adaptive infrastructure in a box. They include virtualization, blade storage and servers, and management capabilities.

One of the areas that people fundamentally love, for rock-solid business and high availability, is our non-stop servers, and they ask, "Are you just abandoning that?" No. The news that we're offering here is that we're now going to have brand-new bladed non-stop systems that will be a fundamental proof point of our adaptive infrastructure.

We're bringing some of those high-availability features people love into more of this adaptive infrastructure type of environment. And it's one thing our software customers love, because as you start to kick off service-oriented architecture (SOA) projects, specific business-continuity projects, or strategic applications, you have to have an adaptive infrastructure that provides that type of value to you.

Gardner: Let's return to the energy issue. I'm also seeing some news coming out of these events this week around dynamic smart cooling. That's a mouthful. What does it really mean?

Campbell: Good question. Dynamic smart cooling. The one thing that you should understand when we talk about energy efficiency is that it's about not just the new technology, which is always improving, but also some of the facilities capabilities we have. We have some fantastic new services from EYP, which HP acquired, and which designs most of the new data centers on Planet Earth at this point. Among the new capabilities we have around the data center, the key one is dynamic smart cooling.

Barclays Bank, for example, recently adopted it across their whole company to save greater than 13 percent of their data-center cost. It manages the airflow in your facilities. So, in combination with services, plus this new technology that came from HP Labs, plus the new servers and software elements, this is the type of winning combination customers demand and expect from HP.
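
The transcript doesn't detail how dynamic smart cooling actually makes its decisions, but the general idea of sensor-driven airflow control can be sketched as a simple feedback loop. This is an entirely hypothetical illustration, not HP's implementation:

```python
def adjust_fan_speed(current_speed: float, inlet_temp_c: float,
                     setpoint_c: float = 24.0, gain: float = 0.05) -> float:
    """Proportional control: nudge fan speed toward holding the rack inlet
    temperature at the setpoint. Hypothetical; not HP's actual algorithm."""
    error = inlet_temp_c - setpoint_c
    new_speed = current_speed * (1 + gain * error)
    return min(max(new_speed, 0.2), 1.0)   # clamp between 20% and 100%

speed = 0.6
for temp in [26.0, 25.1, 24.3, 23.8]:      # simulated sensor readings
    speed = adjust_fan_speed(speed, temp)
    print(f"inlet {temp:.1f} C -> fan at {speed:.2f}")
```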

Gardner: We are also, I believe, going to see some news later in the week around change management and problem management and resolution. I don't have the details, and we can't pre-announce that, but it does bring to mind the question about hardware/software services, these major trends, methodologies and maturity models, the Information Technology Infrastructure Library (ITIL).

For those folks managing multiple dimensions of IT operational integrity and efficiency, how do you get a handle on a holistic, top-down approach that includes elements of hardware, energy, software, change management, and IT systems management? Is there a whole greater than the sum of the parts here?

Campbell: We are finding that customers are demanding that holistic approach, which is why dealing with the company with the size and the depth and breadth of HP makes a lot of sense. Some of the software attributes that you've mentioned here at HP really do come to bear when you think about the adaptive infrastructure. Some of the fundamental building blocks from Opsware are great examples of that.

When you think about data-center automation, that is a great example, and Forrester recently called HP's Opsware product suite the number-one offering out there. That's in combination with, as I mentioned before, some of the maturity-model assessments that we do with our largest customers. It is a fantastic dialogue and assessment, built on a rich set of data and best practices, where we understand where they are trying to go with their environment and then work with them on specific recommendations. It's a fantastic process that we engage in with customers.

Gardner: I suppose an important aspect of going holistic is that people don't want iterative payback. They are looking for substantive efficiency and performance improvements. To that element, do you have some examples of companies, or metrics? What is the baseline? Are people looking for 15-20 percent before they say, "Yeah, I am ready to go holistic"? Is it more up toward 30-40 percent? What are the bottom-line elements of what these customers are expecting from these kinds of major activities?

Campbell: Good question. We have a very robust solution called the Data Center Transformation Solution from HP, which is a composite of concrete, specific solutions with return-on-investment numbers in the range you mentioned, for energy efficiency, IT consolidation, business continuity, and data center automation.

As you are saying, though, lots of customers don't have the time or the runway to expect a long-term project with a speculative type of payback. What we do is break it into bite-size chunks, into fundamental progress, with return-on-investment in these concrete solution areas.
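
The "bite-size chunks" point is, at bottom, a payback-period argument: each chunk should pay for itself quickly enough to fund the next. A minimal sketch with invented figures:

```python
# Hypothetical figures for one "bite-size" project within a larger program.
project_cost = 400_000      # cost of one consolidation project
annual_savings = 650_000    # energy and operations savings it produces

payback_months = project_cost / annual_savings * 12
print(f"Payback in about {payback_months:.1f} months")  # short enough to fund the next chunk
```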

Gardner: Let's look to the future a little bit. We are hearing a lot these days about cloud computing. Many people think of that as a greater utility function that someone else does, but for some of the enterprises that I speak to, they actually like the idea of private clouds -- taking the best of the technology and efficiencies at a cloud computing approach.

I believe it is taking the methodology and approach, as well as the technology set, and using that to support their services and their data, and perhaps to start doing more platform-as-a-service and integration-as-a-service activities, but for their internal constituencies, and then, over time, extending to their partners and business ecologies. What do you see coming from an adaptive solutions perspective for cloud computing?

Campbell: From my standpoint, I think you've nailed it, because we do not see our major enterprise customers turning their whole IT environment over, lock, stock, and barrel, to a perhaps less secure type of environment with less predictable results.

What we see, though, is that customers like attributes of the cloud. So, the private cloud concept that you speak of here is much more near and dear to the heart, as we've heard from some of our advisory customers and our lighthouse customers. From that standpoint, they are looking very much to an adaptive infrastructure to provide those attributes of a cloud, but still under the control and security requirements that they have for their specific enterprise and domains.

Gardner: So, when we think of the next next-generation data center architectures and the requirements for them, do you think cloud computing is going to play a significant role in that?

Campbell: That's the hot topic, and it's interesting, because those specific benefits that we provide with the adaptive infrastructure, around speed, cost, quality of service, and energy, turn out to remain true. So, we see this as more of an opportunity for us to provide new technology innovation for our customers through some of the attributes of the cloud. There are a lot of people working on this within HP, but I think it's about providing customer choice, while providing new specific benefits in the next-generation data center, and that is exactly our plan.

Gardner: Very good, and just to close out our discussion, you announced today the non-stop blade servers. When will those be available in the market?

Campbell: At this point, that news is being transmitted as we speak, and so as our press release comes across the wire we will all know that, and read that with great relish and anticipation.

Gardner: Okay, we can fill that in a little later in a future podcast. But thank you. We've been speaking with Duncan Campbell, the vice president in charge of the Adaptive Infrastructure program here at Hewlett-Packard. You're also delivering, I believe, some keynotes and other discussions at the live event throughout the week.

This comes to you as a sponsored HP Software Universe live podcast recorded at the Venetian Resort in Las Vegas. Look for other podcasts from this HP event on the hp.com website, under "Software Universe Live Podcasts," as well as through the BriefingsDirect Network. I would like to thank the producers of today’s show, Fred Bals and Kate Whalen, and also our sponsor, Hewlett-Packard.

I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise software infrastructure and strategies. Bye for now.

Listen to the podcast. Sponsor: Hewlett-Packard.

Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference in Las Vegas, Nevada. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.