
Monday, December 12, 2011

Efficient Data Center Transformation Requires Consolidation and Standardization Across Critical IT Tasks

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure. You don’t need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.

One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding with perhaps even growing budgets.

With us now to explain how these solutions can drive successful data center transformation is our panel, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.

Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that’s important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.

Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.

The other key benefit is that when you can manifest these quick wins in terms of a specific return on investment (ROI) business outcome, that translates very nicely and gets a lot of key attention, which has some downstream benefits that help out the team in multiple ways.

Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.

Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.

One of the benefits in that is that you can actually break down these projects just in terms of some specific type of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.

Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.

Campbell: That’s exactly right. A virtuous cycle is well put. That really allows the team to get the additional green light to go to the next step in terms of the blueprint they're trying to execute on. It gets a green light also in terms of additional dollars and, in some cases, additional headcount to add to the team as well.

What this does, and I like this term, the virtuous cycle, is not only allow you to attract key talent, but really allow you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.

TCO savings

Campbell: That’s right. A good example is where we were able to see significant total cost of ownership (TCO) savings with one of our customers, McKesson, which in fact was taking one of these consolidated approaches with all their development tools. They saw considerable savings, both in dollars, over $12.9 million, and as a percentage of TCO, upwards of 50 percent.

When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.
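
A quick back-of-envelope note on those figures: if a $12.9 million saving represents roughly 50 percent of TCO, the implied baseline is about $25.8 million. The sketch below is illustrative arithmetic only, not HP's published analysis:

```python
# Illustrative math only -- assumes the $12.9M saving is ~50% of baseline TCO.
savings_musd = 12.9
savings_pct = 0.50

baseline_tco = savings_musd / savings_pct
print(f"implied baseline TCO: ~${baseline_tco:.1f}M")  # ~$25.8M
```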

Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.

Randy, why is it so important to know your baseline, and then track and measure it each and every step along the way?

Randy Lawton: Thank you, Dana. Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.

So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.

It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.

From that, we bridge into the second phase, which is architect and validate, where we begin to solution out and develop the strategies for a future-state design that includes the standardization and consolidation approaches, and from that we begin to assemble the business case. In detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that are tracked to show the progress of the application teams and infrastructure teams contributing to the program, in order to guarantee success and provide visibility to all the stakeholders, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.
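
As a rough illustration of the phase-by-phase scorecarding Lawton describes, the sketch below models tasks and per-phase progress. It is purely hypothetical, the phase names come from the methodology above, and it does not represent HP's actual tooling:

```python
from dataclasses import dataclass, field

# Phase names taken from the methodology described above.
PHASES = ["strategy and analysis", "architect and validate",
          "detailed design", "implementation"]

@dataclass
class Task:
    name: str
    owner: str      # team, vendor, or service provider responsible
    phase: str
    done: bool = False

@dataclass
class Scorecard:
    tasks: list[Task] = field(default_factory=list)

    def progress(self, phase: str) -> float:
        """Percent of tasks complete in a given phase."""
        in_phase = [t for t in self.tasks if t.phase == phase]
        return 100.0 * sum(t.done for t in in_phase) / len(in_phase) if in_phase else 0.0

# Example readout for stakeholders:
card = Scorecard([
    Task("discover server and storage assets", "infrastructure", "strategy and analysis", True),
    Task("future-state design spec", "architecture", "architect and validate", False),
])
for phase in PHASES:
    print(f"{phase}: {card.progress(phase):.0f}% complete")
```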

Gardner: As we heard from Duncan about why it’s important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.

Complex engagements

Lawton: That’s very true. In these complex engagements, it’s normally some time before there are quick-win achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years, through 2008, we were building six new data centers so that we could consolidate 185 worldwide data centers into them. So it was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was going on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep very visible to management the movements and momentum of the program.



Gardner: Randy, I know that many organizations are diligent about the scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?

Lawton: It absolutely is, Dana. Often in these kinds of programs there are business activities and projects going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to be fit together at some level.

What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it’s typically from the infrastructure perspective where we pull together all of the application and infrastructure activities and sometimes the business metrics as well.

We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or that the business may be running separate metrics and then application teams and infrastructure are running the IT level metrics that all get rolled together into some consolidated reporting on some level.

Gardner: And that, of course, ensures that IT isn’t the odd man out, when it comes to being on time and in alignment with these other priorities. That sounds like a very nice addition to the way things may have been done five or 10 years ago.

Lawton: Absolutely.

Gardner: Any examples, Randy, either with organizations you could name, or use cases where you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?

Cloning applications

Lawton: A very notable example is one of our telecom customers, which we worked with during the last year; we finished the program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery for stakeholders in the program, there were nine different companies represented. There were outsourced vendors on the application support side in the acquired company, outsourcers on the application side for the acquiring company, and outsourcers in the data centers that operated data center infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies that all needed to be executed in less than 96 hours in order to meet the downtime window required by the acquiring company’s executive management.

It was the detailed scorecarding, and operating war rooms to keep those scorecards up to date in real time, that allowed us to accomplish that. There’s just no possible way we would have been able to do it otherwise.
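
A back-of-envelope calculation, using only the figures quoted above, shows why a real-time cadence was the only workable one:

```python
# Figures quoted above: 2,500 tasks, 9 companies, under 96 hours.
tasks, hours, companies = 2500, 96, 9

print(f"{tasks / hours:.0f} tasks completed per hour, around the clock")  # ~26
print(f"{tasks / (hours * companies):.1f} tasks per hour per company")    # ~2.9
```

At roughly 26 task completions an hour, a scorecard refreshed only once a day would lag hundreds of tasks behind reality.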

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

I think that HP was very helpful in working with the customer and bringing that perspective into the program very early on, because there had been a failed attempt to operate this program prior to that, and with our assistance and with developing these tools and capabilities, we were able to successfully achieve the objectives of that program.

Gardner: One thing that jumped out at me there was your use of the words real time. How important is it to capture this data and adjust it and update it in real-time, where there’s not a lot of latency? How has that become so important?

Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.

In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics. So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.

We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.

Gardner: So as is often the case, good information in, good results back.

Lawton: Absolutely.

Organizing infrastructure

Gardner: Let’s move now to our third panelist today. We're going to hear about why organizing facilities and infrastructure planning in conjunction with one another is so important.

Now to Larry Hinman. Larry, let’s go historical for a second. Has there usually been a completely separate direction for facilities planning and IT infrastructure? Why was that the case, and why is it so important to end that practice?

Larry Hinman: Hi, Dana. If you look over time and over the last several years, everybody has data centers and everybody has IT. The things that we've seen over the last 10 or 15 years are things like the Internet and criticality of IT and high density and all this stuff that people are talking about these days. If you look at the ways companies organized themselves several years ago, IT was a separate organization, facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to try to get IT groups and facilities organizations to talk and work with each other, there is still this gap in how to truly glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically will model IT and data centers by looking at power requirements, even when they're attempting to glue the two together.

One of the things that we spotted a few years ago was that when companies do this, the risk of over provisioning or under provisioning is very high. We tried to figure out a way to back this up a few notches.

How can we remedy this problem and how can we bring some structure to this and bring some, what I would call, sanity to the whole equation, to be able to have something predictable over time? What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.

So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.

One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.

Getting synchronized

Look at that, look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what risk profiles and tolerances for risk are from an IT perspective and how to run the business, gluing that together with an IT infrastructure strategy, and then gluing all that into a data center facility strategy.

What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, and all the groovy stuff that people talk about these days, and have something that I could really manage. If you're called into the boss’s office, as I and others have been over the many years of my career, and asked what the data center is going to look like over the next five years, at least I would have some hope of trying to answer that question.

That is kind of the secret sauce here, and the way we developed our framework was by breaking this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it means culture change. It’s a threat to the very way the organization is structured from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together, collaboratively and collectively, to be able to start to glue this together.

Gardner: You mentioned earlier the issues around energy and the ongoing importance around the cost structure for that. I suppose it's not just fitting these together, but making them fit for purpose. That is to say, IT and facilities on an ongoing basis.

It’s not really something that you do and sit still, as would have been the case several years ago, or in the past generation of computing. This is something that's dynamic. So how do you allow a fit-for-purpose goal with data-center facilities to be something that you can maintain over time, even as your requirements change?

Hinman: You just hit a very important point. One of the big lessons learned for us over the years has been that it's not enough to provide this kind of modeling and predictability for clients and customers just once. We had to get out of this mode of doing it once and putting it on a shelf -- deploying a future-state data center framework, getting the client pointing in the right direction, and then stopping.

The data, as you said, gets archived, and they pick it up every few years and do it again and again, finding out that a lot of times there's an "aha" moment in the gaps between those efforts.

One thing that we have learned is to not only have this deliberate framework, broken into these four simplistic areas where we can manage all of this, but to redevelop and re-hone our tools and our focus a little bit, so that we could use this as a dynamic, ongoing process to keep the client pointing in the right direction. Build a data center framework that truly is right-sized, integrated, aligned, and all that stuff. But then, have something very dynamic that the client can manage over time.

That's what we've done. We've taken all of our modeling tools and integrated them into common databases, where now we can start to glue together even the operational piece of data center infrastructure management (DCIM), or architecture and infrastructure management, facilities management, etc., so now the client can have this real-time, long-term, what we call a 10-year view of the overall operation.

So now, you do this. You get it pointing the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we have been heading with all of our tools and processes over the last two to three years.

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various, what we call, facility sourcing options -- and PODs are certainly one of those these days -- we've also been very careful to make sure that our framework is completely unbiased when it comes to a specific sourcing option.

What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers. It was all about space, then it became all about power, then about cooling. We were still in this brick-and-mortar age, but modularity and scalability have been driving everything.

With PODs coming on the scene, along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make our framework almost generic, so that we can complete all the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.

We find these days that POD is actually a very nice fit with a lot of our clients, because it provides high-density server farms, it provides things that they can implement very quickly, and it gets the power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take hold with a lot of customers.
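
For readers unfamiliar with the metric: PUE is total facility power divided by the power drawn by the IT equipment itself, so 1.0 would mean zero overhead for cooling and power distribution. A minimal sketch with hypothetical loads, not HP's published EcoPOD figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; legacy rooms often run close to 2.0."""
    return total_facility_kw / it_equipment_kw

# Hypothetical loads, for illustration only:
print(pue(1900, 1000))  # 1.9 -- a typical legacy brick-and-mortar room
print(pue(1200, 1000))  # 1.2 -- the range modular designs aim for
```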

Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, and these methods even more productive, when we start to factor in the movement toward private cloud, the need to support a growing tier of mobile devices, and the fact that we're looking, of course, for even more savings on those long-term energy and operating costs.

Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: Yes, Dana. In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also into separate companies and suppliers working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data driven, and that is supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating cloud environments simplifies things from a customer perspective, but it does add some additional complexity in the infrastructure and operations of the organization as well. All of those complexities mean that even more attention needs to be brought to the details of the program and to where responsibilities lie among stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure. So perhaps more large data centers to support more types of applications to even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.

It could be that, based on a specific client’s business requirements and IT strategy, it will require possibly a couple of large-scale core data centers and multiple remote sites, or it could just be a bunch of smaller types of facilities.

It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are, whether it’s high availability, synchronous, all the groovy stuff, and then coming up with a framework that matches and integrates all those requirements.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to be able to have something that you can at least manage to control cost and control this whole framework and manage to a future-state business requirement, before you can even start to really deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.

Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.

I'd like to thank our guests, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB; Randy Lawton, Practice Principal in the Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Tuesday, November 15, 2011

Germany's Largest Travel Agency Starts a Virtual Journey to Get Branch Office IT Under Control

Transcript of a sponsored podcast discussion from VMworld 2011 in Copenhagen on how DER Deutsches Reisebüro virtualized 2,300 desktops to centralize administration.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the VMworld 2011 Conference in Copenhagen. We're here in the week of October 17 to explore the latest in cloud computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I’ll be your host throughout this series of VMware-sponsored BriefingsDirect discussions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Our next case study focuses on how Germany’s largest travel agency has remade its PC landscape across 580 branch offices using virtual desktops. We’ll learn how Germany’s DER Deutsches Reisebüro redefined its desktop delivery vision and successfully implemented 2,300 Windows XP desktops as a service.

Here to tell us what this major VDI deployment did in terms of business, technical, and financial payoffs is Sascha Karbginski, Systems Engineer at DER Deutsches Reisebüro, based in Frankfurt. Welcome to the show, Sascha.

Sascha Karbginski: Hi, Dana.

Gardner: Why were virtual desktops such an important direction for you? Why did it make sense for your organization?

Karbginski: In our organization, we’re talking about 580 travel agencies all over the country, all over Germany, with 2,300 physical desktops, which were not in our control. We had life cycles out there of about 4 or 5 years. We had old PCs with no client backups.

The biggest reason is that recovery times for a workplace were 24 hours between a hardware change and bringing back all the software, configuration, etc. Desktop virtualization was a chance to get the desktops into our data center, to get the security, and to get control.

Gardner: So this seemed to be a solution that’s solved many problems for you at once.

Karbginski: Yes. That’s right.

Gardner: All right. Tell me a little bit about DER, the organization. I believe you’re a part of the REWE Group and you’re the number one travel business in Germany. Tell us a little bit about your organization before we go further into why desktop virtualization is good for you.

Karbginski: DER in Germany is the number one in travel agencies. As I said, we're talking about 580 branches. We’re operating as a leisure travel agency with our branches, Atlasreisen and DER, and also, in the business travel sector with FCm Travel Solutions.

IT-intensive business

Gardner: This is a very IT-intensive business now. Everything in travel is done through networked applications and cloud and software-as-a-service (SaaS) services. So there is very intensive IT activity in each of these branches.

Karbginski: That’s right. Without the reservation systems, we can’t do any flight bookings or reservations or check hotel availability. So without IT, we can do nothing.

Gardner: And tell me about the problem you needed to solve in a bit more detail. You had four generations of PCs. You couldn’t control them. It took a lot of time to recover if there was a failure, and there was a lot of different software that you had to support.

Karbginski: Yes. We had no domain integration and no control, and when we had crashes, for example, all the data would be gone. We had no backups out there. And we changed the desktops about every four or five years. For example, when the reservation system needed more memory, we had to buy the memory, service providers were going out there, and everything was done during business hours.

Gardner: Okay. So this would have been a big toll on your helpdesk and for your support. With all of these people in these travel bureau locations calling you, it sounds like it was a very big problem.

To what degree have you fully virtualized all of these desktops? Do you have a 100-percent deployment, or are you still phasing deployment across these different organizations and agencies?

Karbginski: We're at nearly 100 percent virtualization now. We have only two or three offices still to come; we have some problems with the service provider for the VPN connection there. So it's about 99 percent virtualization.

Gardner: That's pretty impressive. What were some of the issues that you encountered in order to enable this? Were there network infrastructure or bandwidth issues? What were some of the things that you had to do in order to enable this to work properly?

Karbginski: There were some challenges during the rollout. The bandwidth was a big thing. Our service provider had to work very hard for us, because we needed more bandwidth out there. The links we had at our offices were 1- or 2-Mbit links to the headquarters data center. With desktop virtualization, we needed a little bit more, depending on the number of workplaces, and we needed better quality on the lines.

So bandwidth was one thing. We also had the network infrastructure. We found some 10-Mbit half-duplex switches, so we had to change them. And we also had some hardware problems. We had special multi-card boards for payment, to read out passports or credit card information. They were very old and connected via PS/2.

A lot of problems

So there were a lot of problems, and we fixed them all. We changed the switches. Our service provider for Internet VPN connection brought us more quality. And we changed the keyboards. We don’t need this old stuff anymore.

Gardner: And so, a bit of a hurdle overcome, but what have been some of the payoffs? How has this worked out in terms of productivity, energy savings, lowering costs, and even business benefits?

Karbginski: Savings were a big thing in planning this project. The desktops have been running out there for about one year now, and we know that we have up to 80 percent energy savings just from changing the hardware out there. We’re running the Wyse P20 Zero Client instead of physical PC hardware.

Gardner: How about on the server side; are there energy benefits there?

Karbginski: We needed more energy on the server side in the data center, but if you look at it overall, we have 60 to 70 percent energy savings. I think that's really great.
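
A worked example helps square the two figures: the endpoint swap saves most of the desk-side power, but the VDI hosts in the data center claw some of it back. The wattages below are assumptions for illustration, not DER's measured numbers:

```python
# Assumed wattages for illustration only -- not DER's measured figures.
desktops = 2300
pc_watts = 80                   # legacy desktop PC at the branch
zero_client_watts = 15          # Wyse P20 zero client
server_watts_per_desktop = 10   # per-desktop share of added VDI host power

old_kw = desktops * pc_watts / 1000
new_kw = desktops * (zero_client_watts + server_watts_per_desktop) / 1000

print(f"endpoint-only saving: {1 - zero_client_watts / pc_watts:.0%}")  # ~81%
print(f"overall saving:       {1 - new_kw / old_kw:.0%}")               # ~69%
```

With those assumptions, a roughly 80 percent saving at the endpoint nets out to roughly 70 percent overall, consistent with the range Karbginski cites.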

Gardner: That’s very good. So what else comes in terms of productivity? Is there a storage or a security benefit by having that central control?

Karbginski: As far as security, we've blocked the USB sticks now out there. So the data is under our control in the data center, and important company information is not left in an office out there. Security is a big thing.

Gardner: And how about revisiting your helpdesk and support? Because you have a more standardized desktop infrastructure now, you can do your upgrades much more easily and centrally, and you can support people based on access rights directly to the server infrastructure. What’s been the story in terms of productivity and support in the helpdesk?

Karbginski: In the past, the updates came during business hours. Now, we can do all software updates at night, on weekends, or when an office is closed. So helpdesk cost is reduced by about 50 percent.

Gardner: Wow. That adds up.

Karbginski: Yeah, that’s really great.

Gardner: How big a team did it take to implement the virtualized desktop infrastructure activity for you? Was this a big expenditure in terms of people and time to get this going?

Few personnel

Karbginski: We built up the whole infrastructure -- I think it was 9 or 10 months, not counting the planning -- with a team of three persons, three administrators.

Gardner: Wow.

Karbginski: And now we're managing, planning, deploying, and updating it. I really think it's not a good idea to do this with just three people, but it works.

Gardner: And you’ve been the first travel organization in Germany to do this, but I understand that others are following into your footsteps.

Karbginski: I've heard from some other companies that are interested in a solution like this. We were the first one in Germany, and many people told us that it wouldn't work, but we showed it works.

Gardner: And you're a finalist for the TechTarget VMware Best Award because of the way in which you’ve done this, how fast you’ve done it, and to the complete degree that you’ve done it. So I hope that you do well and win that.

Karbginski: I received an email that we are one of the finalists, and it would be a great thing.

Gardner: Now that we understand the scope and breadth of what you’ve done, tell me a little about some of the hurdles that you’ve had to overcome. The fact that you're doing this with three people is very impressive. What does the implementation consist of? What have you got in place, in terms of product, that has become your de-facto industry stack for VDI?

Karbginski: I can also talk about some problems we had with this, because with the network component, for example, we have another team for it.

Gardner: I was actually wondering what products are in place? What actual technology have you chosen that then enabled you to move in this direction so well? Software, hardware, the whole stack, what is the data center stack or set of components that enables your VDI?

Karbginski: We're using Dell servers with two sockets, quad-core, and 144 GB of RAM. We're also using an EMC Clariion SAN with 25 terabytes. The network infrastructure is Cisco, based on 10-Gb Nexus data center switches. At the beginning of the project, we had View 4.0, and we upgraded it last month to 4.6.
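
As a rough consistency check on that bill of materials, here is a back-of-envelope sizing sketch. The per-VM memory footprint and hypervisor reserve are assumptions chosen for illustration; DER's actual design values aren't given in the interview:

```python
import math

# Host spec from the interview; the rest are illustrative assumptions.
desktops = 2300
host_ram_gb = 144
ram_per_vm_gb = 1.5         # assumed footprint for a Windows XP desktop VM
hypervisor_reserve_gb = 16  # assumed overhead for the hypervisor itself

vms_per_host = int((host_ram_gb - hypervisor_reserve_gb) / ram_per_vm_gb)
hosts_needed = math.ceil(desktops / vms_per_host)

print(f"~{vms_per_host} VMs per host")                       # ~85
print(f"~{hosts_needed} hosts before HA/failover headroom")  # ~28
```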

The people side

Gardner: What were some of the challenges in terms of working this through the people side of the process? We've talked about process, we've talked technology, but was there a learning curve or an education process for getting other people in your IT department as well as the users to adjust to this?

Karbginski: There were some unknown challenges, or some new challenges, during the rollout -- with the network team, for example. The most important thing was an understanding of virtualization. It's an enterprise environment now, and if someone, for example, restarts the firewall in the data center, the desktops in our offices get disconnected.

It's really important to inform the other departments and also your own help desk.

Gardner: So there are a lot of different implications across the more traditional or physical environment. How about users? Have they been more satisfied? Is there something about a virtual desktop, perhaps the speed at which it boots up, or the ability to get new updates and security issues resolved? How have the end users themselves reacted?

Karbginski: The first thing that the end users told us was that the selling platform from Amadeus, the reservation system, runs much faster now. This was the first thing most of the end users told us, and that’s a good thing.

The next is that the desktop follows the user. If the user works in one office now and next week in another office, he gets the same desktop. If the user is at the headquarters, he can use the same desktop, same outlook, and same configuration. So desktop follows the user now. This works really great.

Gardner: Looking to the future, are you going to be doing this following-the-user capability to more devices, perhaps mobile devices or at home PCs? Is there the ability to take advantage of other endpoints, perhaps those even owned by the end users themselves and still deliver securely the applications and data that you need?

Karbginski: We plan to implement a security gateway with PCoIP support, so that home-office users or mobile users can access the same company desktop, with all their data on it, from nearly every computer in the world, to give users more flexibility.

Gardner: So I should think yet another payoff on the investments that you’ve made is that you will now be able to take the full experience out to more people and more places, for relatively little money.

Karbginski: The number of desktops stays the same, because the user gets the same desktop. We don’t need two or three desktops for one user.

Gardner: Right, but they're able to get the information on more devices, more screens as they say, but without you having to buy and manage each of those screens. How about advice for others? If you were advising someone on what to learn from your experience as they now move toward desktop virtualization, any thoughts about what you would recommend for them?

Inform other departments

Karbginski: The most important thing is to get in touch with the other departments and inform them about the thing you're doing. Also, inform the user help desk directly at the beginning of the project. So take time to inform them what desktop virtualization means and which processes will change, because we know most of our colleagues had a wrong understanding of virtualization.

Gardner: How was it wrong? What was their misunderstanding do you think?

Karbginski: They think that with virtualization everything will change, that we'll need other support servers, and that it's just a new thing that nobody needs. If you inform them about what you're doing, and that nothing will change for them, because all support processes are the same as before, they will accept it and understand the benefits for the company and for the users.

Gardner: We’ve been talking about how DER Deutsches Reisebüro has been remaking its PC landscape across 580 branch offices. They're Germany’s largest travel agency, and they’ve deployed desktop virtualization successfully and very broadly, at nearly 100 percent across their environment. So it's very impressive. I’d like to thank our guest, Sascha Karbginski, Systems Engineer at DER Deutsches Reisebüro. Thank you so much, Sascha.

Karbginski: Thank you, Dana.

Gardner: And thanks to our audience for joining this special podcast coming to you from the VMworld 2011 Conference in Copenhagen. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast discussion from VMworld 2011 in Copenhagen on how DER Deutsches Reisebüro virtualized 2,300 desktops to centralize administration. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Tuesday, June 07, 2011

Deep-Dive Discussion on HP's New Converged Infrastructure, EcoPOD and AppSystem Releases at Discover

Transcript of a sponsored podcast discussion in conjunction with HP Discover 2011 on how HP's converged infrastructure strategy supports data center transformation and applications modernization.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today we present a sponsored podcast discussion in conjunction with the HP Discover 2011 conference in Las Vegas.

We’ll explore some major news around converged infrastructure and data center transformation, and learn how these strategic business goals of enterprises are more tightly aligned than ever to how IT infrastructure modernization takes root.

Until fairly recently, large IT organizations were grappling with a lot of unknown unknowns when it comes to the rapidly shifting requirements for their infrastructure and facilities. There was a sizable risk of locking in too quickly or in adopting unproven technology -- and then paying a dear price later, either in wasted investments or ending up with insufficient resources.

But now, after a series of rapidly maturing trends around application types, cloud computing, mobility, and changing workforces, the proper IT requirements mix seems much clearer. In just the past few years, the definition of what a modern IT infrastructure needs and what it needs to do has finally come into focus. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We know, for example, that we’ll see most data centers converge their servers, storage, and network platforms intelligently. We know that we’ll see higher levels of virtualization across these platforms and more applications, and that, in turn, will support the adoption of hybrid and cloud models.

We’ll surely see more compute resources devoted to big data and business intelligence (BI) values that span ever more applications and data types. And of course, we’ll need to support far more mobile devices and distributed, IT-savvy workers.

There is no longer a lot of risk in describing the quintessential data center of today and tomorrow and in recognizing that it will need to be highly energy efficient, automated, flexible, and modular. It will need to scale up and down and to adapt without complexity, delay, or undue waste.

How well companies modernize and transform these strategic and foundational IT resources will hugely impact their success in managing their own agile growth and in controlling ongoing costs and margins. Indeed, the mingling of IT success and business success is clearly inevitable.

So, now comes the actual journey. At HP Discover, the news is largely about making the inevitable future happen more safely by being able to transform the IT that supports businesses in all of their computing needs for the coming decade. IT executives must execute rapidly now to manage how the future impacts them and to make rapid change an opportunity, not an adversary.

How to execute

We're here with a panel of HP executives to explore the how -- no longer dwelling on the why or when -- to best execute on converged infrastructure and data center transformation. Please join me now in welcoming our panel, Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business. Welcome, Helen.

Helen Tang: Thanks, Dana. Great to be here.

Gardner: We are also here with Jon Mormile, Worldwide Product Marketing Manager for Performance-Optimized Data Centers in HP's Enterprise Storage Servers and Networking (ESSN) group within HP Enterprise Business. Welcome, Jon.

Jon Mormile: Thanks, Dana. Glad to be here.

Gardner: And, we're here with Jason Newton, Manager of Announcements and Events for HP ESSN. Welcome, Jason.

Jason Newton: Thanks, Dana.

Gardner: And lastly, Brad Parks, Converged Infrastructure Strategist for HP Storage in the HP ESSN organization. Welcome, Brad.

Brad Parks: Thanks. Glad to be here.

Gardner: Helen, let me start with you. You've been looking at these trends, and we’ve summed up a little bit of the urgency, but also the clarity when it comes to what’s needed. You’ve done additional research leading up to the Discover conference here in Las Vegas. What are some of the findings, and how are the trends from your perspective coming together to make this IT transformation inevitable?

Tang: Last year, HP rolled out this concept of the Instant-On Enterprise, and it’s really about the fact that we all live in a very much instant-on world today. Everybody demands instant gratification, and to deliver that and meet their constituents’ needs, an enterprise really needs to become more agile and innovative, so it can scale up and down dynamically to meet these demands.

In order to get answers straight from our customers on how they feel about the state of agility in their enterprise, we contracted with an outside agency and conducted a survey earlier this year with over 3,000 enterprise executives. These were CEOs, CIOs, CFOs across North America, Europe, and Asia, and the findings were pretty interesting.

Essentially, there were three buckets of questions asked in the survey, titled "The State of Enterprise Agility." The first set of questions was, "How important do you believe agility is in the enterprise?" Not surprisingly, over 95 percent of respondents said it's very critical. It’s important to their overall enterprise success, not just in IT.

The second bucket of questions was, "If that’s the case, how agile do you feel your current organization is?" Less than 40 percent of our respondents said, "I think we are doing okay. I think we have enough agility in the organization to be able to meet these demands."

Not surprising

So the number is low, but not very surprising to those of us who have worked in IT for a while. As you know, compared to other enterprise disciplines, IT is a little bit pre-Industrial Revolution. It’s not streamlined. It’s not standardized. There's a long way to go. That clearly spells out a big opportunity for companies to work on that area and optimize for agility.

The last area or bucket of questions we asked was, "What do you think is going to change that? How do you think enterprises can increase their agility?" The top two responses coming back were about more innovative, newer applications.

But, the number one response coming from CEOs was that it’s transforming their technology environment. That’s precisely what HP believes. We think transforming that environment and by extension, converged infrastructure, is the fastest path towards not only enterprise agility, but also enterprise success.

Gardner: Let’s look at some of the news. There are various parts, and they are related. If we take them in a certain order, I think we can then look at why the whole is greater than the sum of the parts.

Let’s start with Brad Parks. Looking at the Storage Foundation, HP Storage, tell me how we got here. Why has storage been, in fact, fractured, difficult to manage, and quite expensive? And then, what have we done here at Discover to help bring that together and make storage part of a larger converged infrastructure?

Parks: A couple of years ago, HP took a step back from the trajectory that we were on as a storage business, and the trajectory that the storage industry as a whole was on. We took a look at some of the big trends and problems that we were starting to hear about from customers, around virtualization, the move to cloud computing, and this concept of really big everything.

We’re talking about data, numbers of objects, size, performance requirements, just everything at massive, massive scale. When we took a look at those trends, we saw that we were really approaching a systemic failure of the storage that was out there in the data center.

The challenge is that most of the storage deployed out in the data center today was architected about 20 years ago for a whole different set of data-center needs, and when you couple that with these emerging trends, the current options at that time were just too expensive.

They were too complicated at massive scale and they were too isolated, because 20 years ago, when those solutions were designed, storage was its own element of the infrastructure. Servers were managed separately. Networking was managed separately, and while that was optimized for the problems of the day, it in turn created problems that today’s data centers are really dealing with.

Thinking about that trajectory, we decided to take a different path. Over the last two years, we’ve spent literally billions of dollars through internal innovation, as well as some external acquisitions, to put together a portfolio that was much better suited to address today’s trends.

Common standard

At the event here, we're talking about HP Converged Storage, and this addresses some of the gaps that we’ve seen in the legacy monolithic and even the legacy unified storage that’s out there. Converged Storage is built on a few main principles: we're driving toward common industry-standard hardware, building on ProLiant BladeSystem-based DNA.

We want to drive a lot more agility into storage in the future by using modern Scale-Out software layers. And last, we need to make sure that storage is incorporated into the larger converged infrastructure and managed as part of a converged stack that spans servers and storage and network.

Gardner: Looking at some of the specifics, it seems as if cost is a big issue here. You've done a lot to bring cost down, going standard, and making utilization of storage more integrated into the other facets of the infrastructure. What sort of cost savings are we looking at, when you really do this well and when you look at it strategically?

Parks: There are really different aspects to cost, thinking first about capital expense. When we're able to design on industry-standard platforms like BladeSystem and ProLiant, we can take advantage of the massive supply chain that HP has and roll out solutions at a much lower upfront cost from a hardware perspective.

Second, using that software layer I mentioned, one of the technologies that we bring to bear is thin provisioning, for example. This is a technology that helps customers cut their initial capacity requirements by around 50 percent, just by eliminating the over-provisioning associated with some of the legacy storage architectures.
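
To see where a figure like 50 percent can come from, consider a simplified worked example. The capacities below are invented for illustration, not HP benchmarks; the point is that thick provisioning forces you to buy everything applications ask for, while thin provisioning lets you buy closer to what they actually write:

```python
# Invented figures for illustration -- not HP benchmarks.
allocated_tb = 100       # capacity the applications asked for up front
written_tb = 35          # what they have actually consumed
growth_buffer_tb = 15    # headroom kept on hand for near-term growth

thick_purchase = allocated_tb                  # must back every allocated TB
thin_purchase = written_tb + growth_buffer_tb  # back only consumed TB + buffer

saving = 1 - thin_purchase / thick_purchase
print(f"capacity saving: {saving:.0%}")  # 50%
```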

Then, operating expense is the other place where this really gets expensive. That's where it helps to consolidate management across servers, storage, and networking, building in as much automation as possible, and even making the solutions self-managing.

For example, our 3PAR Storage solution, which is part of this converged stack, has autonomic management capabilities that, when we talk to our customers, have reduced some of their management overhead by about 90 percent. It's self-managing and can load balance, and because of its wide-striping architecture, it can respond to some of the unpredictable workloads in the data center without requiring the administrative overhead.

Gardner: I suppose there's a bit of a catalytic effect when you do storage properly, with more of a modern architecture. You start to be able to move to greater efficiencies in terms of the data lifecycle, managing data along an intelligent path based on where it is and isn't used. Is there a larger role here for data that also plays into BI, at least addressing data as a lifecycle rather than as a problem asset?

Parks: One of the things we've seen and talk about with customers worldwide is that data just doesn't go away. It is around forever, and that has contributed to this massive data growth. So, one of the things we're looking at within the HP Converged Storage portfolio is how we not only help customers store that information -- for example, the ability to look across up to 16 petabytes through a single pane of glass, one management view across that massive amount of information -- but also help them extract more value out of it.

Jason might talk a little bit more about the Vertica AppSystem solution, but within the storage domain, we're looking at building intelligent search capabilities into these solutions, along with automated tiering that moves data around, either by physical location or by physical tier, to get more efficient and to extract more value out of that content.
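As a rough illustration of the automated-tiering idea, the hypothetical policy below promotes frequently accessed data to fast media and demotes cold data to cheap capacity. The tier names and thresholds are invented for the example, not drawn from any HP product.

```python
# Hypothetical automated-tiering policy (tier names and thresholds are
# invented for illustration): extents move between tiers based on how often
# they have been accessed recently.

SSD, FAST_DISK, ARCHIVE = "ssd", "fast_disk", "archive"

def choose_tier(accesses_last_24h):
    """Map an extent's recent access count to a storage tier."""
    if accesses_last_24h >= 1000:
        return SSD        # hot data earns the fastest, most expensive media
    if accesses_last_24h >= 10:
        return FAST_DISK  # warm data sits on ordinary spinning disk
    return ARCHIVE        # cold data moves to dense, cheap capacity

extents = {"orders_index": 25_000, "q2_reports": 140, "logs_2008": 2}
print({name: choose_tier(hits) for name, hits in extents.items()})
# {'orders_index': 'ssd', 'q2_reports': 'fast_disk', 'logs_2008': 'archive'}
```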

Gardner: Let's now move to Jason Newton. Jason, tell us about the big Converged Systems portfolio news, and perhaps a bit more about the AppSystem that Brad referenced.

Converged Infrastructure

Newton: We're really excited about this announcement. If you've heard anything from HP over the last few years, you've certainly heard a lot about Converged Infrastructure and our strategy. In 2009, we started looking at the sprawl that customers were dealing with and the impact it was having on their businesses and environments. We saw that, looking ahead 5 or 10 years, convergence would be the dominant trend.

That's the direction things were going. We felt that HP was in a great position to be the one to deliver on the promise of converging server, storage, network, management, security, and application resources into individual solutions.

So, 2009 was about articulating the definition of what that should look like and what the data center of the future should be. Last year, we spent a lot of time on new innovations in blades and mission-critical computing, and on strategic acquisitions in storage, networking, and other areas.

The result last year was what we believe is one of the most complete portfolios from a single vendor in the marketplace for delivering converged infrastructure. What we're doing in 2011 is building on that to bring it all together, simplify it into integrated solutions, and extend that strategy all the way out to the application.

If we look at the kinds of applications customers are deploying today and the ways they're deploying them, we see three dominant new models. One is applications in a virtualized environment, on virtual machines, which have very specific requirements and demands for performance, and concerns about security.

We see a lot of acceleration and interest in applications delivered as a service via the cloud. That model also creates new demands on capacity and resource planning, and on automation and orchestration of all the bits and bytes of the application and the infrastructure.

The third model we wanted to address is the dedicated application environment. These are data warehousing and analytics types of workloads, and collaboration workloads, where performance is really critical and you want it handled not on shared resources, but in a dedicated way. But, you also want to make sure that it can still support applications in a cloud or virtual environment.

So, 2011 is about bringing that portfolio together into solutions that solve those three problems. The key thing is that we didn't want to extend sprawl and continue the problem that's still out there in the marketplace. We wanted to do all of that on one common architecture, one common management model, and one common security model.

If you look at this trend toward integration and convergence, and you see some of the answers out there in the marketplace, you’ll see, for example, unique architectural stacks dedicated to a data warehouse environment or a BI environment. Then, you’ll see a completely different physical and software architecture for a virtual environment.

Then, if you look at cloud, you see a whole other island of different tools, different parts, different pieces. With our converged infrastructure strategy, we had the opportunity to do something really special here.

Individual Solutions

What if we could take that common architecture, management, and security model, optimize it, and integrate it into individual solutions for those three different application sets, and do it on the technology customers are already using in their legacy application environments today? Then they'd have something really special.

What we're announcing today at Discover is this new portfolio we call Converged Systems. For the virtual workload, we have VirtualSystem. For the dedicated application environment, specifically BI, data management, and information management, we have the AppSystem portfolio. Then, for where most customers want to go in the next few years, cloud, we announced CloudSystem.

So, those are three portfolios where a common architecture addresses the complete continuum of customers' application demands. What's unique here is doing that in a common way, built on some of the best-of-breed technologies on the planet for virtualization, cloud, and high-performance BI and analytical applications.

Gardner: This is an example of truly converged infrastructure getting us to the level where we're looking at, if not quite business process, certainly a solution set and some very powerful capabilities being executed at that level.

Let's quickly dig into one of those levels, because it intrigues me: the Vertica acquisition. We now basically have a data warehouse, big data, real-time crunching capability with a modern architecture designed just for that, placed on the converged infrastructure. Tell me why that's important and why it could be a game changer when it comes to analytics.

Newton: There are a couple of things; you hit on two points there. One is the Vertica software in and of itself. Its architecture is one of the most modern out there today for handling analytics in real time.

Before, analytics in a traditional BI data warehouse environment was about reporting. Call up the IT manager, give him some criteria, and he goes back, does his wizardry, and comes back with a sort of status report, looking only at the dataset in the one data store he's querying.

That sort of worked, I guess, back when you didn't need the answer tomorrow or next week and could just wait until the next quarterly review. With the demands of big everything, as Brad was saying, and the speed and scale at which the economy, the business, and the competition are moving, you've got to have this stuff in real time.

So we said, "Let's go make a strategic acquisition. Let's get the best-in-class, real-time analytics platform, a modern architecture that does just that and does it extremely well. Then, let's combine that with the best hardware underneath it, HP Converged Infrastructure, so that customers can easily and quickly bring that capability into their environment and apply it in a variety of ways, whether in individual departments or across the enterprise."

Real-time analytics

There are endless ways you can take advantage of real-time analytics with this solution. Including it in an AppSystem makes it very easy to consume: bring it into the environment, get it up and running, start connecting data sources literally in minutes, and start running queries and getting answers back literally in seconds.

What's special about this approach is that most analytic tools today are part of a larger data warehouse or BI-centered architecture. Our argument is that in this future of big everything, where information is everywhere, you can't rely on just the data sources inside your enterprise. You've got to be able to pull sources from everywhere.

In buying a monolithic, one-size-fits-all system for OLTP, data warehousing, and a little bit of analytics, you're sacrificing the real-time aspect that you need. So keep the OLTP environment, keep the data warehouse environment, bring in a best-in-class real-time analytics engine on top of them, and give your business some very powerful capabilities to help it make better business decisions much faster.
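Vertica is a column-oriented engine, and a quick back-of-the-envelope model shows why that matters for this kind of real-time analytics. The field widths below are assumed values chosen only for illustration; the point is that a row store must read every byte of every row to answer an aggregate, while a column store reads only the column the query touches.

```python
# Back-of-the-envelope model of row-store vs. column-store I/O for a query
# like SELECT SUM(amount) FROM orders. Field widths are assumed values,
# chosen only to illustrate the layout difference.

N_ROWS = 100_000
ROW_WIDTH_BYTES = 8 + 16 + 8 + 1200   # id, region, amount, and a wide notes field
AMOUNT_WIDTH_BYTES = 8                # the only column the aggregate needs

bytes_scanned_row_store = N_ROWS * ROW_WIDTH_BYTES      # whole rows, every time
bytes_scanned_col_store = N_ROWS * AMOUNT_WIDTH_BYTES   # just the amount column

print(bytes_scanned_row_store / bytes_scanned_col_store)  # 154.0 -- two orders of magnitude less I/O
```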

Gardner: Very good. Jon Mormile, tell me now how these developments we've heard about from Brad and Jason come together and are supported by the data center transformation news here at Discover.

Mormile: Thanks, Dana. First of all, when you talk about today's data centers, most of them were built 10 years ago, and a lot of analyst research indicates many were built 14 or 15 years ago. These antiquated data centers simply can't support the infrastructure that today's IT and businesses require. They are extremely inefficient. Many of them require two to three times the amount of power to run the IT, due to inefficient cooling and power-distribution systems.

In addition, these monolithic data centers are typically over-provisioned and underutilized. Because most companies cannot build new facilities continually, they have to forecast future capacity and infrastructure requirements, and those forecasts are typically outdated before the data centers are even commissioned.

A lot of our customers are facing similar challenges. As I mentioned, there's the inability to accommodate today's IT and the lack of scalability. But they also have other driving factors affecting their businesses, such as the need to build scalable facilities quickly.

They need to reduce construction costs, as well as operational expenses. This places a huge strain on companies' resources and their bottom lines. By not changing their data center strategy, businesses are throttled and simply can't compete in today's aggressive marketplace.

Gardner: What are you doing to help them with that? What’s coming out? I'm intrigued by the EcoPOD, but there is more to it than that.

Mormile: For the challenges customers are facing today, HP absolutely has a solution. It centers on our modular computing portfolio, which helps solve these problems.

Modular computing

Our modular computing portfolio started about three years ago, when we first took an actual shipping container, modified it, and turned it into a Performance Optimized Data Center (POD).

This was followed by continuous innovation in the space: new POD designs; the deployment of our POD-Works facility, the world's first assembly line for data centers; the addition of our flexible data center product; and today, our newest addition, the POD 240A, which gives you all the benefits of a container data center without sacrificing the traditional data center look and feel.

Also, with the acquisition of EYP, which is now HP Critical Facilities Services, and by utilizing HP Technology Services, we are able to offer a true end-to-end data center solution, from planning and installation of the IT and the optimized infrastructure to go with it, through onsite maintenance and onsite support globally.

Gardner: So, we really have a continuum here. We're talking about AppSystems, where we've got appliances running specific apps, some Microsoft SQL Server databases, some SAP ERP implementations. Then we're going in a concerted fashion down into the infrastructure, talking about virtualization, and then right into the facilities, where we have these PODs and modular approaches with efficiencies built in for cooling and energy conservation.

It's end-to-end, but what's fascinating to me, and I'd like your take on this, Jon, is that it doesn't have to be adopted all at once. There are many different entry points.

Depending on the specifics of your enterprise or service provider, and whatever stage of development and maturity you're at, there's a way for you to jump on board and at least start taking action. That, I think, is the key here. Jon, can you speak about the ability to jump in at any point and still make significant progress?

Mormile: That's the whole basis of the modular computing portfolio and converged infrastructure. HP can deliver the server, storage, and networking solution. We actually offer these solutions to eight of the 10 leading social media companies.

When you combine that with in-house rack and power engineering, delivering finely tuned solutions to meet customers' growing power and rack needs, it all comes together. You're talking about taking that IT and those innovations to the next level by integrating them into a turnkey solution, which could be a POD or modular data center product.

You take the POD, and then you add the Factory Express services, where we're able to take the IT and integrate it into the POD, so you have the server, storage, and networking. You have integrated applications, and it's all cabled and tested.

The final step in the POD process is that, beyond Factory Express services, we also provide POD-Works. At POD-Works, we take the integrated racks that will be installed in the PODs and provide power, networking, and chilled water and cooling, so that every aspect of the turnkey data center solution is pre-configured and pre-tested. This way, customers have a fully integrated data center shipped to them. All they need to do is plug in the power and networking and/or add chilled water.

Game changer

Being able to have a complete data center on site and up and running in as little as six weeks is a tremendous game changer in the business, allowing customers to be more agile and more flexible, not only with their IT infrastructure needs, but also with their capital and operational expenses.

When you bring all that together, PODs offer customers the ability to deploy fully integrated, high-performing, efficient, scalable data centers at somewhere around a quarter of the cost and up to 95 percent more efficiently, all while doing it 88 percent faster than with traditional brick-and-mortar data center strategies.

Gardner: Jason, going to you now, pretty much the same question. We have this comprehensive ability. We have a much more rapid physical plant capability. This now allows for people to come in at different points in their maturity, but still have a roadmap or vision of how to get to a converged infrastructure, a transformed data center. What’s the process that you encounter at that AppSystem level, where people can get involved quickly? What would you recommend that they do first?

Newton: That depends on the customer. The whole point of the Converged Systems portfolio is that if you like the concept of converged infrastructure and want to get there, we have a very simple, flexible, optimized answer for each workload: virtual, cloud, and the dedicated application environment.

As to where a customer can start, go back and look at your business priorities and your level of maturity. We've got quite a few experts who will sit down with you and assess where you are on that continuum. The best place to start is with what your business is asking for and the problems you're trying to solve. What are the outcomes? What can you deliver? That's the place to go.

A reason someone would be looking at AppSystems is that someone in the business is saying, "I need to make much better decisions much faster." Maybe it's supply chain decisions, or it could be something in retail. Or, "I need to do better financial analysis or make better offers to my banking customers. I need something much more powerful than just the data that I have, and we need to do it very quickly."

I would say to look at the Vertica real-time analytics system or a data warehouse solution that we've co-developed with Microsoft. That would be a perfect place to start. The good news is that if your next priority, after getting that software into the business, is to get your virtual environment cleaned up and running more efficiently, more optimized and simplified in terms of management, VirtualSystem would be your next step.

If you're already doing a lot of virtualization today with HP on BladeSystem, on 3PAR, or on our LeftHand technology, I would say to build on that same architecture, keep all that in place, and upgrade that to a complete CloudSystem environment.

There are a lot of entry points. It really depends on the business priority and what you're trying to do. The good news of this approach is that you can come in at any point, and you can scale and extend, knowing that as you solve those different application needs, you're going to be doing it in a common way, without sacrificing best-in-class technology.

Gardner: We should also point out, Jason, that here at Discover we're seeing a lot of professional services and support announcements that dovetail with and supplement these other announcements. Maybe you could give us a very quick recap of where the professional services kick in; perhaps that's also a starting point.

Start services

Newton: You're right. There's a multitude of those at this show. We have some new professional services; I call them start services. We have AppStart, CloudStart, and VirtualStart services. These are services where we engage with the customer, sit down, and assess their level of maturity -- what they have in place and what their goals are.

These services are designed to get each of these systems into the environment, integrated into what you have, optimized for your goals and priorities, and up and running in days or weeks, versus the months or years that process would have taken in the past for building and integrating it. We do that very quickly and simply for the customer.

We've got a lot of expertise in these areas that we've been building over the last 20 years. Just as we're simplifying on the hardware, software, and application side, these start services do the same thing. That extends to HP Solutions support, which then kicks in and helps you support that solution across its lifecycle.

There is a whole lot more, but those are two really key ones that customers are excited about this week.

Gardner: Brad Parks, you've been hearing from Jason and Jon. They supplement and support what you're doing in storage. But when it comes to getting started, do you have any recommendations, whether it's professional services or some sort of path or model, for working the storage transformation and modernization process into these other, larger activities around AppSystems and facilities?

Parks: The approach is very consistent across the board. Converged Storage is a foundational building block that is materialized inside VirtualSystem, CloudSystem, and AppSystem, and those are integral parts of the larger converged data center that Jon talked about. Along the way, there are different entry points, and we certainly have a full set of services to help people get started.

One of the things we announced this week comes from the Technology Services organization, which recently did a complete reinvention of HP's consulting portfolio.

As customers have tried to modernize their storage infrastructure and take advantage of some of these converged storage trends, we've responded with a set of workshops and start services to get people down that path, as well as enterprise services for those customers looking to bridge between internal IT and cloud environments that might be hosted externally. Our HP 3PAR Utility Storage platform is now a standard offering as an outsourced storage service within HP Enterprise Services.

Last, we know that internal IT folks have to upskill and continually learn these new technologies, so that they can feed them back into their businesses. HP ExpertONE has recently come out with a full set of training and certification courseware to help our channel partners, as well as our customers' internal IT staff, learn about these new storage elements and how they can use these architectures to help transform their information management processes.

Gardner: Let's go to Helen Tang for the last word today. Helen, based on the research you've conducted and the fact that we have these large trends, some organizations are working toward cloud computing models, for example, more rapidly than others. Some organizations focus just on converting apps and modernizing them, or perhaps on adopting appliance models.

What is it about the research, and the fact that there are so many different ways organizations need to react to these trends, that makes the über view of what's been announced this week such a good fit?

Tang: Clearly, ever since HP launched our Converged Infrastructure strategy and portfolio in 2009, we've seen great traction in the analyst community and, more importantly, with our customers. We've helped over 1,000 customers at different stages of this journey, taking their existing data center environments and transforming them, so they can embrace convergence and maximize the enterprise agility we talked about earlier.

The set of announcements we're talking about at the show this week, and hopefully for the remainder of this year, are significant additions in each of their markets. They have the potential to transform, for example, storage, shaking up an industry that's been pretty static for the last 20 years by offering a completely new architecture designed for the world we live in today.

That's the kind of innovation we'll drive across the board with our customers. Everyone who spoke before me mentioned the service offerings we also bring along with these new product announcements, and I think that's key. The combination of our portfolio and our expertise is really going to help our customers drive that success and embrace convergence.

Gardner: Very good. You’ve been listening to a sponsored podcast discussion in conjunction with the HP Discover 2011 Conference on some major news around Converged Infrastructure and data center transformation. There is lots more information available through the various landing pages, and press reports on these events this week.

I'd like to thank our guests for adding more context, depth, and analysis. We've been joined by Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions; Jon Mormile, Worldwide Product Marketing Manager for Performance-Optimized Data Centers; Jason Newton, Manager of Announcements and Events for HP ESSN; and Brad Parks, Converged Infrastructure Strategist for HP Storage. Thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to our listeners, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast discussion in conjunction with HP Discover 2011 on how HP's converged infrastructure strategy supports data center transformation and applications modernization. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in: