
Monday, December 12, 2011

Efficient Data Center Transformation Requires Consolidation and Standardization Across Critical IT Tasks

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure. You don’t need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.

One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding with perhaps even growing budgets.

With us now to explain how these solutions can drive successful data center transformation is our panel, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.

Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that’s important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.

Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.

The other key benefit is that when you can manifest these quick wins in terms of some specific return on investment (ROI) business outcome, that also translates very nicely as well and gets a lot of key attention, which I think has some downstream benefits that actually help out the team in multiple ways.

Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.

Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.

One of the benefits in that is that you can actually break down these projects in terms of some specific types of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.




Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.

Campbell: That’s exactly right. A virtuous cycle is well put. That really allows the team to get the green light to go to the next step of the blueprint they are trying to execute on. It also gets a green light in terms of additional dollars and, in some cases, additional headcount to add to the team as well.

What this does, and I like this term, the virtuous cycle, is not only allow you to attract key talent, but really allow you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.

TCO savings

Campbell: That’s right. A good example is where we have been able to see a significant total cost of ownership (TCO) type of savings with one of our customers, McKesson, that in fact was taking one of these consolidated approaches with all their development tools. They saw a considerable savings, both in terms of dollars, over $12.9 million, as well as a percentage of TCO savings that was upwards of 50 percent.

When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.

Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.

Randy, why is it so important to know your baseline tracks and then measure them each and every step along the way?

Randy Lawton: Thank you, Dana. Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.

So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.

It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.




From that, we bridge into the second phase, which is architect and validate, where we begin to solution out and develop the strategies for a future-state design that includes the standardization and consolidation approaches, and on that begin to assemble the business case. In a detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that are required to be tracked to show progress of the application teams and infrastructure teams that contribute to the program in order to guarantee success and provide visibility to all the stakeholders as part of the program, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.

Gardner: As we heard from Duncan about why it’s important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.

Complex engagements

Lawton: That’s very true. In these complex engagements, it’s normally some time before there are quick-win type of achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years, through 2008, we were building six new data centers so that we could consolidate 185 worldwide. So it was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was going on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep very visible to management the movements and momentum of the program.




Gardner: Randy, I know that many organizations are diligent about the scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?

Lawton: It absolutely is, Dana. Often in these kinds of programs, there are business activities and projects going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to be fit together at some level.

What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it’s typically from the infrastructure perspective where we pull together all of the application and infrastructure activities and sometimes the business metrics as well.

We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or that the business may be running separate metrics and then application teams and infrastructure are running the IT level metrics that all get rolled together into some consolidated reporting on some level.

Gardner: And that, of course, ensures that IT isn’t the odd man out, when it comes to being on time and in alignment with these other priorities. That sounds like a very nice addition to the way things may have been done five or 10 years ago.

Lawton: Absolutely.

Gardner: Any examples, Randy, either with organizations you could name, or use cases where you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?

Cloning applications

Lawton: A very notable example is one of our telecom customers we worked with during the last year and finished a program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery for stakeholders in the program, there were nine different companies represented. There were some outsourced vendors from the application support side in the acquiree’s company, outsourcers in the application side for the acquiring company, and outsourcers in the data centers that operated data center infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies, all of which needed to be executed in less than 96 hours in order to meet the downtime window required by the acquiring company’s executive management.

It was the detailed scorecarding, and operating war rooms to keep those scorecards up to date in real time, that allowed us to accomplish that. There’s just no possible way we would have been able to do that ahead of time.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

I think that HP was very helpful in working with the customer and bringing that perspective into the program very early on, because there had been a failed attempt to operate this program prior to that, and with our assistance and with developing these tools and capabilities, we were able to successfully achieve the objectives of that program.

Gardner: One thing that jumped out at me there was your use of the words real time. How important is it to capture this data and adjust it and update it in real-time, where there’s not a lot of latency? How has that become so important?

Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.

In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics. So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.

We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.

Gardner: So as is often the case, good information in, good results back.

Lawton: Absolutely.

Organizing infrastructure

Gardner: Let’s move now to our third panelist today. We're going to hear about why organizing facilities planning and IT infrastructure planning in conjunction with one another is so important.

Now to Larry Hinman. Larry, let’s go historical for a second. Has there usually been a completely separate direction for facilities planning and IT infrastructure? Why was that the case, and why is it so important to end that practice?

Larry Hinman: Hi, Dana. If you look over time and over the last several years, everybody has data centers and everybody has IT. The things that we've seen over the last 10 or 15 years are things like the Internet and criticality of IT and high density and all this stuff that people are talking about these days. If you look at the ways companies organized themselves several years ago, IT was a separate organization, facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to get IT groups and facilities organizations to talk and work with each other, there is still this gap in truly how to glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically model IT and data centers, even when attempting to glue them together, by looking at power requirements.

One of the things that we spotted a few years ago was that when companies do this, the risk of over provisioning or under provisioning is very high. We tried to figure out a way to back this up a few notches.




How can we remedy this problem and how can we bring some structure to this and bring some, what I would call, sanity to the whole equation, to be able to have something predictable over time? What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.

So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.

One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.

Getting synchronized

Look at that, look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what risk profiles are and tolerances for risk are from an IT perspective and how to run the business, gluing that together with an IT infrastructure strategy, and then gluing all that into a data center facility strategy.

What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, all of the groovy stuff that people talk about these days, and have something that I could really manage. If you're called into the boss’s office, as I and others have been many times over the years in my career, and asked what the data center is going to look like over the next five years, at least I would have some hope of trying to answer that question.

That is kind of the secret sauce here, and the way we developed our framework was to break this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it requires culture change. It’s a threat to the way the very organization is organized from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together collaboratively and collectively to be able to start to glue this together.

Gardner: You mentioned earlier the issues around energy and the ongoing importance of its cost structure. I suppose it's not just fitting these together, but making them fit for purpose. That is to say, IT and facilities, on an ongoing basis.




It’s not really something that you do and sit still, as would have been the case several years ago, or in the past generation of computing. This is something that's dynamic. So how do you allow a fit-for-purpose goal with data-center facilities to be something that you can maintain over time, even as your requirements change?

Hinman: You just hit a very important point. One of the big lessons learned for us over the years has been this ability to not only provide this kind of modeling and predictability over time for clients and customers, but also to get out of this mode of doing it once, deploying a future-state data center framework that keeps the client pointing in the right direction, and then putting it on a shelf.

The data, as you said, gets archived, and they pick it up every few years and do it again and again, often finding that there's an "aha" moment during those gaps in between.

One thing that we have learned is to not only have this deliberate framework, broken into these four simplistic areas where we can manage all of this, but to redevelop and re-hone our tools and our focus a little bit, so that we could use this as a dynamic, ongoing process to keep the client pointing in the right direction. Build a data center framework that truly is right-sized, integrated, aligned, and all that stuff, but then have something very dynamic that they can manage over time.

That's what we've done. We've taken all of our modeling tools and integrated them to common databases, where now we can start to glue together even the operational piece, of data center infrastructure management (DCIM), or architecture and infrastructure management, facilities management, etc., so now the client can have this real-time, long-term, what we call a 10-year view of the overall operation.

So now, you do this. You get it pointing the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we have been heading with all of our tools and processes over the last two to three years.

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various facility sourcing options, as we call them -- and PODs are certainly one of those these days -- we've also been very careful to make sure that our framework is completely unbiased when it comes to any specific sourcing option.

What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers. It was all about space, then it became all about power, then about cooling, but we were still in this brick-and-mortar age, even as modularity and scalability were driving everything.

With PODs coming on the scene along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make our framework almost generic, so that we can complete all the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.




We find these days that the POD is actually a very nice fit for all of our clients, because it provides high-density server farms, it provides things they can implement very quickly, and it gets the power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take hold with a lot of customers.

Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, these methods even more productive, when we start to factor in the movement toward private cloud, the need to support a growing tier of mobile devices, and the fact that we're looking, of course, for even more savings on long-term energy and operating costs.

Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: Yes, Dana. In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also into separate companies and suppliers working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data driven and supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating cloud environments simplifies things from a customer perspective, but it does add some additional complexity to the infrastructure and operations of the organization as well. All of those complexities mean that even more attention needs to be paid to the details of the program and to where responsibilities lie among stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure. So perhaps more large data centers to support more types of applications to even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.




It could be that based on a specific client’s business requirements and IT strategy that it will require possibly a couple of large-scale core data centers and multiple remote sites and/or it could just be a bunch of smaller types of facilities.

It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are, whether it’s high availability, synchronous, all the groovy stuff, and then coming up with a framework that matches all those requirements that it’s integrating.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to be able to have something that you can at least manage to control cost and control this whole framework and manage to a future-state business requirement, before you can even start to really deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.

Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency.

This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.

I'd like to thank our guests, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB; Randy Lawton, Practice Principal in the Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Friday, October 28, 2011

Continuous Improvement And Flexibility Are Keys to Successful Data Center Transformation, Say HP Experts

Transcript of a sponsored podcast in conjunction with an HP video series on how companies can transform data centers productively and efficiently.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on two major pillars of proper and successful data center transformation (DCT) projects. We’ll hear from a panel of HP experts on proven methods that have aided productive and cost-efficient projects to reshape and modernize enterprise data centers.

This is the first in a series of podcasts on DCT best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today, we’ll learn about the latest trends buttressing the need for DCT and then how to do it well and safely. Specifically, we’ll delve into why it's important to fully understand the current state of an organization’s IT landscape and data center composition in order to then properly chart a strategy for transformation.

Secondly, we'll explore how to avoid pitfalls by balancing long-term goals with short-term flexibility. The key is to know how to constantly evaluate based on metrics and to reassess execution plans as DCT projects unfold. This avoids being too rigidly aligned with long-term plans and roadmaps and potentially losing sight of how actual progress is being made -- or not.

With us now to explain why DCT makes sense and how to go about it with lower risk, we are joined by our panel: Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business; Mark Grindle, Master Business Consultant at HP, and Bruce Randall, Director of Product Marketing for Project and Portfolio Management at HP.

Welcome to you all.

My first question goes to Helen. What are the major trends driving the need for DCT? Also, why is now such a good time to embark on such projects?

Helen Tang: We all know that in this day and age, the business demands innovation, and IT is really the racing engine for any business. However, there are a lot of external constraints. The economy is not getting any better, and budgets are very, very tight. Companies are dealing with IT sprawl and aging infrastructure, and are very much weighed down by a decade of old assets that they've inherited.

So a lot of companies today have been looking to transform, but getting started is not always easy. So HP decided to launch this HUB project, which is designed to be a resource engine for IT, featuring a virtual library of videos showcasing the best of HP, but more importantly, ideas for how to address these challenges. We as a team decided to tackle it with a series aligned around some of the ways customers can approach their data centers, transform them, and jump-start their IT agility.

The five steps that we decided on as keys for the series are: the planning process, which is actually what we're discussing in this podcast; data center consolidation, as well as standardization; virtualization; data center automation; and last but not least, of course, security.

IT superheroes

To make this video series more engaging, we hit on this idea of IT as superheroes, because we've all seen people, especially in this day and age, customers with a lean budget, whose IT teams are really performing superhuman feats.

We thought we'd produce a series that's a bit more light-hearted than is usual for HP. So we added a superhero angle to the series. That's how we hit upon the name "IT Superhero Secrets: Five Steps to Jump Start Your IT Agility." Hopefully, this is going to be one of the little things that can contribute to the great process of data center modernization going on right now, which is a key trend.

With us today are two of these experts that we’re going to feature in Episode 1. And to find these videos, you go to hp.com/go/thehub.

Gardner: Now we’re going to go to Mark Grindle. Mark, you've been doing this for quite some time and have learned a lot along the way. Tell us why having a solid understanding of where you are in the present puts you in a position to better execute on your plans for the future.

Mark Grindle: Thank you, Dana. There certainly are a lot of great reasons to start transformation now.

But as you said, whether it's transformation, data center consolidation, or any of these great things like virtualization and technology refresh that will help you improve your environment, improve service to your customers, and reduce costs, which is what this is all about, the key to starting any kind of major initiative is to understand where you are today.

Most companies out there with the economic pressures and technology changes that have gone on have done a lot to go after the proverbial low-hanging fruit. But now it’s important to understand where you are today, so that you can build the right plan for maximizing value the fastest and in the best way.

When we talk about understanding where you are today, there are a few things that jump to mind. How many servers do I have? How much storage do I have? What are the operating system levels and the versions that I'm at? How many desktops do I have? People really think about that kind of physical inventory and they try to manage it. They try to understand it, sometimes more successfully and other times less successfully.

But there's a lot more to understanding where you are today. Understanding that physical inventory is critical to what you need to know to go forward, and most people already have a lot of tools out there to do that. For those of you who don't have tools that can capture that physical inventory, it's important that you get them.

I've found so many times when I go into environments that they think they have a good understanding of what they have physically, and a lot of times they do, but rarely is that accurate. Manual processes just can't keep things as accurate or as current as you really need, when you start trying to baseline your environment so that you can track and measure your progress and value.

Thinking about applications

Of course, beyond the physical portions of your inventory, you'd better start thinking about your applications. What are your applications? What language are they written in? Are they traditional, supportable commercial-off-the-shelf (COTS) applications? Are they homegrown? That's going to make a big difference in how you move forward.

And of course, what does your financial landscape look like? What's going into operating expense? What's your capital expense? How is it allocated out, and, by the way, is it consistently allocated?

I've run into a lot of issues where a business unit in the United States has put certain items into an operating expense bucket. In another country or a sub-business unit or another business unit, they're tracking things differently in where they put network cost or where they put people cost or where they put services. So it's not only important to understand where your money is allocated, but what’s in those buckets, so that you can track the progress.

Then, you get into things like people. As you start looking at transformation, a big part of transformation is not just the cost savings that may come about through being able to redeploy your people, but it's also from making sure that you have the right skill set.

If you don’t really understand how many people you have today, what roles and what functions they’re performing, it's going to become really challenging to understand what kind of retraining, reeducation, or redeployment you’re going to do in the future as the needs and the requirements and the skills change.

You transform as you level out your application landscape, as you consolidate your databases, as you virtualize your servers, and as you make better use of storage, all of those great technologies. That's going to make a big difference in how your team, your IT organization, runs operations. You really need to understand where they are, so you can properly prepare them for that future space that they want to get into.

So understanding where you are, and understanding all those aspects of it, is the only way to figure out what you have to do to get to your end state. As was mentioned earlier, you need the metrics and measurements to track your progress. Are you realizing the value, the savings, the benefit to your company that you initially used to justify transformation?

Gardner: Mark, I had a thought when you were talking. We’re not just going from physical to physical. A lot of DCT projects now are making that leap from largely physical to increasingly virtual. And that is across many different aspects of virtualization, not just server virtualization.

Is there a specific requirement to know your physical landscape better to make that leap successfully? Is there anything about moving toward a more virtualized future that places added emphasis on this need to have a really strong sense of your present state?

Grindle: You're absolutely right on with that. A lot of people have server counts -- I've got a thousand of these, a hundred of those, 50 of those types of things. But the more detailed measurements around those -- how much memory is being utilized by each server, how much CPU is being utilized by each server, what the I/O and network connectivity look like -- are the kinds of inventory items that are going to allow you to virtualize.

Higher virtualization ratios

I talk to people and they say, "I've got a 5:1 or a 10:1 or a 15:1 virtualization ratio," meaning that you take 15 physical servers and consolidate them onto one. But if you really understand what your environment is today, how it runs, and the performance characteristics of your environment, there are environments out there that are achieving much higher virtualization ratios -- 30:1, 40:1, 50:1. We've seen a couple that are in the 60:1 and 70:1 range.

Of course, that just says that initially they weren’t really using their assets as well as they could have been. But again, it comes back to understanding your baseline, which allows you to plan out what your end state is going to look like.

If you don't have that data, if you don't have that information, naturally you've got to be a little more conservative in your solutions, because you don't want to negatively impact the business or the customers. If you understand your environment a little better, you can achieve greater savings, greater benefits.

Remember, this is all about freeing up money that your business can use elsewhere to help your business grow, to provide better service to those customers, and to make IT more of a partner, rather than just a service purely for the business organization.
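The sizing math behind those consolidation ratios can be sketched in a few lines. This is a hedged illustration only: the host capacities, the per-server peak figures, and the 60 percent utilization target below are invented for the example, not HP's sizing methodology.

```python
# Hypothetical sketch: estimate a consolidation (virtualization) ratio from
# measured per-server peak utilization. All numbers are illustrative.

HOST_CPU_CORES = 32        # capacity of one target virtualization host (assumed)
HOST_MEM_GB = 256
TARGET_UTILIZATION = 0.60  # leave headroom so peaks don't impact customers

# Peak measurements gathered from the baseline inventory (assumed values)
servers = [
    {"name": "app-01", "peak_cpu_cores": 1.5, "peak_mem_gb": 6.0},
    {"name": "app-02", "peak_cpu_cores": 0.8, "peak_mem_gb": 4.0},
    {"name": "db-01",  "peak_cpu_cores": 4.0, "peak_mem_gb": 24.0},
]

def consolidation_ratio(servers):
    """How many of these physical servers fit on one host, on average."""
    total_cpu = sum(s["peak_cpu_cores"] for s in servers)
    total_mem = sum(s["peak_mem_gb"] for s in servers)
    # Hosts needed is driven by whichever resource is the bottleneck.
    hosts_by_cpu = total_cpu / (HOST_CPU_CORES * TARGET_UTILIZATION)
    hosts_by_mem = total_mem / (HOST_MEM_GB * TARGET_UTILIZATION)
    hosts_needed = max(hosts_by_cpu, hosts_by_mem, 1e-9)
    return len(servers) / hosts_needed

print(f"Estimated ratio: {consolidation_ratio(servers):.0f}:1")
```

With real measured peaks, the same arithmetic shows why well-instrumented environments can justify 30:1 or higher, while planners without the data must stop at more conservative ratios.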

Gardner: So it sounds as if measuring your current state isn’t just measuring what you have, but measuring some of the components and services you have physically in order to be able to move meaningfully and efficiently to virtualization. It’s really a different way to measure things, isn’t it?

Grindle: Absolutely. And it's not a one-time event. To start out and figure out whether transformation is right for you and what your transformation would look like, you can do that one-time inventory, that one-time collection of performance information. But it's really going to be an ongoing process.

The more data you have, the better you’re going to be able to figure out your end-state solution, and the more benefit you’re going to achieve out of that end state. Plus, as I mentioned earlier, the environment changes, and you’ve got to constantly keep on top of it and track it.

You mentioned that a lot of people are going towards virtualization. That becomes an even bigger problem. At least when you’re standing up a physical server today, people complain about how long it takes in a lot of organizations, but there are a lot of checks and balances. You’ve got to order that physical hardware. You've got to install the hardware. You’ve got to justify it. It's got to be loaded up with software. It’s got to be connected to the network.

A virtualized environment can be stood up in minutes. So if you’re not tracking that on an ongoing basis, that's even worse.

Gardner: Let's now go to Bruce Randall. Bruce, you've been looking at the need for being flexible in order to be successful, even as you've got a long-term roadmap ahead of you. Perhaps you could fill us in on why it's important to evaluate along the way, not be blinded by long-term goals, and keep balancing and reassessing?


Account for changes

Bruce Randall: That goes along with what Mark was just saying about the infrastructure components, how these things are constantly changing, and there has to be a process to account for all of the changes that occur.

If you're looking at a transformation process, it really is a process, not a one-time event that occurs over a length of time. Just like any other big program or project that you may be managing, you have to plan not only at the beginning of the transformation, but also in the middle, and sometimes even at the end of these big transformation projects.

If you think about the things that may change throughout that transformation, one is people. You have people who come, and people who leave for whatever reason. You have people who are reassigned to other roles, or who take roles they wanted outside of the transformation project. The company strategy may even change and, in this economy, most likely will within the course of the transformation project.

The money situation will most likely change. Maybe you’ve had a certain amount of budget when you started the transformation. You counted on that budget to be able to use it all, and then things change. Maybe it goes up. Maybe it goes down, but most likely, things do change. The infrastructure as Mark pointed to is constantly in flux.

So even though you might have gotten a good steady state of what the infrastructure looked like when you started your transformation project, that does change as well. And then there's the application portfolio. As we continue to run the business, we continue to add applications or enhance existing ones. The application portfolio changes, and therefore so do the needs within the transformation.

Because of all of these changes occurring around you, there's a need not only to plan for contingencies at the beginning of the process, but also to continue the planning process and update it as things change, fairly consistently. What I've found over time, Dana, with various customers doing these transformation projects, is that the planning stage is not just at the beginning, not just at the middle, not just at one point -- it's continuous. Treated that way, the planning process goes a lot better and becomes a lot easier.

In fact, I was speaking with a customer the other day. We went to a baseball game together. It was a customer event, and I was surprised to see this particular customer there, because I knew their yearly planning cycle was going on. I asked them about that, and they talked about the way they had used our tools. The HP tool sets they used allowed them to literally do planning all the time, so they could attend a baseball game instead of attending the planning fire drill.

So it wasn’t a one-time event, and even if the business wanted a yearly planning view, they were able to produce that very, very easily, because they kept their current state and current plans up to date throughout the process.

Gardner: This reminds me that we've spoken in the past, Bruce, about software development. Successful software development for a lot of folks now involves agile principles. There are these things they call scrum meetings, where people get together and they're constantly reevaluating or adjusting, getting inputs from the team.

Having just a roadmap and then sticking to it turns out not to be business as usual, but can actually be a path to disaster. Any thoughts about learning from how software is developed in terms of planning for a large project like a DCT?

A lot of similarities

Randall: Absolutely. There are a lot of similarities between the new agile methodologies and what I was just describing in terms of planning at the beginning, in the middle, and at the end -- basically constantly. And when I say the word "plan," I know that evokes in some people the thought of a lot of work, a big thing. In reality, what I'm talking about is much smaller than that.

If you're doing it frequently, the planning needs to be a lot smaller. It's not a huge, involved process. It's very much like the agile methodology, where you're consistently doing little pieces of work, finishing up sub-segments of the entire thing you need to do, as opposed to describing it all, having all your requirements written out at the beginning, and then waiting for it to get done sometime later.

You're actually adapting and changing as things occur. What's important in the agile methodology, as well as in the planning process I talked about for transformation, is that you still have to give management visibility into what's going on.

Having a planning process, and even a tool set to help you manage that planning process, will also give management the visibility they need into the status of the transformation project. The planning process, like the agile development methodology, also allows collaboration. As you go back to the plan, readdress it, and think about the changes that have occurred, you're collaborating across various groups and silos to make sure that you're still in tune and still doing the things you need to do to make things happen.

One other thing that's often forgotten within the agile development methodology, but is still very important, particularly for transformation, is the ability to track the cost of that transformation at any given point in time. Maybe that's because the budget needs to be increased, or maybe it's because you're getting an executive mandate that the budget will be decreased, but at least knowing what your costs are and how much you've spent is very, very important.

Gardner: When you say that, it reminds me of something from 20 years or more ago in manufacturing: the whole quality revolution, thought leaders like Deming and the Japanese kaizen concept of constantly measuring, constantly evaluating, not letting things slip. Is there some relationship between what you're doing in project management and what we saw during this "quality revolution" several decades ago?

Randall: Absolutely. You see some of the tenets of project management here. Number one, you're tracking what's going on. You're measuring what's going on at every point in time, not only the costs and the time frames, but also the people who are involved. Who's doing what? Are they fulfilling the tasks we've asked them to do, and so on. This produces, in the end, just as Deming and others have described, a much higher quality transformation than if you were to just haphazardly attempt the transformation without a project management tool in place, for example.

Gardner: So we've discussed some of these major pillars of good methodological structure and planning for DCT. How do you get started? Are there resources available to get folks better acquainted with these -- to begin putting measurements in place, knowing their current state, and creating a planning process that's flexible and dynamic -- before they even get into a full-fledged DCT? I'll open this up to the entire panel.

Randall: One place I would start is with the multiple resources from HP and others that help customers in their transformation process, both to plan out initially what that transformation is going to look like and to provide a set of tools to automate and manage that program, and the changes that occur to it, over time.

That planning is important, as we've talked about, because it occurs at multiple stages throughout the cycle. If you have an automated system in place, it certainly makes it easier to track the plan and changes to that plan over time.
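As a rough illustration of that kind of automated plan tracking (a sketch, not any specific HP product), a minimal approach is to keep every revision of the plan so that "what changed since the last cycle" becomes a query rather than a fire drill. The milestone names, dates, and budget figures below are invented for the example.

```python
# Illustrative sketch: store every revision of a transformation plan and
# report what changed between revisions. All data here is hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PlanRevision:
    when: date
    budget: float
    milestones: dict  # milestone name -> target date

@dataclass
class TransformationPlan:
    revisions: list = field(default_factory=list)

    def revise(self, when, budget, milestones):
        """Record a new snapshot instead of overwriting the old plan."""
        self.revisions.append(PlanRevision(when, budget, dict(milestones)))

    def changes_since(self, index=-2):
        """Compare the latest revision against an earlier one."""
        old, new = self.revisions[index], self.revisions[-1]
        return {
            "budget_delta": new.budget - old.budget,
            "new_milestones": set(new.milestones) - set(old.milestones),
        }

plan = TransformationPlan()
plan.revise(date(2011, 1, 15), 2_000_000, {"consolidate-dc": date(2011, 9, 1)})
plan.revise(date(2011, 4, 1), 1_800_000,
            {"consolidate-dc": date(2011, 10, 1),
             "virtualize-tier2": date(2011, 12, 1)})
print(plan.changes_since())
```

Keeping revisions rather than overwriting the plan is what turns the yearly planning view into a report instead of an event, as in the baseball-game anecdote above.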

Gardner: And then you’ve created this video series. You also have a number of workshops. Are those happening fairly regularly at different locations around the globe? How are the workshops available to folks just to start in on this?

A lot of tools

Grindle: We do have a lot of tools, as I was mentioning. One I want to highlight is the Data Center Transformation Experience Workshop, and the reason I want to highlight it is that it really ties into what we've been talking about today. It's an interactive session involving large panels, with very minimal presentation and very minimal speaking by the HP facilitators.

We walk people through all the aspects of transformation, and this is targeted at a strategic level. We're looking at the CIOs, CTOs, and executive decision makers, helping them understand why HP did what it did as far as transformation goes.

We discuss what we've seen out in the industry and what the current trends are, and draw out of the conversation where these people's companies are today. At the end of the workshop -- and it's a full-day workshop -- a lot of materials are delivered that not only document the discussions throughout the day, but also provide a step, or steps, for how to proceed.

So it's a prioritization. You may have a facility, for example, that's in great shape, but your data warehouses are not. That's an area to go after fast, because there's a lot of value in changing it, and it's going to take you a long time. Or there's a quick hit in your organization and the way you manage your operation, because we cover all the aspects of program management, governance, and management of change -- that's organizational change, for a lot of people. As for the technology, we can help them understand not only where they are, but what the initial strategy and plan should be.

You brought up a little bit earlier, Dana, some of the quality people like Deming, etc. We’ve got to remember that transformation is really a journey. There's a lot you can accomplish very rapidly. We always say that the faster you can achieve transformation, the faster you can realize value and the business can get back to leveraging that value, but transformation never ends. There's always more to do. So it's very analogous to the continuous improvement that comes out of some of the quality people that you mentioned earlier.

Gardner: I'm curious about these workshops. Are they happening relatively frequently? Do they happen in different regions of the globe? Where can you go specifically to learn about where the one for you might be next?

Grindle: The workshops are scheduled with companies individually. So a good touch point would be your HP account manager. He or she can work with you to schedule a workshop and understand how it can be done. They're scheduled as needed.

We do hold hundreds of them around the world every year. It’s been a great workshop. People find it very successful, because it really helps them understand how to approach this and how to get the right momentum within their company to achieve transformation, and there's also a lot of materials on our website.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on two major pillars of proper and successful DCT projects, knowing your true state to start and then also being flexible on the path to long-term milestones and goals.

I’d like to thank our panel, Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business; Mark Grindle, Master Business Consultant at HP, and Bruce Randall, Director of Product Marketing for Project and Portfolio Management at HP. Thank you to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions, and thanks again for our audience and their listening and attention, and do come back next time.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a sponsored podcast in conjunction with an HP video series on how companies can transform data centers productively and efficiently. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Wednesday, July 06, 2011

Case Study: T-Mobile's Massive Data Center Transformation Journey Wins Award Using HP ALM Tools

Transcript of a BriefingsDirect podcast on how award-winning communications company T-Mobile improved application quality while setting up two new data centers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas. We're here on the Discover show floor the week of June 6 to explore some major enterprise IT solution trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions.

Our latest user case study focuses on an award-winning migration and transformation, and a grand-scale data center transition, for T-Mobile. I was really impressed with the scope and size, and with how short the timeframe was, for you all to do this.

We're here with two folks who are going to tell us more about what T-Mobile has done to set up two data centers, and how in the process they have improved their application quality and the processes behind their application lifecycle management (ALM). [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

So join me in welcoming Michael Cooper, Senior Director of Enterprise IT Quality Assurance at T-Mobile. Welcome, Michael.

Michael Cooper: Thank you.

Gardner: We're also here with Kirthy Chennaian, Director Enterprise IT Quality Management at T-Mobile. Welcome.

Kirthy Chennaian: Thank you. It's a pleasure.

Gardner: People don’t just do these sorts of massive, hundred million dollar-plus activities because it's nice to have. This must have been something that was really essential for you.

Cooper: Absolutely. There are some definite business drivers behind setting up a world-class, green data center and then a separate disaster-recovery data center. Just for a little bit of a clarification. The award that we won is primarily focused on the testing effort and the quality assurance (QA) effort that went into that.

Gardner: Kirthy, tell me why you decided to undertake both an application transformation as well as a data center transformation -- almost simultaneously?

Chennaian: Given the scope and complexity of the initiative, ensuring system availability was the major driver behind this. QA played a significant role in ensuring that both data centers were migrated simultaneously and that the applications were available in real time, and from a quality assurance and testing standpoint we had to meet time frames and timelines.

Gardner: Let's get a sense of the scope. First and foremost, Michael, tell me about T-Mobile and its stature nowadays.

Significant company

Cooper: T-Mobile is a national provider of voice, data, and messaging services. Right now, we're the fourth largest carrier in the US and have about 33 million customers and $21 billion in revenue, actually a little bit more than that. So, it's a significant company.

We're a company that’s really focused on our customers, and we've gone through an IT modernization. The data center efforts were a big part of that IT modernization, in addition to modernizing our application platform.

Gardner: Let's also talk about the scope of your movement to a new data center, and then we can get into the application transformation parts of that. In a nutshell, what did you do here? It sounds like you've set up two modern data centers, and then migrated your apps and data from an older one into those.

Chennaian: Two world-class data centers, as Michael pointed out: one in Wenatchee, Washington, and the other in Tempe, Arizona. The primary data center is the one in Wenatchee, and the failover disaster-recovery data center is in Tempe, Arizona.

Cooper: What we were doing was migrating more than 175 Tier 1 applications, plus Tier 0 and some Tier 2 as well. It was a significant effort requiring quite a bit of planning, and the HP tools had a big part in that, especially in the QA realm.

Gardner: Now, were these customer-facing apps, internal apps, logistics? Are we talking about retail? Give me a sense of the scope here on the breadth and depth of your apps?

Chennaian: Significant. We're talking critical applications that are customer-facing. We're talking enterprise applications that span across the entire organization. And, we're also talking about applications that support these critical front-end applications. So, as Michael pointed out, 175 applications needed to be migrated across both of the data centers.

For example, moving T-Mobile.com, which is a customer-facing critical application, ensuring that it was transitioned seamlessly and was available to the customer in real-time was probably one of the key examples of the criticality behind ensuring QA for this effort.

Gardner: IT is critical for almost all companies nowadays, but I can't imagine a company where technology is more essential and critical than T-Mobile as a data and services carrier.

What's the case with the customer response? Do you have any business metrics, now that you’ve gone through this, that demonstrate not just that you're able to get better efficiency and your employees are getting better response times from their apps and data, but is there like a tangible business benefit, Michael?

Near-perfect availability

Cooper: I can't give you the exact specifics, but we've had significant increases in our system up-time and almost near-perfect availability in most areas. That’s been the biggest thing.

Kirthy mentioned T-Mobile.com. That’s an example where, instead of the primary and the backup, we actually have an active-active situation in the data center. So, if one goes down the other one is there, and this is significant.

A significant part of the way that we used HP tools in this process was not only the functional testing with Quick Test Professional and Quality Center, but we also did the performance testing with Performance Center and found some very significant issues that would have gone on to production.

This was a unique situation, because we actually got to do the performance testing live in the performance environments. We had to scale up to real performance types of loads, and we found some real issues before customers ever had to face them.

The other thing that we did that was unique was high-availability testing. We tested each server to make sure that if one went down, the other ones were stable and could support our customers.
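The single-failure logic behind that kind of high-availability test can be sketched simply: verify that if any one node goes down, the remaining capacity still covers the load. This is an illustrative sketch only; the node names and capacity figures are invented, not T-Mobile's actual configuration or HP's test tooling.

```python
# Hedged sketch of an N-1 availability check: can the cluster lose its
# largest node and still serve the required load? All values are invented.

def survives_single_failure(nodes, required_capacity):
    """True if the cluster can lose any one node and still serve the load."""
    total = sum(nodes.values())
    # Worst case: the biggest node is the one that fails.
    return total - max(nodes.values()) >= required_capacity

# Active-active pair, each side sized to carry the full load alone
active_active = {"wenatchee": 100, "tempe": 100}
print(survives_single_failure(active_active, 100))   # True

# A pair where neither side can carry the load alone fails the check
undersized = {"primary": 60, "backup": 60}
print(survives_single_failure(undersized, 100))      # False
```

An active-active pair passes only because each side is sized for the full load, which is exactly the property the team describes verifying server by server.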

Gardner: Now, this was a case where not only were you migrating apps, but you were able to go in and make sure that they were going to perform well within this new environment. As you pointed out, Michael, you were able to find some issues in those apps in the transition, and at the same time you upgraded to the more recent refreshes of the HP products to do that.

So, this was literally changing the wings on the airplane when it was still flying. Tell me why doing it all at once was a good thing.

Chennaian: It was the fact that we were able to leverage the additional functionality that the HP suite of products provides. We were able to deliver application availability, ensure the time frame for the migration, and leverage the automation tools that HP provides. With Quick Test Professional, for example, we migrated from version 9.5 to 10.0, and we were able to leverage the business process testing functionality in Quality Center.

As a whole, from an application lifecycle management and from an enterprise-wide QA and testing standpoint, it allowed us to ensure system availability and QA on a timely basis. So, it made sense to upgrade as we were undergoing this transformation.

Cooper: Good point, Kirthy. In addition to upgrading our tools and so forth, we also upgraded many of the servers to some of the latest Itanium technology. We also implemented a lot of the state-of-the-art virtualization services offered by HP, and some of the other partners as well.

Streamlined process

Using HP tools, we were able to create a regression test set for each of our Tier 1 applications in a standard way and a performance test for each one of the applications. So, we were able to streamline our whole QA process as a side-benefit of the data migration, building out these state-of-the-art data centers, and IT modernization.
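The "standard way" described here, one regression suite and one performance budget per Tier 1 application, could be modeled as a small test-plan registry. This is only a sketch of the pattern: the class, the app names, and the `execute` callback are hypothetical, and in practice the executor would drive Quick Test Professional or Quality Center rather than return a canned result.

```python
from dataclasses import dataclass

@dataclass
class AppTestPlan:
    """One standard plan per Tier 1 app: a regression suite plus a perf budget."""
    name: str
    regression_cases: list
    perf_budget_ms: int = 500  # hypothetical response-time target

    def run_regression(self, execute):
        # 'execute' is whatever runs a named case (a QTP script, an HTTP driver...).
        return {case: execute(case) for case in self.regression_cases}

plans = [
    AppTestPlan("billing", ["login", "invoice"], perf_budget_ms=300),
    AppTestPlan("customer-care", ["lookup", "update"]),
]

# Trivial stand-in executor so the sketch runs end to end.
results = {p.name: p.run_regression(lambda case: "pass") for p in plans}
print(results)
```

Keeping every application on the same plan shape is what makes the whole portfolio testable in one streamlined QA pass, which is the side benefit the migration delivered.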

Gardner: So, this really affected operations. You changed some platforms, you adopted the higher levels of virtualization, you're injecting quality into your apps, and you're moving them into an entirely new facility. That's very impressive, but it's not just me being impressed. You've won a People's Choice Award, voted by peers of the HP software community and their Customer Advisory Board. That must have felt pretty good.

Cooper: It feels excellent. In 2009, we won the IT Transformation Award. So, this isn't our first time to the party. That was for a different project. I think that in the community people know who we are and what we're capable of. It's really an honor that the people who are our peers, who read over the different submissions, decided that we were the ones that were at the top.

Gardner: And I hear that you've won some other awards as well.

Cooper: We've won lots of awards, but that's not why we do it. The reason we go after the awards is the team. It's a big morale builder for the team. Everybody is working hard. Some of these project people work night and day to get them done, and the proof of the pudding is the recognition by the industry.

Honestly, we also couldn't do without great executive support. Our CIO has a high belief in quality and really supports us in doing this. It's nice that we've got the industry recognition as well.

Gardner: Of course, the proof of the pudding is in the eating. You've got some metrics here, and they're pretty impressive in terms of availability, cost savings, reduction in execution time, performance and stability improvements, and higher systems availability.

Give me a sense, from an IT perspective: if you were to go to some other organization, not in the carrier business, of course, what would you tell them this really did for you in performance and in the metrics that count to IT?

Cooper: The metrics I can speak to are from the QA perspective. We were able to do the testing and we never missed one of the testing deadlines. We cut our testing time using HP tools by about 50 percent through automation, and we can pretty accurately measure that. We probably have about 30 percent savings in the testing, but the best part of it is the availability. But, because of the sensitive nature and competitive marketplace, we're not going to talk exactly about what our availability is.

Gardner: And how about your particular point of pride on this one, Kirthy?

Chennaian: For one, being able to get recognized is an acknowledgement of all the work you do, and for your organization as a whole. Mike rightly pointed out that it boosts the morale of the organization. It also enables you to perform at a higher level. So, it's definitely a significant acknowledgment, and I'm very excited that we actually won the People's Choice Award.

Gardner: A number of other organizations, across a range of industries, are going to be facing the same kind of situation, where it's not just going to be a slow, iterative improvement process. They're going to have to go catalytic and make wholesale changes in the data center, looking for that efficiency benefit.

You've done that. You've improved on your QA and applications lifecycle benefits at the same time. With that 20-20 hindsight, what would you have done differently, or at least what could you advise people who are going to face a similar large, complex, and multifaceted undertaking?

Planning and strategy

Chennaian: If I were to do this again, I think there is definitely a significant opportunity with respect to planning and investing in the overall strategy of QA and testing for such a significant transformation. There has to be a standard methodology. You have to have the right toolsets in place. You have to plan for the entire transformation as a whole. Those are significant elements in successful transformation.

Gardner: Monday morning quarterback for you, Michael?

Cooper: We did a lot of things right. One of the things that we did right was to augment our team. We didn’t try to do the ongoing work with the exact same team. We brought in some extra specialists to work with us or to back-fill in some places. Other groups didn’t and paid the price, but that part worked out for us.

Also, it helped to have a seat at the table and say, "It's great to do a technology upgrade, but unless we really have the customer point of view and focus on the quality, you're not going to have success."

We were lucky enough to have that executive support and the seat at the table, to really have the go/no-go decisions. I don't think we really missed one in terms of ones that we said, "We shouldn't do it this time. Let's do it next time." Or, ones where we said, "Let's go." I can't remember even one application we had to roll back. Overall, it was very good. The other thing is, work with the right tools and the right partners.

Gardner: With data center transformation, after all, it's all about the apps. You were able to maintain that focus. You didn't lose sight of the apps?

Cooper: Definitely. The applications do a couple of things. One group is the ones that support the customers directly. Those have to have really high availability, and we were able to speed them up quite a bit with the newest and latest hardware.

The other part is the apps that people don't think about that much, the ones that support the front lines: retail, customer care, and so forth. I would say that our business customers, or internal customers, have also really benefited from this project.

Gardner: Well great. We've been talking about a massive undertaking with data center transformation and application QA and lifecycle improvements and the result was a People's Choice Award won here at the Discover Show in Las Vegas. It's T-Mobile, the winner. We've been talking with their representatives here. Michael Cooper, the Senior Director of Enterprise IT Quality Assurance. Thanks again, Michael.

Cooper: Thank you, and we're very proud of the team.

Gardner: We are also here with Kirthy Chennaian, the Director of Enterprise IT Quality Management at T-Mobile. Thanks.

Chennaian: Thank you. Very excited to be here.

Gardner: And thanks to our audience for joining this special BriefingsDirect podcast coming to you from the HP Discover 2011 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of User Experience Discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how award-winning communications company T-Mobile improved application quality while setting up two data centers. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in: