Wednesday, April 07, 2010

Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the huge drive for improvement around enterprise data centers. Many enterprises, if not nearly all, are involved nowadays with some level of data-center transformation either in the planning stages or in outright execution. The heightened activity runs the gamut from retrofitting and designing new data centers to then building and occupying them.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, and where completely green-field data centers -- with modern design and facilities -- are coming online.

These are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners, as well as the business leadership and line of business managers. The payoffs are potentially huge, as we'll see, from doing data center design properly, but the risks are also quite high, if things don't come out as planned.

The latest definition of the data center is focused on being what's called fit-for-purpose: using best practices and assessments of existing assets, and correctly projecting future requirements, to get that data center just right -- productive, flexible, efficient, and well understood and managed.

The goal through these complex undertakings at these data centers is to radically improve how IT can deliver its services and be modern, efficient, and flexible.



Today, we're going to examine the lifecycle of data-center design and fulfillment through migration and learn about some of the payoffs when this goes as planned. We're going to learn more about a successful project at Valero Energy Corp.

We're here with two executives from Hewlett-Packard to look at proper planning and data center design, as well as build and migration. And we'll learn from an IT leader at Valero how they managed their project.

Please join me in welcoming our guests today. We're here with Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP. Welcome to the show, Cliff.

Cliff Moore: Thanks, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Hello, John.

John Bennett: Hi, Dana.

Gardner: We're also here with John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. Welcome to the show, John.

John Vann: Hello, Dana. Thanks a lot.

Gardner: Let's go to you, John Bennett. Tell us why data center transformation is at an inflection point, where data centers are in terms of their history, and what the new requirements are. It seems to be somewhat of a perfect storm: there's a need to move, and yet things still are really not acceptable?

Modern and efficient

Bennett: You're right on that front. I find it just fascinating that if you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point you made about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization, not only helping them shift their spending away from management and maintenance and into business projects and priorities, but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

The issue that organizations have in trying to address those mandates, of course, is that the legacy infrastructure and environments they have, the applications portfolio, the facilities, etc., all hinder their ability to execute on the things they would like to do.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

It turns out that one of the things that gets in the way, both from a cost perspective and from a supporting-the-business perspective, is the data centers themselves. Customers can find themselves, as HP did, having a very large number of data centers. We had 85 around the world, because we grew through acquisition, we grew organically, and we had data centers for individual lines of business.

We had data centers for individual countries and regions. When you added it up, we had 85 facilities and innumerable server rooms, all of them requiring administrative staff, data center managers, and a lot of overhead. As part of our own IT transformation effort, we've brought that down to six.

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. One wag has characterized this as the "$15 million server," where you keep needing to grow and support the business. All of a sudden, you discover that you're bursting at the seams.

Or, you can be in California or the U.K. The energy supply they have today is all they'll ever have in their data center. If they have to support business growth, they're going to have to deal with it by addressing their infrastructure strategies, but probably also by addressing their facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

Gardner: John, it also strikes me that the timing is good, given the economic cycle. The commercial market for land and facilities is a buyer's market, and that doesn't always happen, especially if you have capacity issues. You don't always get a chance to pick when you need to rebuild. And then, of course, money is cheap nowadays too.

Bennett: If you can get to it.

Gardner: The capital markets are open for short intervals.

Signs of recovery

Bennett: We certainly see, and hope to see, signs of recovery here. Data center location is an interesting conversation, because of some of the factors you named. One of the things that is different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural heating of their environment. Of course, you might have volcano risk.

You have people who are attracted to very boring places, like the center of the United States, which don't have earthquakes, hurricanes, wildfires and things that might affect facilities themselves. Or, as I think you'll discover with John at Valero, you can choose to build the data center right near corporate headquarters, but you have a lot of flexibility in it.

The issue is not so much access to capital markets as it is that any facilities project is going to have to go through not just the senior executives of the company, but probably the board of directors. You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost. You're going to have to justify it in terms of the returns on investment (ROIs) expected in the business, as they make choices about how to manage and source funds as well.

So, it's a good time from the viewpoint of land being cheap, and it might be a good time in terms of business capital being available. But it might not be a good time in terms of investment funds being available, as many banks remain more reluctant to lend than it appears.

The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.



Gardner: The variables now for how you would consider, plan, and evaluate are quite different than even just a few years ago.

Bennett: It's certainly true, and I probably would look to Cliff to say more about that.

Gardner: Cliff Moore, what's this notion of fit-for-purpose, and why do you think the variables for deciding to move forward with data center transformation or redesign activities are different nowadays? Why are we in a different field, in terms of decisions around these issues?

Moore: Obviously, there's no such thing as a one-size-fits-all data center. It's just not that way. Every data center is different. The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

No growth modeling

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. So that's a major consideration. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical as well.
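Cliff's claim that energy spend can overtake hardware spend is easy to sanity-check with rough arithmetic. The sketch below uses illustrative assumptions -- the server wattage, PUE, energy price, service life, and purchase price are not figures from the podcast -- so plug in your own numbers.

```python
# Back-of-the-envelope: lifetime energy cost of a server vs. its purchase
# price. Every figure here is an illustrative assumption, not podcast data.

SERVER_WATTS = 450        # assumed average draw of one server
PUE = 2.0                 # assumed power usage effectiveness (cooling/overhead)
PRICE_PER_KWH = 0.10      # assumed electricity price, USD
YEARS = 4                 # assumed service life
PURCHASE_PRICE = 3_000    # assumed server purchase price, USD

hours = YEARS * 365 * 24
energy_kwh = SERVER_WATTS / 1000 * PUE * hours
energy_cost = energy_kwh * PRICE_PER_KWH

print(f"Lifetime energy: {energy_kwh:,.0f} kWh -> ${energy_cost:,.0f}")
print(f"Purchase price:  ${PURCHASE_PRICE:,}")
print(f"Energy exceeds purchase price: {energy_cost > PURCHASE_PRICE}")
```

Note how the facility overhead (the PUE multiplier) roughly doubles the bill; that multiplier is exactly what better data center design attacks.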

Gardner: We should explain the notion of fit for purpose upfront for those folks who might not be familiar with it.

Bennett: With fit for purpose, the question in mind is the strategic one: what data center strategy is appropriate for a particular organization? If you think about the business services that are being provided by IT, it's not only what those business services are, but how they should be sourced. If they're being provided out of entity-owned data centers, how many and where? What's the business continuity strategy for those?

It needs to take into account, as Cliff has highlighted, not only what I need today, but that buildings typically have an economic life of 15 to 25 years. Technology life cycles for particular devices are two or three years, and we have ongoing significant revolutions in technology itself, for example, as we moved from traditional IT devices to fabric infrastructures like converged infrastructure.

You have these cycles upon cycles of change taking place. The business forecasts drive the strategy and part of that forecasting will be sizing and fit for purpose. Very simply, are the assets I have today capable of meeting my needs today, and in my planning horizon? If they are, they’re fit for purpose. If they’re not, they’re unfit for purpose, and I'd better do something about it.
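A minimal sketch of the sizing check Bennett describes, assuming a simple compound-growth forecast; the capacity and growth figures are hypothetical, chosen only to illustrate the fit-for-purpose test.

```python
# Fit-for-purpose check: project IT load over the planning horizon and
# compare it to the facility's power envelope. Figures are hypothetical.

def fit_for_purpose(current_kw, annual_growth, horizon_years, facility_kw):
    """Return (fit, first_year_over): whether projected load stays within
    capacity across the horizon, and the first year it does not."""
    load = current_kw
    for year in range(1, horizon_years + 1):
        load *= 1 + annual_growth
        if load > facility_kw:
            return False, year
    return True, None

fit, year = fit_for_purpose(current_kw=600, annual_growth=0.15,
                            horizon_years=10, facility_kw=1500)
print("Fit for purpose:", fit, "| first year over capacity:", year)
# -> Fit for purpose: False | first year over capacity: 7
```

The same shape of check applies to space and cooling; the point is that an asset can be perfectly adequate today and still fail the test within the planning horizon.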

Gardner: We're in a bit of a time warp, Cliff. It seems that, if many data centers were built 15 years ago and we still don't have a sense of where we'll be in 5 or 10 years, we're caught between not fitting the past and not quite knowing the future. How do you help people smooth that out?

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you’ve got to make sure that it fits within the strategic plan for the business.



Moore: Obviously, we’ve got to find out first off what they need -- their space, power, and cooling requirements. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well. This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board.

When a customer is looking to spend $20 million, $50 million, or sometimes well over $100 million on a new facility, you’ve got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for before they will commit to spending that kind of money.

Gardner: What does HP bring to the table? How do you start a process like this and make it a lifecycle, where that end goal and reduced risk play out to get the big payoffs that those boards of directors are interested in?

Moore: Well, our group within Critical Facilities Services comes to the table with the company's executives and looks not only at their space, power, and cooling requirements, but at the strategies of the business. What are the criticality levels of the various mission-critical applications that they run? What are their plans for the future? What are their merger and acquisition plans, and so on? We help them collaboratively develop that strategy for the next 10 to 15 years of the data center's future.

Gardner: It was pointed out earlier that one size doesn't fit all. From your experience, Cliff, what are the number one or two reasons that you’re seeing customers go after a new design for the data center, and spend that large sum of money?

Power and cooling

Moore: Probably the biggest reason we're seeing today is power and cooling. Of course, cooling goes along with power. We see more of that than anything else. People are simply running out of power in their data centers. The facilities that were built 5, 10, or 15 years ago just do not support the levels of density in power and cooling that clients are asking for going into the future, specifically for blades and higher levels of virtualization.

Gardner: So higher density requires more energy to run the servers and more energy to cool them, but you get higher efficiency, utilization, and productivity as the end result, in terms of delivering on the requirements. Is there a way of designing the data center that allows you to cut cost and increase capacity, or am I asking too much of this process?

Moore: There certainly are ways to do that. We look at all of those different ways with the client. One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers based on the efficiency of gathering free cooling, for instance, from the environment. It was mentioned that Iceland might be a good location. You'd get a lot of free cooling there.

Gardner: What are some of the design factors? What are the leading factors that people need to look at? Perhaps, you could start to get us more familiar with Valero and what went on with them in the project that they completed not too long ago.

Moore: I'll defer to John for some of that, but the leading factors we're seeing, again, are space, power, and cooling, coupled with the tier-level requirement. What is the availability requirement for the facility itself? Those are the biggest factors we're seeing.

Some data centers we see out there use the equivalent of half of a nuclear power plant to run.



Marching right behind that is energy efficiency. As I mentioned before, the cost of energy is exorbitant when it comes to running a data center. Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive, as I'm sure John would tell you. One of the things that Valero is accomplishing is lower energy costs, as a result of building its own.

Gardner: Before we go to Valero, I have one last question on the market and some of the drivers. What about globalization? Are we seeing emerging markets, where there is going to be many more people online and more IT requirements? Is that a factor as well?

Moore: Absolutely.

Bennett: There are a number of factors. First of all, you have increasing access to the Internet and the increasing generation of complex information types. People aren't just posting text anymore, but pictures and videos. And, they’re storing those things, which is feeding what we characterize as an information explosion. The forecast for storage consumption over the next 5 to 10 years is just phenomenal.

Perfect storm

On top of that, you have more and more organizations and businesses providing more of their business services through IT-based solutions. You talked about a perfect storm earlier with regard to the timing for data centers. Most organizations are in a perfect storm today of factors driving the need for ongoing investments and growth out of IT. The facilities have got to help them grow, not limit their growth.

Gardner: John Vann, you’re up. I'm sorry to have left you off on the sidelines there for so long. Tell us about Valero Energy Corp., and what it is that drove you to bite off this big project of data-center transformation and redesign?

Vann: Thanks a lot, Dana. Just a little bit about Valero. Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels and renewables, and we're one of the largest ethanol producers. We have a wind farm up in northern Texas, around Amarillo, that generates enough power to fuel our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different from others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. Therefore, we have water and other potentially disruptive issues close to the data center -- and that was concerning, given where the data center is located.

We began to look for alternative places. We were also really fortunate in the timing of our data center review. HP was just beginning its build of the six big facilities that it ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began our design-and-build decisions for our data center.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space.



So, we really were fortunate to have experts give us some advice and counsel. We did look at collocation. We also looked at other buildings, and we even looked at building another data center on our campus.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space. As we did our economics, it was just better for us to be able to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Gardner: What, in your opinion, was more impactful -- the planning, the execution, the migration? I guess the question should be: what ended up being more challenging than you expected initially? Where do you think, in hindsight, you’d put more energy and more planning, if you had to do it all again?

Solid approach

Vann: I think our approach was solid. We had a joint team of HP and the Valero Program Management Office, and it went really well the way that was managed. We had design teams. We had people from networking architecture, networking strategy, and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn’t have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

If we had one thing to do over again, we would probably change the way we did our IP renumbering. That was a very complex exercise, and we didn’t start that soon enough. That was very difficult.

We'd probably also put more project managers on managing the project, rather than using technical people to manage it. Technical folks are really good at putting the technology in place, but they really struggle at putting good, solid plans in place. But overall, I'd just say that migration is probably the most complex.
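To give a flavor of why the IP renumbering Vann mentions is so fiddly, here is a hedged sketch of drafting an old-to-new subnet map with Python's standard ipaddress module. The subnet names and address ranges are hypothetical examples, not Valero's actual plan.

```python
# Sketch: draft an IP renumbering map that preserves each host's offset
# within its subnet. Subnet names and ranges are hypothetical examples.
import ipaddress

RENUMBER_PLAN = {
    "app-tier": ("10.1.10.0/24", "172.16.10.0/24"),
    "db-tier":  ("10.1.20.0/24", "172.16.20.0/24"),
}

def map_host(old_ip, plan):
    """Translate one address via its subnet's old->new mapping."""
    ip = ipaddress.ip_address(old_ip)
    for name, (old_net, new_net) in plan.items():
        old = ipaddress.ip_network(old_net)
        if ip in old:
            offset = int(ip) - int(old.network_address)
            new = ipaddress.ip_network(new_net)
            return name, str(ipaddress.ip_address(int(new.network_address) + offset))
    raise ValueError(f"{old_ip} is not covered by the renumbering plan")

print(map_host("10.1.20.57", RENUMBER_PLAN))  # ('db-tier', '172.16.20.57')
```

Even with a clean map like this, hard-coded addresses buried in application configurations are what make the exercise painful -- hence Vann's advice to start early.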

Power and cooling are just becoming an enormous problem.



Gardner: Thank you for sharing that. How old was the data center that you wanted to replace?

Vann: It was about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical inside the data center, we lost square footage.

Gardner: Do you agree, John, that some of the variables or factors that we discussed earlier in the podcast have changed, say, from just as few as six or seven years ago?

Vann: Absolutely. Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to be able to balance your data center with cooling and power, as well as UPS, generators, and things like that. It just becomes really complex. So, building a new one really put us at the forefront.

Gardner: Can you give us some sense of the metrics now that this has gone through and been completed? Are there some numbers that you can apply to this in terms of the payback and/or the efficiency and productivity?

Potential problems

Vann: Not yet. We've seen some recent things that have happened here on campus to our old data center, because of weather and just some failures within the building. We’ve had some water leaks that have actually run onto the data center floor. That's a huge problem that could have flooded our production data center.

You can see the data center beginning to fail with age. We've had some air-conditioner failures and some coolant leaking. I think our timing was just right. Even though we had been maintaining the old data center, things were just beginning to fail.

Gardner: So, certainly, there are some initial business continuity benefits there.

Vann: Exactly.

Gardner: Going back to Cliff Moore. Does anything you hear from John Vann turn on any light bulbs about what other people should be considering as they step up to the plate on these data center issues?

Moore: They certainly should consult John's crystal ball regarding the issues he's had in his old data center, and move quickly. Don’t put it off. I tell people that these things do happen, and they can be extremely costly when you look at the cost of downtime to the business.

You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications.



Gardner: Getting started, we talked about migration. It turns out that we did another podcast that focused specifically on data-center migration, and we can refer folks to that easily. What is it about planning and getting started, as you say, when people recognize that time might not be on their side? What are some of the initial steps, and how might they look to HP for some guidance?

Moore: We focus entirely on discovery early on. You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications, especially the mission-critical applications.

Typically, a move like John’s requires multiple of what we call move groups. John’s company had five or six, I believe. You simply cannot divide your servers up into these move groups without knowing what you might break by dividing them up. Those dependencies are critical, and that's probably the most common failing point.

Vann: We had five move groups. Knowing which applications go with which servers is a real chore in making sure that you have the right set of servers to move on a particular weekend. We also balanced it with downtime for the end customers, to make sure that we were not in the middle of a refinery turnaround or a major closing. Being able to balance those weekends, so we had enough time to make the migration work, was quite a challenge.
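The move-group exercise Moore and Vann describe is essentially a graph problem: servers that share application dependencies must travel in the same window. A minimal sketch, with an invented dependency map, that derives candidate move groups as connected components.

```python
# Sketch: derive candidate move groups as connected components of a
# server dependency graph. The dependency map is invented for illustration.
from collections import defaultdict

deps = {  # undirected: server A talks to server B
    ("erp-app", "erp-db"), ("erp-app", "ldap"),
    ("web-01", "web-db"),
    ("hist-01", "hist-db"), ("hist-db", "ldap"),
}

graph = defaultdict(set)
for a, b in deps:
    graph[a].add(b)
    graph[b].add(a)

def move_groups(graph):
    """Return lists of servers that must move in the same weekend window."""
    seen, groups = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, group = [node], []
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.append(n)
            stack.extend(graph[n] - seen)
        groups.append(sorted(group))
    return groups

print(move_groups(graph))  # two groups: the ERP/historian cluster, and the web pair
```

Note how one shared service (the directory server here) fuses two otherwise independent groups -- exactly the kind of hidden dependency that breaks a migration weekend if it goes undiscovered.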

Gardner: John Vann, did you take the opportunity to not only redesign and upgrade your data center facilities, but also, at the same time, to modernize your infrastructure or your architecture? You said you did quite a bit with virtualization already. Was this a double whammy in terms of the facilities as well as the architecture?

Using opportunities

Vann: Yes. We took the opportunity to upgrade the network architecture. We also took the opportunity to go further with our consolidation. We recently finished moving servers from refineries into San Antonio. We took the opportunity to do more consolidation and more virtualization, upgrade our blade farm, and just do a lot more work around improving the overall infrastructure for applications.

Gardner: I'd like to take that back to John Bennett. I imagine you're seeing that one of the ways you can rationalize the cost is that you're not just repaving a cow path, as it were. You're actually re-architecting and therefore getting a lot greater efficiency, not only from the new facility, but from the actual reconstruction of your architecture, or the modernization and transformation of your architecture.

Bennett: There are several parts to that, and getting your hands around it can really extend the benefits you get from these kinds of projects, especially if you are making the kind of investment we are talking about in new data center facilities. Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

It can be a big step forward in terms of standardizing your IT environment, which is recommended by many industry analysts now in terms of preparing for automation or to reduce management and maintenance cost. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

It's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost.



John talked about dealing with the IP addresses, but the physical networking infrastructure in a lot of old data centers is a real hodgepodge that's grown organically over the years. I guess you can blame some of our companies for having invented Ethernet a long time ago. But, it's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost. They all come in there.

I actually have a question for John Vann as well. Because they had a pretty strong focus on governance, especially in handling change requests, I'm hoping he might talk a little bit about that part of the design and build project.

Vann: Our goal was to hold scope creep to a minimum. We had an approval process, where there had to be a pretty good reason for a change or for a server not to move. We fundamentally used the word "no" as much as we could to make sure we got the right applications in the right place. Any kind of approval had to go through me. If I disagreed, and they still wanted to escalate it, we went to my boss. Escalation was rarely used. We had a pretty strong change management process.

Gardner: I can see where that would be important right along the way, not something you want to think about later or adding onto the process, but something to set up right from the beginning.

We’ve had a very interesting discussion about the movement in enterprise data centers, where folks are doing a lot more transformation, moving and relocating their data centers, modernizing them, and finding ways to eke out efficiencies, but also trying to reduce the risk of moving in the future and looking at those all-important power and energy consumption issues as well.

I want to thank our guests. We've been joined today by Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP. Thank you, Cliff.

Moore: Thanks, Dana. Thanks, everybody.

Gardner: John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Thank you, John.

Bennett: Thank you, Dana.

Gardner: And lastly, John Vann, Vice President, Technical Infrastructure and Operations at Valero Energy. John, I really appreciate your frankness in sharing your experience, and I certainly wish you well in all of that.

Vann: Thank you very much, Dana. I appreciate it.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Monday, April 05, 2010

Case Study Shows How HP Data Protector Notebook Extension Provides Constant Backup for Expanding Mobile Workforce

Transcript of a sponsored BriefingsDirect podcast on how data protection products and services can protect against costly data loss with less hassle for users.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Gain more information on HP Data Protector Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on protecting PC-based data in an increasingly mobile world. We'll look at a use case -- at Roswell Park Cancer Institute in Buffalo, NY -- for HP Data Protector Notebook Extension (DPNE) software and examine how backup and recovery software has evolved to become more transparent, reliable, and fundamentally user-driven.

Using that continuous-backup principle, the latest notebook and PC backup software captures every saved version of a file, efficiently transfers it all in batches to a central storage location, and then makes it easily and safely accessible for recovery by the user from anywhere. That's inside or outside of the corporate firewall.

We'll look at how DPNE slashes IT recovery chores, allows for managed policies and governance to reduce data risks systemically, while also downsizing backups, the use of bandwidth, and storage.

The economics are compelling. The cost of data loss can be more than $400,000 annually for an average-sized business with 5,000 users. Getting a handle on recovery cost, therefore, helps reduce the total cost of operating and supporting mobile PCs, both in terms of operations and in the cost of lost or poorly recovered assets.

To help us better understand the state of the art in remote and mobile PC data protection, we're joined by an HP executive and a user of HP DPNE software. Please join me in welcoming Shari Cravens, Product Marketing Manager for HP Data Protection. Welcome to the show, Shari.

Shari Cravens: Hi, Dana. Thanks for having me.

Gardner: We're also here with John Ferguson, Network Systems Specialist at Roswell Park Cancer Institute in Buffalo, NY. Welcome to the show, John.

John Ferguson: Hi, Dana. Thank you.

Gardner: Let's start with you, Shari. Tell me about the general state of the mobile workforce. Are we getting to the point where we're almost more mobile than stationary these days?

Backup increasingly important

Cravens: It's true. We started hearing from our customers a couple of years ago that PC backup was becoming increasingly important in their lives. Part of that's because the workforce is increasingly mobile, and flexibility for the workforce is at an all-time high. In fact, we found that 25 percent of staff in some industries operate remotely, and that number is growing pretty rapidly.

In fact, in 2008, shipments of laptops overtook desktops for the very first time. What that really means for the end user or for IT staff is that vast amounts of data now live outside the corporate network. We found that the average PC holds about 55,000 files. Of those 55,000, about 4,000 are unique to that user on that PC. And, those files are largely unprotected.

Gardner: Of course, we're also in a tough economic climate, and productivity is paramount. We've got more people doing more work across different locations. What is the impetus for IT to be doing this? Is there a real economic challenge here?

Cravens: The economics of PC backup are really changing. We're finding that the average data loss incident costs about $2,900, and that's for both IT staff time and lost end user productivity. Take that $2,900 figure and extrapolate that for an average company of about 5,000 PCs. Then, look at hard drive failures alone. There will be about 150 incidents of hard drive failure for that company every year.

If you look at the cost to IT staff to recover that data and the loss in employee productivity, the annual cost to that organization will be over $440,000 a year.



If you look at the cost to IT staff to recover that data and the loss in employee productivity, the annual cost to that organization will be over $440,000 a year. If that data can't be recovered, then the user has to reconstruct it, and that means additional productivity loss for that employee. We also have legal compliance issues to consider now. So if that data is lost, that's an increased risk to the organization.
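The arithmetic behind those figures is straightforward; the sketch below simply multiplies the quoted numbers. The raw product lands a bit under the quoted total, with the remainder presumably reflecting the reconstruction and compliance costs Cravens mentions for unrecoverable data.

```python
# Back-of-the-envelope from the figures quoted in the discussion.
COST_PER_INCIDENT = 2_900     # IT staff time + lost user productivity (quoted)
FAILURES_PER_YEAR = 150       # hard drive failures for ~5,000 PCs (quoted)

annual_cost = COST_PER_INCIDENT * FAILURES_PER_YEAR
print(f"Hard-drive failures alone: ${annual_cost:,}/year")  # $435,000/year
```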

Gardner: I suppose security also plays a role here. We want to make sure that when we do back up, it's encrypted, it's compressed, and we're not wasting valuable assets like bandwidth. Are there economic issues around that as well?

Cravens: Sure. We all have very sensitive files on our laptops, whether it's competitive information or your personal annual review. One of the things that's been suggested in the past was, "Well, we'll just save it to the corporate network." The challenge with that is that people are really concerned about saving these very sensitive files to the corporate network.

What we really need is a solution that's going to encrypt those files, both in transit and at rest, so that people can feel secure that their data is protected.

Gardner: Encryption doesn’t necessarily mean big, bandwidth-hogging files. You can do it with efficiency as well.

Cravens: Absolutely, with changed blocks only, which is what DPNE does.

Gardner: I think we understand the problem in terms of the economics, requirements for the data, sensitivity, and the mobility factors, but what about the problem from a technical perspective? What does it take in order to do something that’s simple and straightforward for the end user?

Historical evolution

Cravens: Let me back up a little bit and talk about how we got here. Historically, PC backup solutions have evolved from more traditional backup and recovery solutions, and there are a couple of assumptions there.

One, they employ things like regularly scheduled backups that happen every 24 hours, and sometimes once a week. They assume that bandwidth concerns aren't necessarily much of an issue. This creates some problems in an increasingly mobile workforce. People are generally not regularly connected to the network. They are at coffee shops, at home, or in airports. They're often anywhere but the office, and it's entirely too easy to opt out of that scheduled backup.

We’ve all had this experience. You're on a deadline, it's 10:00 a.m., and your backup window has popped up. You immediately hit "cancel," because you just can't afford the performance degradation on your PC, and it's really not an option anymore. HP has a unique approach to protecting information on desktops and PCs. Some data loss is going to be inevitable -- laptops get stolen or files are deleted -- but we don't think that means it has to be serious and expensive.

The concept behind HP Data Protector Notebook Extension is that we're trying to minimize the risk of that PC data loss, but we're also trying to minimize the burden to IT staff. The solution is to extend some of the robust backup policies from the enterprise to the client environment.

We’re protecting data no matter where the user is -- the home, the coffee shop, the airport.



DPNE does three things. One, it's always protecting data, and it's transparent to the user. It's happening continuously, not on a fixed schedule, so there is no backup window that's popping up.

We’re protecting data no matter where the user is -- the home, the coffee shop, the airport. Whether they are online or offline, their data is being protected, and it's happening immediately. The instant that files are created or changed, data is being protected.

Continuous file protection is number one. Backup policies are centralized and automated by the IT staff. That means that data is always protected, and the IT staff can configure those policies to support their organization's particular data protection goals.

Number two, no matter where they are, users can easily recover their own data. This is a really important point. Getting back to the concept of minimizing the burden to IT staff, DPNE has a simple, single-click menu. Users can recover multiple versions of a file without ever involving IT. They don't ever have to pick up the phone and call the Help Desk. That helps keep IT costs low.

Then, also by optimizing performance, we're eliminating that desire to opt out of your scheduled backup. The process is transparent to the user. It doesn’t impact their day, because DPNE saves and transmits only the changed data. So, the impact to performance is really minimized.
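To make "saves and transmits only the changed data" concrete, here is a minimal sketch of block-level change detection. This illustrates the general technique of changed-block backup, not DPNE's actual implementation.

```python
# Sketch of changed-block detection: hash fixed-size blocks and ship only
# the blocks whose hashes differ from the last backup. This illustrates the
# general technique, not DPNE's actual implementation.
import hashlib

BLOCK_SIZE = 64 * 1024  # 64 KB blocks (illustrative choice)

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes, new_data):
    """Yield (index, block) pairs for blocks that differ from the last run."""
    for i, h in enumerate(block_hashes(new_data)):
        if i >= len(old_hashes) or old_hashes[i] != h:
            yield i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]

previous = b"A" * BLOCK_SIZE * 3
current = previous[:BLOCK_SIZE] + b"B" * BLOCK_SIZE + previous[2 * BLOCK_SIZE:]
to_send = list(changed_blocks(block_hashes(previous), current))
print([i for i, _ in to_send])  # only block 1 changed -> [1]
```

Because only the middle block changed, only that block would cross the wire -- which is why the user barely notices the backup happening.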

Gardner: What about those times when folks are offline and are no longer connected, perhaps at a customer site or at that coffee shop? What's the process then?

Local repository

Cravens: That's a good question. DPNE has a local repository on each client and we established that to store active files. Whether you're connected to the network or not, data is captured and backed up locally to this local repository. This is important for accidental deletions or changes or even managing multiple versions of a file. You're able to go to the menu, click, and restore a file from a previous version at any point in time, without ever having to call IT.

Each client is then assigned to a network repository or data vault inside the network. That holds the backup files that are transferred from the client, and that data vault uses essentially any Windows file share.

The third element is a policy server. We talked about this a little before. The policy server allows IT staff to administer the overall system from a single web interface, and the centralized administration allows them to set file protection, encryption, and data vault policies to their particular specifications.

It also provides centralized reporting. Data vault usage, agent status, agent deployments, and license issues can be tracked through the policy server.
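Putting the three pieces together -- local repository, data vault, and policy server -- here is a hypothetical policy object of the kind a policy server might centralize. The field names and values are invented for illustration; this is not DPNE's actual configuration format.

```python
# Hypothetical illustration of a centrally administered backup policy.
# Field names and values are invented; this is NOT DPNE's actual format.
backup_policy = {
    "protected_patterns": ["*.docx", "*.xlsx", "*.pst"],  # open files included
    "excluded_patterns": ["*.tmp", "~$*"],
    "encrypt_in_transit": True,
    "encrypt_at_rest": True,
    "local_repository_quota_mb": 2048,    # client-side store for active files
    "data_vault": r"\\vault01\backups",   # any Windows file share, per Cravens
    "versions_to_keep": 10,
}
```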

The lack of open file protection in a lot of PC backup solutions is a huge gap that we can't ignore. Doing that in a way that doesn't overwhelm the system or create a lot of duplication is the way to go.



Gardner: I really like this idea of taking what's going on centrally, in terms of a lifecycle approach to data management and storage, and extending that out to these edges, regardless of where they are. It really cuts down on duplication. We have seen instances in the past where so much money is wasted because of duplication of data. This allows for a much more streamlined, managed, and governed approach.

Cravens: Absolutely. It's filling a gap that has been out there for a while by addressing things like open file protection. That's one thing that's very important for DPNE. Email is a really critical application for most organizations now.

The lack of open file protection in a lot of PC backup solutions is a huge gap that we can't ignore. Doing that in a way that doesn't overwhelm the system or create a lot of duplication is the way to go. It's really good for email PST files. DPNE ensures that PST files are saved and snapped, so we always have a copy of them. That works not just for Exchange, but also for Sage, big financial applications, or MySQL. Companies are using those to build home-grown applications. It works for pretty much any open file.

Gardner: Okay, let's go to John Ferguson and learn a little bit about how this has been applied in the real world. Tell me first about Roswell Park Cancer Institute, so we have a sense of the type organization that you are dealing with.

Finding the cure

Ferguson: Roswell Park Cancer Institute is the oldest cancer research center in the United States. We're focused on understanding, preventing, and eventually finding the cure for cancer. We're located in downtown Buffalo, NY. We have research, scientific, and educational facilities, and we also have a 125-bed hospital here.

Our researchers and scientists are frequently published in major, globally reported studies on various types of cancer and related research. A number of breakthroughs in cancer prevention and treatment have been developed here. For example, the PSA test, which is used for detecting prostate cancer, was invented here.

Gardner: Tell me about the challenges you have. It seems with all that research, a great deal of data, a lot of people are moving around between your hospital and research facilities. What was the challenge that you've been grappling with in terms of the data?

Ferguson: Well, the real challenge, as you mentioned, is that data is moving around. When you are dealing with researchers and scientists, they work on different schedules than the rest of us. When they are working, they are focused, and that might be here, off campus, at home, wherever.

They've got their notebook PCs, their data is with them, and they're running around, doing their work and finding their answers. With that data moving around and not always being on the network, the potential loss of data that could be the cure for cancer is something that we take very seriously and consider very important to deal with.

With that data moving around and not always being on the network, the potential loss of data that could be the cure for cancer is something that we take very seriously and consider very important to deal with.



Gardner: So, when you decided that this mobility issue was really something you couldn't ignore anymore, what was it that you looked for in a solution? What were some of the top requirements in terms of being able to solve this on the terms that you needed?

Ferguson: One of the big things was transparency to the user and being simple to use if they do need to use it. We were already in the process of making a decision to replace our existing overall backup solution with HP's Data Protector. So, it was just a natural thing to look at DPNE and it really fits the need terrifically.

There's total transparency to the user. Users don't even have to do anything. They're just going along, doing their work, and everything is going on in the background. And, if they need to use it, it's very intuitive and simple to use.

Gardner: Tell me about the implementation. How far in are you and to what degree do you expect to get to -- the number of seats, etc.?

Ferguson: In terms of the overall Data Protector implementation, we're probably about 40 percent complete. The DPNE implementation will immediately follow that.

A good test run

We anticipate initially just getting our IT staff using the application and giving it a good test run. Then we'll focus on key individuals throughout the organization, researchers, the scientists, the CEO, CIO, the people with all the nice initials after their name, and get them taken care of. We'll get a full rollout after that.

Gardner: It might be a little bit premature, as you're about 40 percent in, but do you have any sense of where this is going to take you on a total cost basis for the PCs and mobile notebooks themselves, or perhaps even applying that to the larger overall lifecycle data cost?

Ferguson: I don't think I can come up with actual cost numbers, but I do know that the exposure we have to the possibility of losing critical data is enormous. You can't put a price tag on avoiding the possibility that someone who has a cure for cancer on their laptop says, "Oh, we lost it, sorry." It doesn’t work that way.

Gardner: I suppose another intangible, but nonetheless powerful benefit, is this element of trust that people will be more trusting of these devices. Therefore, they'll become more productive in the way they use them, when you have given them this sense of a backup and insurance policy, if you will, on their work.

When people are working on something, they don't think to “save it,” until they're actually done with it.



Ferguson: Absolutely. In the past, we've told people to follow best practices: make sure that when you want to save your data, you save it on the network drive. That, of course, requires them to be on campus or connected remotely. A lot of thought has to go into that. When people are working on something, they don't think to "save it" until they're actually done with it. And DPNE provides us that version saving. You can get old versions of documents. You can keep track of them. That's the type of thing that's not really done, but it's really important, and they don't want to lose it.

Gardner: John, do you have folks in the legal department, or proprietary, intellectual-property-minded folks, who have some understanding of the benefits of this system?

Ferguson: We have plenty of people in our legal department and auditors, and there are all kinds of federal regulations that we have to adhere to. When it comes down to keeping track of data, keeping versions, and that type of thing, it's definitely important.

Gardner: Shari, as you're listening to John, is there anything that jumps out at you about how this is being implemented that you think highlights some of the values here?

Nothing more compelling

Cravens: John's comment about losing a laptop where you have a researcher working on a cure for cancer -- I can't think of anything that's more compelling in terms of how important it is to save the data that's out there on notebooks and laptops.

I don't think it matters how big your organization is -- small, medium, large -- a lot of that data is very valuable, and most of it is running around outside the network now. Even an average-sized organization could be spending hundreds of thousands of dollars a year that it shouldn't have to on IT support and lost productivity.

Gardner: Very good. Let me take a quick peek at the future. Most people seem to agree that the amount of data is going to continue to explode for some time. Certainly, regulations and requirements for these legal issues don’t go away. John, is this problem something that from your perspective is going to be solved or is this sort of an ongoing rising tide that you have to fight to keep up with?

Ferguson: When it comes to federal regulations, it always is a rising tide, but we've got a good solution that we are implementing and I think it puts us ahead of the curve.

Gardner: Shari, how about you? Do you see a trend in the future in terms of data quantity, quality, and security issues that will give us a sense of where this problem is headed?

Information is continuing to explode and that's not going to stop. In addition to that, the workforce is only going to get more mobile.



Cravens: Absolutely. Information is continuing to explode and that's not going to stop. In addition to that, the workforce is only going to get more mobile. This problem definitely isn’t going to go away, and we need solutions that can address the flexibility and mobility of the workforce and be able to manage, as John mentioned, the increase in regulations.

Gardner: Of course, there's that old important issue about getting those costs under control at the same time.

Cravens: Absolutely. Going back to the possibility that there are organizations spending hundreds of thousands of dollars now that they don’t need to -- with HP DPNE, they can actually avoid that.

Gardner: One thing I want to also hit on, Shari, is how you get started. If folks are interested in maybe doing a trial or trying out with this, what are some steps to get some hands-on experience?

Simple implementation

Cravens: HP Data Protector is very simple to implement. It snaps into your existing infrastructure. You don’t need any specialized hardware. All you need is a Windows machine for the policy server and some disk space for the data vault. You can download a 60-day trial version from hp.com. It's a full-featured version, and you can work with that.

If you have a highly complex multi-site organization, then you might want to employ the services of HP’s Backup and Recovery Fast Track Services for Data Protector. They can help get a more complex solution up and running quickly and reduce the impact on your IT staff just that much sooner.

Gardner: We've been looking at a use case for HP Data Protector Notebook Extension software and at how backup and recovery software has evolved. And we have a better understanding of its transparency and reliability. I particularly liked the integration with the backend policies and governance across the lifecycle of data. I think that's really going to be the big cost saver over time.

I want to thank our guests who are joining us in our discussion. We are here with Shari Cravens, Product Marketing Manager for HP Data Protection. Thank you so much, Shari.

Cravens: Thank you.

Gardner: And John Ferguson. I appreciate your input. He is the Network Systems Specialist at Roswell Park Cancer Institute in Buffalo. Thank you, sir.

Ferguson: Thank you. It's been a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Transcript of a sponsored BriefingsDirect podcast on how data protection products and services can protect against costly data loss with less hassle for users. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Gain more information on HP Data Protector Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.


BriefingsDirect Analysts Pick Winners and Losers of Cloud Computing's Economic Disruption and Enterprise Impact

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 51 on the impact of economics and business model disruption from cloud computing.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 51. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS business process management system.

Our topic this week on BriefingsDirect Analyst Insights Edition, and it is the week of March 8, 2010, focuses on cloud computing and dollars and cents. We'll dive into more than the technology, security, and viability issues that have dominated a lot of cloud discussions lately and move to the economics and the impact on buyers and sellers of cloud services.

When you ask any one person how cloud will affect their costs, you're bound to get a different answer each time. No one really knows, but the agreement comes when the questions move to, "Will cloud models impact how buyers and providers price their technology? And over the long-term what will buyers come to expect in terms of IT value?"

The agreement is that things are about to change, probably a lot and probably for good. The ways in which IT has been bought and sold will never be the same. So how does it become a different model? What comes when we move to a cloud-based, pay-per-value pricing, buying, and budgeting approach for IT? How does the shift to high-volume, low-margin services and/or subscription models affect the IT vendor landscape? How does it affect the pure cloud and software-as-a-service (SaaS) providers, and perhaps most importantly, how do cloud models affect the buy side?

Not many enterprises and their IT budgets are yet set up for this shift. Who is in charge of the budget structure, and the changes, and what does that portend for a corporate power or politics shift around IT procurement?

These are just a few of the easy questions we have for our panel today. Please join me now in welcoming Dave Linthicum, CTO of Bick Group, a cloud computing and data-center consulting firm. Welcome, Dave.

Dave Linthicum: Hey Dana, thank you for having me on.

Gardner: We're here also with Michael Krigsman. He is the CEO of Asuret and a blogger on ZDNet on IT failures. He also writes analyst reports for IDC. Welcome, Michael.

Michael Krigsman: Hey, Dana, how are you?

Gardner: Very good, thanks. We are also here with Sandy Rogers, an independent industry analyst. Welcome back, Sandy.

Sandy Rogers: Thanks, Dana, great to be here and participating in this timely discussion.

Gardner: Let me take my first question to you, Dave Linthicum. How much change should we expect in terms of IT economics as a result of cloud computing?

More money up front

Linthicum: We're probably going to have to spend more money initially. That's really what the takeaway is from the initial cloud-computing projects that I am involved in. At the end of the day, it's about strategic use of technology. Ultimately, cost reduction should be part of the result, but in getting there, we're going to have to spend additional dollars.

I was listening to your podcast with Peter Coffee, talking about service oriented architecture (SOA) and cloud computing, and he said something that was very profound. The fact of the matter is that, if you're looking for cheap IT, we can give you cheap IT. However, you're not going to be able to keep up with the competitive value that IT needs to bring to your enterprise. To get that competitive value, you're going to have to spend additional money.

The myth is that cloud computing is always going to be less expensive. I think cloud computing typically is going to be a better, more strategic, more agile architecture, but it's also typically going to be more expensive, at least at the outset.

Gardner: Why is that? Why is it more expensive at the outset?

Linthicum: Because it does require lots of changes. As I write in my book, you're going to have to redo your infrastructure to leverage newer architectural patterns, such as SOA, before you can get out and access the services that are available to you on demand in the cloud, and that's typically very expensive. So that's an expense unto itself.

You're going to have to retrain and re-skill your people, from within your data center all the way up into your executive ranks, on what cloud is able to do and how to manage, govern, and secure it. And you're going to have to pay the cloud computing providers, which in many instances are going to be less expensive than on-premises systems, but in many other instances are going to be much more costly.

Ultimately, you get to a much better, higher value strategic architecture which is going to add more value to the business, but it's going to cost you some additional dollars to get there.

Gardner: So it seems once again we're asking IT planners and those in charge of IT direction and strategy to make a bet along with the industry, which is to say: you'll have to spend up front, but you'll get something far better further down the road. Do you think they're receptive to that? Is there enough trust in cloud computing to make that kind of bet?

Linthicum: Some are receptive to that, and they understand that a long-term strategic direction for the technology can bring value to the business. People and companies who think strategically typically find cloud computing to be an easy sell, and it's something they want to buy to add value to their existing IT.

Companies that think tactically, in quarter-to-quarter expenses, and consider IT an expense they'd rather not have to incur, are going to fall by the wayside with cloud computing. They're just not going to get it.

It's very much like the Internet was in the mid-'90s. Suddenly, it was a big, huge deal, and companies that had gotten on board four or five years earlier were leading the market, while companies that were trying to play catch-up football in 1999 and 2000 found that the market had left them behind. Many of those companies just went out of business, because they didn't see the wave coming. Cloud computing is going to be very much like that.

Little bit of confusion

Gardner: Sandy Rogers, it sounds like there is a little bit of confusion. Dave Linthicum says it's the strategic, long-term thinkers who get cloud, and yet in some of the discussions I hear, it's the people who want to get out of the IT business in enterprises who are attracted to cloud. Can you have it both ways?

Rogers: One of the things that Dave noted was the idea of architecture, which is very important. There are particular use cases building now around leveraging cloud, yet the technology and the architectures need to mature before you can really think about transitioning all of your IT into the cloud.

It's learning where you're going to need that elasticity, what is the short-term and long-term outlook for the types of solutions that are being built, and -- to Dave’s point -- what might be strategic to your organization in the long run versus what you might need to get tactically out the door.

It's a balancing act. Turning to cloud is one of many options for organizations, and it's learning when to use it and how to use it correctly.

Gardner: So it sounds as if some of those people who want to offload things, perhaps a certain set of applications, thinking of SaaS more than what we might consider pure or infrastructure-related cloud, might get what they want, which is to offload apps and maybe cut costs, while the long-term thinkers, the strategists, could also look architecturally at cloud and see a much more agile IT capability in the future.

Rogers: Right. Again, it's being able to look at all the different implications as you scale out, and at who the users of the technology are going to be. A lot of the innovation that we see happening in the cloud is really from other providers that are starting to build their businesses on the cloud, to leverage that partnership and the network that's starting to be created there.

They're learning that there is a web-scale business to be obtained out there, and that's really where we are seeing the biggest innovation. A lot of the enterprises are going to then learn from those organizations that have to act at web scale and understand which are the right use cases to put out there and how to leverage it.

What is also really interesting to note is that it's more than just technology. It's really transitioning to engage with services and service providers. Those who are attempting to move out onto the cloud are learning that that's a big piece of the puzzle. Many technology providers have to grow into the role of a service provider.

Gardner: Sandy, we saw a lot of similar promises or expectations around SOA, say four or five years ago -- if you do this, later on you will really benefit. That was a hard sell. Now we're in a recession and we have tight budgets. Isn’t cloud going to be a hard sell as well?

Rogers: In some instances, in the short term, it's an easier sell for those organizations that are looking very tactically at what they'll be able to relinquish out of their capital expenditures. But those who are saying, "I need to grow, I need to be more agile," are the ones who are really looking at the long-term benefits of cloud.

Gardner: Michael Krigsman, you've been studying how IT failures manifest themselves and are difficult to understand. When you listen to these discussions about cloud computing and you hear things about shifts in cultures and budgets, no one really knows what the cost implications would be, but we are pretty sure that over time it's going to be better. Do you have any kind of tingles in your antenna, any lights flashing? What do you sense?

Hopes and dreams

Krigsman: Hopes and dreams are always delightful, in a sense, but we've been talking so far about IT value. Dave brought up the notion of cheap IT, and I ask the question: is cheap IT really the goal? I realize that's not what Dave was proposing, but I think for many organizations there is this tactical sense, as Sandy was saying, to embrace the cloud, because they say, "Our costs are going to go down in the very short term."

To me, the real question, the longer-term strategic question, is, "How does this new IT infrastructure map onto our business processes and our business requirements looking forward into the long term?" There are some mismatches and mismatched expectations in that domain.

Gardner: What happens when we have mismatched expectations?

Krigsman: When you have one group that is expecting certain types of outcomes and results, and another group that is capable of delivering results that don't match the first, namely buyers and sellers, then the end result is predictable failure or disappointment somewhere down the line.

Gardner: With that knowledge and hindsight into other business and IT activities, Dave Linthicum, what do you think we need to do to prevent that potential failure? How do we match expectations of buyers and sellers? How do we accomplish a sense of value throughout the progression to cloud?

Linthicum: First and foremost is to set the expectations as to what value this is going to bring and where the value is going to come from. We've had a tendency to focus on reducing cost over the last few years, with the recession and all, but ultimately cloud computing and SOA are about bringing strategic value back into the business in the form of IT. The ability to align your IT resources to the needs of the business quickly, get into markets fast, delight customers, sell more, and create supply-chain integration systems that provide frictionless commerce is really where the value is in this.

It's not about the number of servers you can save, the number of virtualized systems you can have, or the number of public clouds you can sign up with. It's about moving to that kind of an architecture. The traditional architectures that exist in enterprises today are highly dysfunctional. They're very fragile. They're overly complex. They can't be changed easily, and that puts a limitation on the business.

What we're moving to is something that's much more configurable and agile. Leveraging cloud computing is just a target architecture to get there. It doesn't necessarily mean that it's going to provide less expensive IT, but it should always bring value to the business.

The first thing I do is walk someone through what this really means to their IT architecture, what it really means to the business, and how it's going to provide value back to the business. That comes partly in hard costs, in other words, the dollars and cents around IT expenditures, which typically may bump up a bit initially as you do the strategic work.

Where the money is

More importantly, it comes in strategic value: the ability to get more customers, build more supply chains, and add more value to the core business is really where the money is made. Those expectations need to be set in the minds of the people who are going to pay the bills, the stakeholders. We're just doing an awful job of that right now.

Gardner: It strikes me that it's hard enough with on-premises IT to get a handle on what your actual costs are. There's the short-term cost. There's cost over time. There are four- to five-year cycles of maturation around technologies. There are the operations and maintenance budgets on top of the initial outlay. It's really hard to ask any enterprise to say precisely what IT costs.

Don't we muddy the waters even further when we add cloud computing to on-premises and hybrid models, mixing traditional on-premises pricing with subscription-model pricing? Will we lose track of how to even know what IT costs as a result of cloud?

Krigsman: You're talking about muddying the waters and bringing complexity. Complexity exists. It's here today. If you're not able to keep track of your costs and expenses now, and you add in new elements, the situation is going to get worse. But, the situation is pretty bad right now. So, from that standpoint, what's the big deal?

Gardner: Is there an opportunity to make it better?

Rogers: In some respects, cloud providers, because they are in the business of providing a service, are starting to become much more transparent regarding usage, in order to help their customers make decisions and plan for the future. In a way, the ability to correlate the computing that's being used with its value to the business may actually advance with cloud, and put a lot of pressure on those providing computing resources on-premises to provide the same kinds of metrics.
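
To make that transparency concrete, here is a minimal sketch in Python -- with entirely hypothetical rates and usage records, not any particular provider's billing API -- of the kind of chargeback report that metered cloud services make possible, correlating raw consumption with the business function that drove it:

    # Hypothetical per-unit rates -- illustrative only, not real pricing.
    RATES = {"cpu_hours": 0.12, "gb_stored": 0.10, "gb_transferred": 0.15}

    # Metered usage records, tagged by the business function that consumed them.
    usage = [
        {"function": "order-processing", "cpu_hours": 1200, "gb_stored": 500,  "gb_transferred": 80},
        {"function": "analytics",        "cpu_hours": 3400, "gb_stored": 2000, "gb_transferred": 10},
        {"function": "customer-portal",  "cpu_hours": 800,  "gb_stored": 120,  "gb_transferred": 300},
    ]

    def monthly_cost(record):
        """Dollar cost of one function's metered consumption for the month."""
        return sum(RATES[k] * v for k, v in record.items() if k in RATES)

    for r in usage:
        print(f"{r['function']:<18} ${monthly_cost(r):>9,.2f}")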

Gardner: Dave Linthicum, does a service-level agreement (SLA) model with a price card the size of a postcard help, compared to the licensing, maintenance, and other hidden gotchas that companies have unfortunately become accustomed to in the on-premises procurement process?

Linthicum: It actually does help. Even some of the on-premises guys sold services, basically renting software. That was a nice, innovative model. Now we have the same sort of thing within the cloud computing universe.

The pay-as-you-go model of cloud computing, even though it can be more expensive in many instances when you amortize the cost over many years, is something that's attractive, at least to United States IT. It's not always attractive to foreign corporations, but it definitely is in the United States.

We like the pay-as-you-go cable bill kind of thing that we get, and also the ability to turn the stuff off or move away from it, if we need to, without having a big footprint already in the data center and things we need to deinstall and millions of dollars of hardware that we have to sell on Craigslist if the thing doesn’t work out.
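
As a back-of-the-envelope illustration of that amortization point, here is a minimal sketch in Python, using entirely made-up figures, that compares cumulative pay-as-you-go spend against an upfront purchase plus annual maintenance. With these assumed numbers, cloud stays cheaper through year four and on-premises wins from year five on; the crossover depends entirely on the inputs:

    # All figures are hypothetical, for illustration only.
    CAPEX = 500_000         # upfront on-premises hardware and licensing
    MAINTENANCE = 75_000    # annual on-premises support and operations
    SUBSCRIPTION = 180_000  # annual pay-as-you-go cloud spend

    for year in range(1, 8):
        on_prem = CAPEX + MAINTENANCE * year  # cumulative on-premises cost
        cloud = SUBSCRIPTION * year           # cumulative pay-as-you-go cost
        cheaper = "cloud" if cloud < on_prem else "on-premises"
        print(f"Year {year}: on-prem ${on_prem:,} vs. cloud ${cloud:,} -> {cheaper} is cheaper")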

The selling point

That becomes a selling point and really is part and parcel of the value of cloud computing. But it also can be the Achilles' heel of cloud computing, because ultimately people are going to make decisions around financial metrics that may not be realistic. If you look at those financial metrics in light of the requirements of the business, in many instances people are buying cloud computing because of the cost model and not necessarily because of the strategic value it's going to have for the architecture and, therefore, for the business.

Gardner: In talking with some folks recently, I've heard them say that the move to cloud computing -- planning, thinking, and architecting for cloud computing -- actually helps companies discipline and modernize themselves around security, green and energy costs, governance and management, and even business process management and people-productivity issues. Does it make sense to anyone that there are these other paybacks from cloud computing beyond the dollars?

Rogers: What I have seen is that, immediately upon speaking about cloud computing, the idea of SOA comes up much more. What was very difficult to get across to the business, and to varying roles within the IT ecosystem, was what SOA and service orientation are all about. Cloud is really giving organizations that use case, and a better way for them to understand and embrace it. In that respect, it definitely is moving the bar forward, with a set goal in mind.

Gardner: Wait a minute. So what you are saying is that cloud computing is the killer application of SOA.

Rogers: It always has been when you think about cloud computing in a broader sense. It's really about taking advantage of network resources and services, wherever they exist, and to be able to provide and scale to whatever is necessary to support a business process or business function. It's the sharing of the resources. It's that next step in the shared services model, only at a much grander scale.

Gardner: Michael Krigsman, do you see any indications from your work that the conceptualization, even the theory, around cloud computing is helping organizations tidy up areas that have been messy, such as making security holistic across IT activities and treating governance as more of a holistic undertaking? What do you think?

Krigsman: I think that for those processes that are automated with cloud, the cloud vendors have security built-in, for example. From that standpoint, security becomes easier. But, there really are no cloud vendors today that I can think of that can support the backbone infrastructure of a large corporation across all the functions.

You end up today still having the security problems and so forth, but now your environment is mixed. Some of your functions are in the cloud and some are on-premises, and when you add it all together, at least in the short run, there is this greater complexity.

Ultimately, if you had a theoretical vendor that could support a wide range of processes, and it had a unified security model and so forth, it would be simpler, but for large organizations today, I don't see that simplicity.

Gardner: Dave Linthicum, any thoughts on this notion that cloud computing provides an on-ramp to discipline at greater scale across some of these other major challenges, like governance, security, energy consumption, and business-process productivity?

Linthicum: Actually it does. I'm bullish on cloud computing being a catalyst for architectural change and typically for the better. So cloud, to the point just made, is not great at security and governance as of yet, but in many instances it's much better than the current security and governance in lots of these existing enterprises, which is poorly defined or nonexistent.

Improvement model

Ultimately, as people revamp their architectures to leverage cloud, moving into SOA and looking at cloud as an architectural option for bits and pieces of their data and their processes, they go through an improvement model.

They go through some architectural changes, create new governance models, and create new security models. They leverage identity management versus simple encryption. They learn to be more secure. If they didn't have a chief security officer, they may now have one as they move into cloud.

The target systems that are using cloud computing, the target architectures that are leveraging cloud computing, are almost always more secure than the traditional systems from which they came. That doesn't mean they're completely secure and without issues, especially on the cloud computing side. But people make logical choices, based on security models, about which pieces of information and which processes to run in the cloud and which to run on-premises. Typically, if they're revamping into a new architecture, they're going to be more secure and better governed, provided the architects know what they're doing.

Gardner: I subscribe to that as well. This is an opportunity for a catalyst for some of the shifts we've been looking for, for some time; this sort of pulls it together. It's like the web: just as that word pulled people together to understand the Internet, cloud is a word that helps pull people together to understand some of the major shifts across a number of different aspects of IT.

Let's look at some of the shifts. It strikes me that one of the things that's going to have to shift in many enterprises is the way IT organizations behave and the role they play. Many IT organizations focus on operations and maintenance as their core reason for being, with procurement, development, and customization as an add-on. Cloud, and the movement to cloud, can shift that.

The operations and maintenance side becomes more of an add-on, and the role of IT becomes more that of definer of the SLAs, the enforcer, governor, and policy manager for what constitutes good computing, while opening up more customization, flexibility, and agility to the business users.

If you subscribe to that, or even if you don't, Michael Krigsman, what are your thoughts about this shift within IT culture? Isn't it an important and necessary step to prevent failure as you move toward cloud?

Krigsman: Well, there is no question, as Dave just mentioned, that driving toward cloud changes the architecture and requires proper governance. The lack of governance that exists today across the industry is pretty startling. So, as organizations move in this direction, there is simply no question that the cultural dimension, getting IT to work more effectively with the business side and so forth, must come along with it.

If it doesn't, then, in the end, the solutions that are built with cloud will still have the same set of problems from a business standpoint that current IT solutions have today. This has nothing to do with technology. This is a matter of collaboration and communication across these various information silos.

Gardner: So, for your company, Asuret, it's important to measure the progress you might be making at that cultural level, if you want to recover the investments you've made on the technology side?

Data to work with

Krigsman: Regardless of architecture, you need to have data to work with if you want to make the right decisions, not just ad hoc guesses, and you want to get people on the same page. This aspect has nothing to do with technology. However, the drive toward a unified architecture can be a great motivator to get different parts of an organization focused on these very important issues.

Gardner: Sandy Rogers, you worked with a lot of enterprise IT departments back when you were at IDC in another capacity. Do you see them making this cultural shift, and do you think IT would welcome moving beyond maintenance and red-light/green-light management into, "Let's define the best way to do IT and then give the business the tools it needs to succeed"?

Rogers: A lot of organizations have already been trying to do that, but they were constrained by their own internal resources. In a way, cloud opens up the toolbox for them to help solve problems and be a partner to the business.

One thing we're finding from those cloud service providers that originally targeted the end business customer is that they're working with the CIOs and the IT departments more. They're working through those issues of security and having backup contingency plans.

It's just a state of education that varying parties within the IT ecosystem have to come on board and understand how to leverage this.

One of the biggest points -- and Mike brought up the issue -- is that even if there were a vendor out there that could support all of this, it's still a mixture of different technologies that have to come together. That's always been one of the biggest, most complex roles that IT needs to serve.

Right now, there are a lot of dependencies on specific technologies internally. A lot of organizations do not want to make those same mistakes with external providers. They're really looking to the IT group as an advisor to guide them and help them in the decisions moving forward.

Gardner: Dave Linthicum, in your work with Bick Group, in terms of your customer base, are there any lessons or wisdom to be drawn from how IT perceives itself and may shift in its definition of its role and character as it moves to cloud? Are there any success indicators in terms of the culture?

Linthicum: The cultures that can adapt to these changes, and that understand the investment in these changes is strategic to the business, are really the cultures that are going to win the game.

In moving to cloud computing, the patterns are very similar to on-premises systems. We store stuff, we have databases, and it's just a matter of where they reside. Through economies of scale, shareability, and the very innovative mechanisms available in the cloud, we can get a leg up in the way architecture can be constructed.

Evolution not revolution

I'm seeing more of an evolution than a revolution. People who are evolving, leveraging bits and pieces of their architecture to move into cloud computing where it makes sense, embracing the innovative nature of the cloud and what's going to work for them, are the ones who win.

The ones who are going to lose are those with the same kind of behavior I saw in the early and mid-'90s, when the web started to pop up. Those who folded their arms and said, "We're never going to have the Internet running in this organization. We're never going to be on the web," ultimately missed out. We're seeing a very similar revolution today.

It's a little different. It's more about IT resources than content. But go forward five years, and you're going to find that the winning companies are the ones that are more innovative around their IT infrastructure, inclusive of cloud computing.

They're going to be more integrated with their suppliers, their vendors, and their customers. Their customers are going to be delighted with them because of the speed of response: they're able to get information to them and to get products, goods, and services to them. They're going to be the real winners there.

The people who have that kind of innovative nature around these changes, who can figure out how it functionally works within the enterprise, and who can then evolve their architecture in that direction, are going to be the winners.

The losers are the arm folders and the guys who push back on any kind of technology that’s out there, without understanding the value of that technology and how it fits in the context of their business.

Gardner: I suppose there's another constituency in the enterprise that needs to be brought on board with all of this, and that would be the budgeters, those who are in charge of the money and how it's spent, and even of the planning around how budgets operate and/or change.

The reason I bring that up is that I have a cousin who is an IT executive at a telecommunications company. I can't describe it more than that, given that he doesn’t have permission to talk in public.

One of his frustrations as an IT person is that he wants to go out and explore and experiment with cloud and services, but the budgets don't have any line items for him to do that. The people he reports to financially don't like the idea of opening up a whole new set of line items around "cloud" when you can't take from Peter, that being the already existing IT budget, to pay Paul.

So, to your point, Dave, you need to spend a little and invest a little more to get somewhere. He doesn't have the means to get those monies, and he's having a hard time explaining to the budget people why they should provide them.

What message can we take in terms of cloud computing for the business accounting, CFO types, whose job of course it is to keep costs as low as possible?

Fundamental point

Krigsman: Can I just make a comment, responding to this and amplifying something that Dave just said? This is a fundamental point: the cloud computing winners are going to be those who combine architectural vision and discipline with superior governance, and who are also capable of making the adaptive cultural and business-transformation changes you were just talking about, things like budgeting, for example. Success in the cloud will require a mixture of all of these things together.

Gardner: So, perhaps this is a board-level discussion, and not an IT discussion or even a business management, process management, or organizational discussion. It has to go pretty much to the top, which I guess is what we used to say about SOA as well.

Rogers: Exactly. One of the things that we saw was that there needs to be a real business case to utilize any new technology and any new architecture. Nothing is really different here. It's looking into what the organization requires and targeting use cases that can be tested out, and then proven again when they need to scale.

Gardner: I suppose another way to get the attention of the board is through the competitive issues. If your competitor does cloud computing really well and you don’t -- you're folding your arms, as Dave says -- what's the likely competitive result?

Dave Linthicum, any thoughts about a bifurcation in the market between those who embrace cloud computing and those who don't, and whether that will get the attention of boards across the board?

Linthicum: It's going to be the people who win in the market around cloud computing, very much like -- I keep going back 10 years -- those who won on the web. They're going to really get the attention of the board. Everybody has a tendency to follow, not necessarily lead. I think they're waiting for 100 companies to be successful with the technology before they start investing and moving forward. I keep hearing that over and over again.

It's going to be a rather tough sell within these boards of directors, among the people who are making critical decisions around the utilization of new, strategic technologies, inclusive of cloud computing. You really need to lead with the value. You need to understand that there has to be a commitment to return on investment (ROI) from IT.

They need to put some skin in the game, committing to the fact that this is actually going to have some kind of core benefit to the business. If they're not willing to do that, and not willing to take the risk, I don't think the stakeholders are going to sign off on it.

If you're in the IT world today, you need to understand that, if you're moving to a new architecture, you have to commit to a certain amount of value coming back to the business. Typically, it's going to be a five-year horizon in the United States, perhaps a ten-year horizon in Asia-Pacific. But that value has to be shown and has to be returned. If it's not returned, then ultimately the effort is going to be considered a failure.
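
To make that kind of commitment concrete, here is a minimal sketch in Python, again with invented figures, of the five-year payback arithmetic a stakeholder would expect to see: an upfront investment, projected annual returns, and the resulting cumulative net value and ROI:

    # Hypothetical transformation: all figures invented for illustration.
    INVESTMENT = 2_000_000  # upfront architecture and migration spend
    ANNUAL_BENEFIT = [300_000, 500_000, 700_000, 800_000, 900_000]

    cumulative = -INVESTMENT
    for year, benefit in enumerate(ANNUAL_BENEFIT, start=1):
        cumulative += benefit
        print(f"Year {year}: cumulative net ${cumulative:,}")

    roi = (sum(ANNUAL_BENEFIT) - INVESTMENT) / INVESTMENT
    print(f"Five-year ROI: {roi:.0%}")  # positive means the commitment was met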

Start now

You need to start committing to this stuff right now and putting some skin in the game. A lot of people in these IT organizations are very politically savvy and want to protect their positions, and few of them want to put that skin in the game right now.

Gardner: Another way that we have seen disruption in the market -- and I'll double down on your comparison to the early days of the web 15 years ago -- is through newcomers to the market, whether they're startups or existing businesses that changed their strategy and entered new and different markets. Is there a greenfield advantage here that's significant enough to convince people of the power and value of cloud computing as a productivity and agility enhancer?

Will newcomers to some markets that embrace cloud efficiencies get such a leg up? We've seen this happen with Amazon in retail, Google in advertising, and of course there are a number of other examples across ecommerce. Is that perhaps a likely way in which cloud computing from a business standpoint will become a bit more of a priority?

Linthicum: I think we're going to see a kind of unfairness in business. People who are starting businesses these days and building them around cloud infrastructures are learning to accept that a lot of their IT is going to reside out on the Internet, and to embrace the cost-effective nature of that. They're going to have a huge strategic advantage over legacy businesses, people who've been around for years and years.

As they grow, start to go public, and get up to the half-billion mark as a business, they're going to find that they have a much greater cost and price advantage over their competitors, and they'll ultimately just eat their lunch.

We're going to see that, not necessarily now, because those guys are typically smaller and just up and coming, but in five years, as they start to grow up, their infrastructure is going to be much more cost-effective, and they're just going to run circles around the competition.

Gardner: So I guess we could call this the Barbarians at the Gate effect. How do you see that, Sandy?

Rogers: What's really different is that startups in the space are learning how to run their businesses, beyond just their technology, much earlier. How to manage that partnership ecosystem is really important to how they can capitalize and grow their business. Given the low-cost, per-usage type of charging that cloud providers initially engage in, those providers carry a lot of startup cost. It's what I've heard venture funders call "getting that flywheel going."

They're looking at the short term, to ramp up, promote that agility, and get the low-hanging fruit, and then at moving into that broader scale. There are going to be a lot of traditional companies out there looking at these vendors and learning from them, and it's really about being able to garner that trust.

Tried and true

Large enterprises will often look to the tried and true, because they feel those providers are going to be around for a long time. So those that are starting up need to present a case that they work well across the entire IT periphery, and with the traditional providers, in order to gain that trust and mitigate risk.

Gardner: Michael Krigsman, when you look at IT failures, if a startup that exploits cloud computing to the hilt can move quickly from a $100 million-a-year company to a $700 million-a-year company because IT can keep up with it, in that it's largely cloud-based IT, does that scenario make sense, and is there a cloud benefit in avoiding IT failures?

Krigsman: Both Sandy and Dave hit on this. When a company starts up, it's trying to save money, so it becomes very adaptable very quickly and gains the experience of what it's like to have its data out there in the ether someplace. As that company grows, it's able to make better use of the flexibility and agility that the cloud offers.

From that standpoint, they do have an advantage over an incumbent, and, again, the cultural aspect here is very important, because there are differences in how an organization relates to on-premise software versus the cloud.

Gardner: Last question, and it's a follow-up to this last one. Dave Linthicum, you mentioned that there is a difference between the way the U.S. goes about IT and business compared to other regions, for example, Asia-Pacific. Now, if the same advantage of being a newcomer to a field works at the individual company level, is there a regional benefit? That is to say, if the United States, or any other region, were to embrace cloud computing and aggressively move into markets, would it have an advantage on the global scene? Is there a globalization effect of cloud computing?

Linthicum: I think it would have an effect on the global scene, because of the efficiencies of the architecture and the ability to move quickly into new technology spaces ahead of some of the European and Asia-Pacific companies. Again, generally speaking, there are instances of very innovative, very quick-moving companies in those areas.

Ultimately, it would be about the ability to leverage technology that's pervasive around the world. What you're going to find is the biggest uptake of any kind of new technological shift is going to be in the United States or the North American marketplaces. We're seeing that in the U.S. right now.

The uptake of cloud computing in Europe is there, but it's not necessarily as quick as it is in the United States. The uptake in the Asia-Pacific countries is typically very slow, as they usually follow new technology. So we could find that the advantage cloud computing brings to corporate U.S. infrastructure is going to be significant in the next four years, with the European enterprises and some of the Asia-Pacific enterprises playing catch-up toward the end.

Also, they're dealing with some rather draconian regulations around data. In other words, in many countries, they can't let their financial data or customer data reside outside of their country. So, if Amazon or the other cloud computing providers have no presence in those countries, then it's illegal for those enterprises to leverage cloud computing.

Either laws are going to have to change or they are going to have to figure out some way around those laws in order for them to take advantage of the emerging cloud computing marketplace.

Gardner: Well, great. We have covered a lot of ground. I want to thank our panel. There's certainly a lot more to keep track of over the next several years, to see where the economic and productivity advantages do or don't exist vis-à-vis cloud computing.

So let me thank our guests. Dave Linthicum, CTO of Bick Group. Thank you so much.

Linthicum: Thank you, Dana.

Gardner: Michael Krigsman. He is the CEO of Asuret and a blogger on ZDNet on IT failures. He also writes analyst reports for IDC. Thank you, Michael.

Krigsman: Thank you.

Gardner: And, Sandy Rogers, independent industry analyst, thanks so much for your perspective.

Rogers: Thanks, Dana, and the rest of the panelists.

Gardner: I would also like to thank our charter sponsor for today's BriefingsDirect Analyst Insights Edition, Active Endpoints, maker of the ActiveVOS business process management system.

We have been discussing cloud computing through the lens of economics and the impact on cost and productivity. This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 51 on the impact of economics and business model disruption from cloud computing. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

