
Friday, October 30, 2009

Business and Technical Cases Build for Data Center Consolidation and Modernization

Transcript of a sponsored BriefingsDirect podcast on how data center consolidation and modernization helps enterprises reduce cost, cut labor, slash energy use, and become more agile.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how data-center consolidation and modernization of IT systems helps enterprises reduce cost, cut labor, slash energy use, and become more agile.

We'll look at the business and technical cases for reducing the numbers of enterprise data centers. Infrastructure advancements, standardization, performance density, and network services efficiencies are all allowing for bigger and fewer data centers that can carry more of the total IT requirements load.

These strategically architected and located facilities offer the ability to seek out best long-term outcomes for both performance and cost -- a very attractive combination nowadays. But, to gain the big payoffs from fewer, bigger, better data centers, the essential list of user expectations for performance and IT requirements for reliability need to be maintained and even improved.

Network services and Internet performance management need to be brought to bear, along with the latest data-center advancements to produce the full desired effect of topnotch applications and data delivery to enterprises, consumers, partners, and employees.

Here to help us better understand how to get the best of all worlds -- that is high performance and lower total cost from data center consolidation -- we're joined by our panel. Please join me in welcoming James Staten, Principal Analyst at Forrester Research. Welcome, James.

James Staten: Thanks for having me.

Gardner: We're also joined by Andy Rubinson, Senior Product Marketing Manager at Akamai Technologies. Welcome, Andy.

Andy Rubinson: Thank you, Dana. I'm looking forward to it.

Gardner: And, Tom Winston, Vice President of Global Technical Operations at Phase Forward, a provider of integrated data management solutions for clinical trials and drug safety, based in Waltham, Mass. Welcome, Tom.

Tom Winston: Hi, Dana. Thanks very much.

Gardner: Let me start off with James. Let's look at the general rationale for data-center modernization and consolidation. What are the business, technical, and productivity rationales for doing this?

Data-center sprawl

Staten: There is a variety of them, and they typically come down to cost. Oftentimes, the biggest reason to do this is because you've got sprawl in the data center. You're running out of power, you're running out of the ability to cool any more equipment, and you are running out of the ability to add new servers, as your business demands them.

If there are new applications the business wants to roll out, and you can't bring them to market, that's a significant problem. This is something that organizations have been facing for quite some time.

As a result, if they can start consolidating, they can start moving some of these workloads onto fewer systems. This allows them to reduce the amount of equipment they have to manage and the number of software licenses they have to maintain and lower their support costs. In the data center overall, they can lower their energy costs, while reducing some of the cooling required and getting rid of some of those power drops.

Gardner: James, isn't this sort of the equivalent of Moore's Law, but instead of at silicon clock-speed level, it's at a higher infrastructure abstraction? Are we virtualizing our way into a new Moore's Law era?

Staten: Potentially. We've always had this gap between how much performance a new CPU or a new server could provide and how much performance an application could take advantage of. It's partly a factor of how we have designed applications. More importantly, it's a factor of the fact that we, as human beings, can only consume so much at so fast a rate.

Most applications actually end up consuming on average only 15-20 percent of the server. If that's the case, you've got an awful lot of headroom to put other applications on there.

We were isolating applications on their own physical systems, so that they would be protected from any faults or problems with other applications that might be on the same system and take them down. Virtualization is the primary isolating technology that allows us to do that.

Gardner: I suppose there are some other IT industry types of effects here. In the past, we would have had entirely different platforms and technologies to support different types of applications, networks, storage, or telecommunications. It seems as if more of what we consider to be technical services can be supported by a common infrastructure. Is that also at work here?

Unique opportunity

Staten: That's mostly happening as well. The exception to that rule is definitely applications that just can't possibly get enough compute power or enough contiguous compute power. That creates the opportunity for unique products in the market.

More and more applications are being broken down into modules, and, much like the web services and web applications that we see today, they're broken into tiers. Individual logic runs on its own engine, and all of that can be spread across more commoditized, consistent infrastructure. We are learning these lessons from the dot-coms of the world and now the cloud-computing providers of the world, and applying them to the enterprise.

Gardner: I've heard quite a few numbers across a very wide spectrum about the types of payoffs that you can get from consolidating and modernizing your infrastructure and your data centers. Are there any rules of thumb that are typical types of paybacks, either in some sort of a technical or economic metric?

Staten: There's a wide range, because the benefits depend on how bad off you are when you begin and how dramatically you consolidate. On average, across all the enterprises we have spoken to, you can realistically expect to see about a 20 percent cost reduction from doing this. But, as you said, if you've got 5,000 servers, and they're all running at 5 percent utilization, there are big gains to be had.
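
[Editor's note: a rough, purely illustrative sketch of the consolidation math implied by the 5,000-server example above. The target utilization is an assumption added for the example, not a figure from the discussion.]

import math

servers = 5000
current_utilization = 0.05    # 5 percent average utilization, per the example above
target_utilization = 0.60     # assumed comfortable ceiling for a consolidated host

# Aggregate demand expressed in "fully busy server" units, then the number of
# hosts needed to carry it at the target utilization, rounded up.
total_demand = servers * current_utilization
servers_needed = math.ceil(total_demand / target_utilization)
print(f"{servers_needed} consolidated hosts could carry the load of {servers} lightly used servers")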

Gardner: The economic payoff today, of course, is most important. I suppose there is a twofold effect as well. If you're facing a capacity issue and you're thinking about spending $40 or $50 million for an additional data center, and if you can reduce the need to do that or postpone it, you're saving on capital costs. At the same time, you could, perhaps through better utilization, reduce your operating costs as well.

Staten: Absolutely. One of the biggest benefits you get from virtualization is flexibility. It's so much easier to patch a workload and simply keep it running while you are doing that: move it to another system, apply the patch, make sure the patch worked, deploy a clone, and then turn off the old version.

That's much more powerful, and it gives a lot more flexibility to the IT shop to maintain higher service-level agreements (SLAs), to keep the business up and running, to roll out new things faster, and be able to roll them back more easily.

Gardner: Andy Rubinson, this certainly sounds like a no-brainer: Get better performance for less money and postpone large capital expenditures. What are some of the risks that could come into play while we are starting to look at this whole picture? I'm interested in what's holding people back.

Rubinson: I focus mainly on delivery over the Internet. There are definitely some challenges, if you're talking about using the Internet with your data center infrastructure -- things like performance latency, availability challenges from cable cuts, and things of that nature, as well as security threats on the Internet.

It's thinking about how you can do this: how you can deliver to a global user base from your data center, without necessarily having to build out data centers internationally, and how you can do that from a consolidated standpoint.

Gardner: So, organizations that serve more than just their own employees, or that are global even when they do, need to be thinking in terms of the widest possible wide area network (WAN). Right?

Rubinson: Absolutely.

Gardner: Let's go to our practitioner, Tom Winston. Tom, what sort of effects were you dealing with at Phase Forward, when you were looking at planning and strategy around data center location, capacity, and utilization?

Early adopter

Winston: Well, we were in a somewhat different position, in that we were actually an early adopter of virtualization technology, and certainly had seen the benefits of using that to help contain our data-center sprawl. But, we were also growing extremely rapidly.

When I joined the organization, it had two different data centers -- one on the East Coast and one on the West Coast. We were facing the challenge of potentially having to expand into a European data center, and even potentially a Pacific Rim data center.

By continuing to expand our virtualization efforts, as well as leveraging some of the technologies that Andy just mentioned, such as Internet acceleration via Akamai, we were able to forgo that data center expansion. In fact, we were able to consolidate to one East Coast data center, which is now our primary hosting center for all of our applications.

So, it had a very significant impact for us by being able to leverage both that WAN acceleration, as well as virtualization, within our own four walls of the data center. [Editor's note: WAN here and in subsequent uses refers to public wide area networks and not private.]

Gardner: Tom, just for the edification of our listeners, tell us a little bit about Phase Forward. Where are your users, and where do your applications need to go?

Winston: We run electronic data capture (EDC) software and pharmacovigilance software for the largest pharmaceutical and clinical device makers in the world. They are truly global organizations. So, we have users throughout the world, with more and more of our user population coming out of the Asia Pacific area.

We have a very large, diverse user base that is accessing our applications 24x7x365, and, as a result, we have performance needs all the time for all of our users.

In an age where, as James mentioned, people are expecting things to be moving extremely quickly and always available, it's very important for us to be able to provide that application all the time, and to perform at a very high level.

One of the things James mentioned from an IT perspective is being able to manage that virtual stack. Another thing that virtualization allows us to do is to provide that stack and to improve performance very quickly. We can add additional compute resources into that virtual environment very quickly to scale to the needs that our users may have.

Gardner: James Staten, back to you. Based on Tom's perspective of the combination of that virtualization and the elasticity that he gets from his data center, and the ability to locate it flexibly, thanks to network optimization and reliability capabilities, how important is it for companies now, when they think about data center consolidation, to be flexible in terms of where they can locate?

All over the place

Staten: It's important that they recognize that their users are no longer all in the same headquarters. Their users are all over the place. Whether they are an internal employee, a customer, or a business partner, they need to get access to those applications, and they have a performance expectation that's been set by the Internet. They expect whatever applications they are interacting with will have that sort of local feel.

That's what you have to be careful about in your planning of consolidation. You can consolidate branch offices. You can consolidate down to fewer data centers. In doing so, you gain a lot of operational efficiencies, but you can potentially sacrifice performance.

You have to take the lessons that have been learned by the people who set the performance bar, the providers of Internet-based services, and ask, "How can I optimize the WAN? How can I push out content? How can I leverage solutions and networks that have this kind of intelligence to allow me to deliver that same performance level?" That's really the key thing that you have to keep in mind. Consolidation is great, but it can't be at the sacrifice of the user experience.

Gardner: When you find the means to deliver that user experience, that frees you up to then place your data centers strategically based on things like skills or energy availability or tax breaks, and so forth. Isn't that yet another economic incentive here?

Staten: You want to have fewer data centers, but they have to be in the right location, and the right location has to be optimized for a variety of factors. It has to be optimized for where the appropriate skill sets are, just as you described. It has to be optimized for the geographic constraints that you may be under.

You may be doing business in a country in which all of the citizen information of the people who live in that country must reside in that country. If that's the case, you don't necessarily have to own a data center there, but you absolutely have to have a presence there.

Gardner: Andy, back to you. What are some of the pros and cons for this Internet delivery of these applications? I suppose you have to rearchitect, in order to take advantage of this as well.

Rubinson: There are two main areas on the benefits side: the cost efficiency of delivering over the Internet and the responsiveness. From the cost perspective, we're able to eliminate unnecessary hardware. We're able to take some of that load off of the servers and do the work in the cloud, which also helps reduce the number of servers needed.

A lot of cost efficiencies

There are a lot of cost efficiencies that we get, even as you look to Tom's statement about being able to actually eliminate a data center and avoid having to build out a new data center. Those are all huge areas, where it can help to use the Internet, rather than having to build out your own infrastructure.

Also, in terms of responsiveness, by using the Internet, you can deploy a lot more quickly. As Tom explained, it's being able to reach the users across the globe, while still consolidating those infrastructures and be able to do that effectively.

This is really important, as we have seen more and more users that are going outside of the corporate WANs. People are connecting to suppliers, to partners, to customers, and to all sorts of things now. So, the private WANs that many people are delivering their apps over are now really not effective in reaching those people.

Gardner: As James said earlier, we've got different workloads and different types of applications. Help me understand what Akamai can do. Do you just accelerate a web app, or is there a bit more in your quiver in terms of dealing with different types of loads of media, content, application types?

Rubinson: There are a variety of things that we are able to deliver over the Internet. It includes both web- and IP-based applications. Whether it's HTTP, HTTPS, or anything that's over TCP/IP, we're able to accelerate.

We also do streaming. One of the things to consider here is that we actually have a global network of servers that kind of makes up the cloud or is an overlay to the cloud. That is helping to not only deliver the content more quickly, but also uses some caching technology and other things that make it more efficient. It allows us to give that same type of performance, availability, and security that you would get from having a private WAN, but doing it over the much less expensive Internet.

Gardner: You're looking at specifics of an application in terms of what's going to be delivered at frequent levels versus more infrequent levels, and you can cache the data and gain the efficiency with that local data store. Is that how it works?

Rubinson: A lot of folks think about Akamai as being a content delivery network (CDN), and that's true. There is caching that we are doing. But, the other key area where we have benefit is through the delivery of dynamic data. By optimizing the cloud, we're able to speed the delivery of information from the origin as well. That's where it's benefiting folks like Tom, where he is able to not only cache information, but the information that is dynamic, that needs to get back from the data center, goes more quickly.
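
[Editor's note: the split Rubinson describes between cacheable and dynamic content is typically signaled by the origin through standard HTTP caching headers. The minimal Flask sketch below is only an illustration of that idea; the routes and values are hypothetical and do not describe Akamai's configuration.]

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/static-report")
def static_report():
    # Content that rarely changes: let edge servers and browsers cache it for a day.
    resp = make_response("quarterly summary")
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp

@app.route("/live-data")
def live_data():
    # Dynamic, per-request content: forbid caching, so acceleration has to come
    # from route and protocol optimization rather than from cached copies.
    resp = make_response("fresh result from the origin database")
    resp.headers["Cache-Control"] = "no-store"
    return resp

if __name__ == "__main__":
    app.run()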

Gardner: Let's check in with Tom. How has that worked out for you? What sort of applications do you use with wide area optimization, and what's been your experience?

Flagship application

Winston: Our primary application, our flagship application, is a product called InForm, which is the main EDC product that our customers use across the Internet. It's accelerated using Akamai technology, and almost 100 percent of our content is dynamic. It has worked extremely well.

Prior to our deployment of Akamai, we had a number of concerns from a performance standpoint. As James mentioned, as you begin to virtualize, you also have to be very conscious of the potential performance hits. Certainly, one of the areas that we were constrained with was performance around the globe.

We had users in China who, due to the amount of traffic that had to traverse the globe, were not happy with the performance of the application. Specifically, we brought in Akamai to start with a very targeted group of users and to be able to accelerate for them the application in that region.

It literally cut the problem right out. It solved it almost immediately. At that point, we then began to spread the rest of that application acceleration product across the rest of our domains, and to continue to use that throughout the product set.

It was extremely successful for us and helped solve performance issues that our end users were having. I think some of the comments that James made are very important. We do live in a world where everybody expects every application across the Internet to perform like Google. You want to search and you expect it to be back in seconds. If it's not, people tend to be unhappy with the performance of the application.

Ours is a much more complex application. A lot more is going on behind the scenes -- database calls, whatever it may be. Having an application perform to the level of a Google is something that our end users expect, even though obviously it's a much different application in what it's attempting to solve and what it's attempting to do. So, the benefits that we were able to get from the acceleration servers were very critical for us.

Rubinson: Just to add to that, we recently commissioned a study with Forrester, looking at what is that tolerance threshold [for a page to load]. In the past it had been that people had tolerance for about four seconds. As of this latest study, it's down to two seconds. That's for business to consumer (B2C) users. What we have seen is that the business-to-business (B2B) users are even more intolerant of waiting for things.

It really has gotten to a point where you need that immediate delivery in order to drive the usage of the tools that are out there.
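
[Editor's note: a simple way to sanity-check a page against the two-second figure cited above is to time a request from the user's region. The Python sketch below uses a placeholder URL and measures only the base document, not full page rendering.]

import time
import requests

URL = "https://example.com/app/login"    # placeholder; substitute a real page
THRESHOLD_SECONDS = 2.0                  # the B2C tolerance figure cited above

start = time.perf_counter()
response = requests.get(URL, timeout=10)
elapsed = time.perf_counter() - start

verdict = "within" if elapsed <= THRESHOLD_SECONDS else "over"
print(f"HTTP {response.status_code} in {elapsed:.2f}s ({verdict} the threshold)")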

Gardner: I suppose that's just human nature. Our expectations keep going up. They usually don't go down.

Rubinson: True.

Gardner: Back to you, Tom. Tell me a little bit more about this application. Is this a rich Internet application (RIA)? Is this strictly a web interface? Tell us a little bit more about what the technical challenge was in terms of making folks in China get the same experience as those on the East Coast, who were a mile away from your data center.

Everything is dynamic

Winston: The application is one that has a web front-end, but all the information is being sent back to an Oracle database on the back-end. Literally, every button click that you make is making some type of database query or some type of database call, as I mentioned, with almost zero static content. Everything is dynamic.

There is a heavy amount of data that has to go back and forth between the end user and the application. As a result, prior to acceleration, that was very challenging when you were trying to go halfway around the globe. It was almost immediate for us to see the benefits by being able to hop onto the Akamai Global Network and to cut out a number of the steps across the Internet that we had to traverse from one point to our data center.

Gardner: So, it was clearly an important business metric, getting your far-flung customers happy with their response times. How did that, however, translate back when you reverse-engineered from the experience to what your requirements would be within that data center? Was there sort of a meeting of the minds between what you now understand the network is capable of and what you then had to deliver through your actual servers and infrastructure?

I guess I'm looking for an efficiency metric or response in terms of what the consolidation benefit was.

Winston: As I mentioned, we had already consolidated from a virtualization standpoint within the four walls of the data center. So, we were continuing to expand in that footprint. But, what it allowed us to do was forego having to put a data center in the Pacific Rim or put a data center in Europe to put the application closer to the end user.

Gardner: Let's look to the future a little bit. James, when people think nowadays about cloud computing, that's a very nebulous discussion and topic set. It seems as if what we're talking about here is that more enterprises are going to have to themselves start behaving like what people think of as a cloud.

Staten: Yes, to a degree. There is obviously a positive aspect of cloud and one that can potentially be a negative.

Operating like a cloud is really operating in this more homogeneous, virtualized, abstracted world that we call server virtualization in most enterprises. You want to operate in this mode, so that you can be flexible and you can put applications where they need to be and so forth.

But, one of the things that cloud computing does not deliver is that if you run it in the cloud, you are not suddenly in all geographies. You are just in a shared data center somewhere in the United States or somewhere in your geography. If you want to be global, you still have to be global in the same sense that you were previously.

Cloud not a magic pill

Rubinson: Absolutely. Just putting yourself in the cloud doesn't mean that you're not going to have the same type of latency issues, delivering over the Internet. It's the same thing with availability in trying to reach folks who are far away from that hosted data center. So, the cloud isn't necessarily the answer. It's not a pill that you can take to fix that issue.

Gardner: Andy, I don't think you can mention names, but you are not only accelerating the experience for end users of enterprise applications like a Phase Forward. You're also providing similar services for at least several of the major cloud providers.

Rubinson: It really is anybody who is using the cloud for delivery. Whether it's a high-tech, a pharma company, or even a hosting provider in the cloud, they've all seen the value of ensuring that their end users are having a positive experience, especially folks like software-as-a-service (SaaS) providers.

We've had a lot of interest from SaaS companies that want to ensure that they are not only able to give a positive user experience, but even from a sales perspective, being able to demonstrate their software in other locations and other regions is very valuable.

Gardner: Now, James, when a commercial cloud provider provides an SLA to their customers, they need to meet it, but they also need to keep their costs as low as possible. More and more enterprises are trying to behave like service providers themselves, whether it's through ITIL adoption, IT shared services or service-oriented architecture (SOA). Over time, we're certainly seeing movement toward a provider-supplier, consumer-subscription relationship of some kind.

If we can use this acceleration and the ability to use the network for that requirement of performance to a certain degree, doesn't this then free up the folks who have to meet those SLAs in terms of what they need to provide? I'm getting back to this whole consolidation issue.

Staten: To some degree. Obviously, by using the best practices that we've adopted to have blazing fast websites and applying them to make sure that all of your applications, consumed by everyone, are still blazing fast means that you don't have to reinvent the wheel. Those practices work for your website. You just apply them to more areas.

If you're applying practices you already know, then you can free up your staff to do other things to modernize the infrastructure, such as deploying ITIL more widely than you have so far. You can make sure that you apply virtualization to a larger percentage of your infrastructure and then deal with the next big issue that we see in consolidation, which is virtual machine (VM) sprawl.

Can get out of control

This is where you are allowing your enterprise customers, whether they are enterprise architects, developers, or business units, to deploy new VMs much more quickly. Virtualization allows you to do that, but you can quickly get out of control with too many VMs to manage.

Dealing with that issue is what is front and center for a lot of enterprise IT professionals right now. If they haven't applied the best practices or performance to their application sets and to their consolidation practices, that's one more thing on their plate that they need to deal with.

Gardner: So, this also can relate to something that many of us are forecasting. Not much of it is happening yet, but it's this notion of a hybrid approach to cloud and sourcing, where you might use your data center up to a certain utilization, and under certain conditions, where there is a spike in demand, you could just offload that to a third-party cloud provider.

If you're assured from the WAN services that the experience is going to be the same, regardless of the sourcing, they are perhaps going to be more likely to pursue such a hybrid approach. Is that fair to say, James?

Staten: This is a really good point that you're bringing up. We wrote about this in a report we called "Hollow Out The MOOSE." MOOSE is Forrester's term for the Maintenance and Ongoing Operations, Systems, and Equipment, which is basically everything you are running in your data center that has been deployed up to this point.

The challenge most enterprises have is that MOOSE consumes 70 or 80 percent of their entire budget, leaving very little for new innovation and other things. They see things like cloud and they say, "This is great. I'll just move this stuff to the cloud, and suddenly it will save me money."

No. The real answer is that you need to choose the right type of solution for the right problem. We call this Strategic Rightsourcing, which says to take the things that others do better than you and have others do them, but know economically whether that's a positive tradeoff for you or not. It doesn't necessarily have to be cash positive, but it has to be an opportunity to be cost positive.

In the case of cloud computing, if I have something that I have to run myself, it's very unique to how I design it, and it's really best that I run it in my data center, you're not saving money by putting that in the cloud.

If it's an application that has a lot of elasticity, and you want it to have the ability to be on two virtual machines during the evening, and scale up to as many as 50 during the day, and then shrink back down to 2, that's an ideal use of cloud, because cloud is all about temporary capacity being turned on.
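
[Editor's note: the elasticity described here is usually expressed as a scaling policy. The function below is a generic, hypothetical illustration of that logic, not any particular provider's API; the thresholds and limits are assumptions.]

def desired_instances(load_pct: float, running: int,
                      floor: int = 2, ceiling: int = 50) -> int:
    """Hypothetical scale-up/scale-down rule for an elastic workload."""
    if load_pct > 75 and running < ceiling:
        return min(running * 2, ceiling)   # daytime burst: double capacity
    if load_pct < 25 and running > floor:
        return max(running // 2, floor)    # overnight lull: shed capacity
    return running

# Example: an evening lull with 16 instances running at 10 percent load.
print(desired_instances(10, 16))   # -> 8, on the way back down toward 2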

A lot of people think that it's about performance, and it's not. Sure, load balancing and the ability to spawn new VMs increases the performance of your application, but performance is experienced by the person at the end of the wire, and that's what has to be optimized. That's why those types of networks are still very valuable.

Gardner: Tom Winston, is this vision of this hybrid and the use of cloud for ameliorating spikes and therefore reducing your total cost appealing to you?

Has to be right

Winston: It is, but I couldn't agree more with what James just said. It has to be for the right situation. Certainly, we've started to look at some of our applications, potentially using them in a cloud environment, but right now our critical application, the one that I mentioned earlier, is something that we have to manage. It's a very complex environment. We manage it and we need to hold it very close to the vest.

People have the idea that, "Gee, if I put it in the cloud, my life just got a lot easier." I actually think the reverse might be true, because if you put it into the cloud, you lose some control that you have when it's inside your four walls.

Now, you lose the ability to be able to provide the level of service you want for your customers. Cloud needs to be for the right application and for the right situation, as James mentioned. I really couldn't agree more with that.

Gardner: So, the cloud is not the right hammer for all nails, but when the nail is right, that hybrid model can perhaps be quite an economic benefit. Andy, at Akamai, are you guys looking at that hybrid model, and is there something there that your services might foster?

Rubinson: This is really something that we are agnostic about. Whether it's in a data center owned by the customer or whether it's in a hosted facility, we are all about the means of delivery. It's delivering applications, websites, and so forth over the public Internet.

It's something we're able to do, if there are facilities that are being used for, say, disaster recovery, where it's the hybrid scenario that you are describing. For Akamai, it's really about how we're able to accelerate that, how we are able to optimize the routing and the other protocols on the Internet to get that content from wherever it's hosted to a global set of end users.

We don't care about where they are. They don't have to be on the corporate, private WANs. It's really about that global reach and giving the levels of performance to actually provide an SLA. Tell me who else out there provides an SLA for delivery over the Internet? Akamai does.

Gardner: Well, we'll have to leave it there. We've been discussing how data center consolidation and modernization can help enterprises cut costs, reduce labor, slash their energy use, and become more agile, but also keeping in mind the requirements about the performance across wide area networks.

We've been joined by James Staten, Principal Analyst at Forrester Research. Thank you, James.

Staten: Thank you.

Gardner: We were also joined by Andy Rubinson, Senior Product Marketing Manager at Akamai Technologies. Thank you, Andy.

Rubinson: Thank you very much.

Gardner: Also, I really appreciate your input, Tom Winston, Vice President of Global Technical Operations at Phase Forward.

Winston: Dana, thanks very much. Thanks for having me.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Akamai Technologies.

Transcript of a sponsored BriefingsDirect podcast on how data center consolidation and modernization helps enterprises reduce cost, cut labor, slash energy use, and become more agile. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Monday, June 01, 2009

Dana Gardner Interviews Forrester's Frank Gillett on Future of Mission-Critical Cloud Computing

Transcript of a BriefingsDirect podcast with Frank Gillett of Forrester Research on the state of cloud computing and prospects for real-world use in enterprises.

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions. Welcome to a special video podcast edition of BriefingsDirect.

Today, we're going to discuss cloud computing in the context of the real-world enterprise. We've certainly heard a lot about the vision for cloud computing and what it can do for the delivery of applications, services, infrastructure, and even development and deployment. What's less clear is how we take the vision and apply it to today's enterprise concerns and requirements.

We're going to look at the need for security, reliability, management, and even integration across multiple instances of cloud services. Here to help us understand the difference between the reality and the vision for cloud computing is Frank Gillett. He is a vice president and principal analyst for general cloud computing topics and issues at Forrester Research. Welcome to the show, Frank.

Frank Gillett: Thanks very much, Dana.

Gardner: You know, the whole notion of cloud computing isn't terribly new. I think it's more of a progression. We certainly had Internet and Web, Web applications, portals, and software-as-a-service (SaaS) applications. Now, taking it a step further, how do you define cloud computing? How can we put a box around this, given the large amount of hype that we've seen?

Gillett: Exactly, Dana. When I talk to folks in the industry, the old timers look at me and say, "Oh, time-sharing!" For some folks this idea, just like virtualization, harkens back to the dawn of the computer industry and things they've seen before. But, when we think about what cloud computing is, there are really two things that are brought to the forefront.

The first is, as you suggest, the rise of the Internet and the notion that instead of having everything on my own computer, or in sort of the database server, I go visit this website over a public network instead of the client-server private network within my company. So, you date it back basically to the dawn of Internet search with the beginning of AltaVista, Yahoo!, and then Google, where we had these applications called "search" that could only be hosted as a service provider.

We didn't think of them as cloud, per se, because cloud was just this funny sketch on a white board that people used to say, "Well, things go into the network, magic happens, and something cool comes from somewhere." Eventually, as you mentioned, those sorts of ideas began to morph into notions of actual SaaS, where I was running a business application as a service from a provider's location.

On a separate track, with the idea of server virtualization -- sharing one server as if it were several -- VMware kicked off this technology for the x86 architecture, in the 1998-1999 timeframe. Of course, the idea originally came from the mainframe, and that technology for machine sharing is sort of the opposite of these giant Web workloads that span machines that have tens or thousands of servers. These two ideas have fused and are now under this umbrella called cloud. I see a wide range of definitions.

The way I work with folks is not to say, "Here is my definition," but rather, "How are you thinking about it," and then categorize it. So broadly speaking, SaaS is a finished service that end users take in. Platform as a service (PaaS) is not for end users, but for developers.

With PaaS, think of a substitute for an application server, and if you think about this, then it's an environment at a service provider. Instead of running your own application server or your own copy of an operating system on site, the developer writes the software and deploys it using the tools from the service provider. He deploys at the service provider and never has to think about operating systems, servers, storage architectures or any of that junk.
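
[Editor's note: to make the PaaS point concrete, the developer's entire deliverable can be as small as the application code itself. The generic WSGI sketch below is illustrative only; on a platform service the provider is assumed to supply the operating system, web server, and scaling underneath it.]

def application(environ, start_response):
    # The only artifact the developer ships in this model.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a platform-as-a-service style deployment"]

if __name__ == "__main__":
    # Local testing only; on a PaaS the provider runs the server for you.
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()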

Now, some developers want more control at a lower level, right? They do want to get into the operating system. They want to understand the relationship among the different operating systems instances and some of the storage architecture.

At that layer, you're talking about infrastructure as a service (IaaS), where I'm dealing with virtual servers, virtualized storage, and virtual networks. I'm still sharing infrastructure, but at a lower level in the infrastructure. But, I'm still not nailed to this specific hardware the way you are in say a hosting or outsourcing setup.

So, in simple terms, that's how I think about it. SaaS for end users. PaaS for developers who don't want to get into the infrastructure. And, IaaS for developers who want to go that low, or for IT folks who have workloads that they want to bring from the back office and deploy in that environment. That latter one is still secondary, and the whole thing is still emerging. If you were looking at this in Internet time, we're in 1995 or 1996.

Where are we now?

Gardner: We're in the opening innings of cloud computing, but there have been a number of converging trends and even economic incentives that have kicked in to make this top-of-mind for a lot of people now.

What's going on from your research perspective at Forrester? You're looking at adaption patterns. You're looking at mind share. You're looking at economic and technical rationales within enterprises. If we're in the first or second inning in terms of vision, where are we in terms of implementation?

Gillett: Implementation, particularly when you look at it from the point of the view of the enterprise, is pretty early. When we surveyed folks to ask about their use of IaaS, we found two to three percent of enterprises, and about the same for small and medium-sized businesses (SMB), say that they are actually doing some form of pay-per-use hosting of virtual servers at a service provider.

You just can’t throw a cloud-computing phrase at someone and say, “Are you doing it?” Because most of them ask, “Well, what do you mean?” We have to ask specific questions.

We also asked folks about SaaS. When we look at adoption for that, a third of companies are doing some form of SaaS. In both cases, interestingly, the bigger the company, the more likely they are to be doing it, despite the hype that the small companies will go first. They tend not to grab the bleeding-edge technology, except for the startups. In cloud stuff, a lot of the noisy early adopters are startups that are very present on the Web, social media, blogs, and stuff like that.

A lot of the examples we hear about are startups like Animoto, Good Data, or Allurent, which are using this capability to build their own businesses, and they're talking a lot about it. It doesn't necessarily mean that your typical enterprise is doing it, and, if they are, it's probably the developers, and it's probably Web-oriented stuff. So it's a specific subset of what's happening in the enterprise.

Gardner: So, clearly there are some economic incentives for startups that get involved. They don't have to have that upfront capital expense, they can pay as they scale. So, they can create a business model that's commensurate with their costs.

Gillett: That's right.

Gardner: But, for the big payoff from cloud computing, the larger enterprises are at the scale where the cost savings, the efficiency, and the productivity will be the most impactful. What are they doing?

Gillett: When you look at the infrastructure guys who worry about servers and storage, the only place that they may be playing around with this is in testing, development, or workloads where they have to do a bunch of stuff in a hurry and then quit.

One apocryphal example is The New York Times needing to render a hundred years of newspaper articles as PDFs. And, this is an Amazon customer. So, there's the developer scratching his head and saying, "How am I going to find all these servers to render this stuff, and how long is it going to take?"

He starts mucking around with Amazon [Web Services] and figures out that he can move the data up to Amazon, which takes a little while. It was a few terabytes of TIFF files, scanner stuff. Then he's able to write software to take that data once it's at Amazon and convert it to PDFs. He runs the whole thing in 18 hours on a few tens or hundreds of instances. Then, he's done, and the whole thing cost him something less than a conventional expense report, a couple of hundred bucks ...

Gardner: Time-share.

Just do it

Gillett: ... Right. Instead of having to go out and buy the gear, borrow it, or run it on nights or weekends or whatever, he's just able to go out and do it. That gives you an example of how people are doing it in the infrastructure layer. It's really workloads like test and development, special computation, and things like that, where people are experimenting with it. But, you have to look at your developers, because often it's not the infrastructure guys who are doing this. It's the developers.
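
[Editor's note: the pattern in the newspaper-archive example, fanning a large batch job out across many short-lived workers, can be sketched generically as below. The conversion step and file names are placeholders, not the actual implementation; in the cloud scenario each worker would be a rented instance rather than a local process.]

from multiprocessing import Pool

def convert_to_pdf(tiff_path: str) -> str:
    """Placeholder for the real TIFF-to-PDF conversion step."""
    pdf_path = tiff_path.replace(".tiff", ".pdf")
    # ... image-to-PDF conversion would happen here ...
    return pdf_path

if __name__ == "__main__":
    tiff_files = [f"scan_{i:06d}.tiff" for i in range(1000)]   # hypothetical batch
    with Pool(processes=8) as pool:                            # fan out across workers
        results = pool.map(convert_to_pdf, tiff_files)
    print(f"Converted {len(results)} files")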

It's the people writing code that say, “It takes too long to get infrastructure guys to set up a server, configure the network, apportion the storage, and all that stuff. I'll just go do it over here at the service provider."

My colleague James was talking to an infrastructure guy at a major entertainment company. He says, "Hey, I saw you're using cloud computing." "No, we're not." "Well, take a look at this URL." "I didn't know about this." Click.

Gardner: That raises a very interesting question. Who in the enterprise will be specifying and therefore become responsible for cloud-computing implementations?

Gillett: That question illustrates the challenge of this foggy thing called "cloud." There is no one thing called "cloud," and therefore, there is no one owner in the enterprise. What we find is that, if you are talking about SaaS, business owners are the ones who are often specifying this.

So, a sales person might be looking at, say, Salesforce.com and say, "Hey, I want that." Eventually, they involve the IT folks, but sometimes it's further down the cycle. Sometimes, it's after the fact when they come to IT and say, "We've got this CRM-as-a-service thing, and we need to integrate it with the billing and financials."

What's happening is this whole change in dialog within IT and between IT and its internal customers, because people at different levels are responsible for different aspects.

There's a different angle on this for security and compliance folks. They're trying to figure out how to make sure -- when anyone can run out with a credit card and buy IT infrastructure -- that they're following all the regs they've got to follow. Whether it's the generic stuff for being a publicly traded company, or basic accounting purposes, or, more importantly, for HIPAA regulations or special financial services regulations, it's quite a challenge, and it's fundamentally a governance challenge.

'One throat to choke'

Gardner: If we have multiple cloud services, multiple levels of cloud in terms of application development infrastructure, we are probably also going to see some implementations internally of the cloud provisioning and the setup for virtualization and lower-cost computing. So, with multiple instances of cloud, some internal and some external, who is the "one throat to choke" if something goes wrong?

Gillett: Bottom line, there isn't one, because there is no one thing. If you look at SaaS, in a handful of instances you might see stuff like that within a large company, but those are mostly from service providers. It's when you get to IaaS, the notion that I can use virtual servers as a shared service, that I can self-provision from a portal, and that are somehow tracked by resource consumption.

That's what we expect to see coming out of IT infrastructure, but that will take longer. If you look at virtualization adoption, only a little more than half of the companies in our surveys report that they are even doing x86 virtualization. So far, of the ones that are virtualized, it's only about a quarter of their operating system instances that are virtualized. That's from a survey late last year.

By the summer of 2010, they're projecting that they will have about half of their operating system instances virtualized, which, from our experience, seems quite aggressive as an average target across these thousand enterprises we surveyed in North America and Europe.

Gardner: Well, Frank, I think enterprises are going to be challenged by this notion they are the place for that "one throat to choke," given that there are so many different spinning plates in this equation across network services, cloud providers, other parts of the business process. What can they go to then, as a third party, to gather the insight to extend their service-level agreements (SLAs) or enforce them?

Gillett: You're right to call on this and ask for the double click down, because they are on their own within the company. They've got to manage the service providers, but there is this thing called the network that's between them and the service providers.

It's not going to be as simple as just going to your network provider, the Internet service provider, and saying, "Make sure my network stays up." This is about understanding and thinking about the performance of the network end to end, the public network, much harder to control than understanding what goes on within the company.

This is where you have to couple looking at your Internet or network service provider with the set of offerings out there for content and application acceleration. What you're really looking for is comprehensive help in understanding how the Internet works, how to deal with limitations of geography and the physics, the speed of light, making sure that you are distributing the applications correctly over the network -- the ones that you control and architect -- and understanding how to work with the network to interact with various cloud-service providers you're using across the network.

Looking at the service providers and the technology offerings for content acceleration, application acceleration, and other forms of network-resident services can give you a more comprehensive look at the network. Even though you can't get the uber "one throat to choke," at the network layer you can go for a more comprehensive view of the application and the performance of the network, which is now becoming a critical part of your business process. You depend on these service providers of various stripes scattered across the Internet.

If you take the notion of service-oriented architecture (SOA), and explode it across the public network, now you need sort of the equivalent of the internal network operation center, but you need help from an outside provider, and there's a spectrum of them obviously to do that. When you're asking about governance, the governance of the network is really important to get right and to get help with. There is no way for an individual company to try and manage all that themselves, because they are not in the public network themselves.

Gardner: In the past, I might have been able to enforce governance, security, service levels, and reliability internally, but this is not going to happen on the Internet. I need to have, in a sense, access to that network?

Access to the network


Gillett: Yes, you need access to the network. People think, "Oh, that means I have to go out and worry about the service providers or the network providers, compliance and all that stuff." No, no, no. It's true, but the really important thing is understanding the comprehensive view of the performance of the network, and getting help from a service provider that has that kind of view. There are a number of parties that have various stories about that.

As your dependence on these different services increases, taking a look at those offerings and understanding how to optimize it is critical. I'll give one tiny example here.

I spoke to a luxury goods and perfume maker that had a public website with both transactions and content on it. I said, "How many servers does it take to run your transactions?" And they said it only takes four, and that includes the two redundant ones. "Oh, really? That's all?" They said, "Well, not really. Three quarters of my workload is with my application and content acceleration provider. They take care of three quarters of my headache. They make it all work." So, that's a great example.

Gardner: Moving work out onto the network itself.

Gillett: Exactly. In that case, they were not yet dependent on a variety of service providers, but they were really interested in making sure their website worked publicly and externally. They found this provider who was able to do that for them quite effectively, reduced the workload on premises, and gave them the capacity that they needed, stuff at the edge and all that.

Gardner: So, the desire is there. The rationale from a technology and productivity standpoint, that is to say, more bang for your investment and your infrastructure, is there. What seems to be missing is this notion of trust, governance, and reliability. If I'm an end-user and something goes wrong, do I call IT, do I call the cloud provider, or do I call the network services provider?

Gillett: Dana, I'll point out one thing, and I'm going to back up to hit one thing that I haven't properly addressed. There's no such thing as "the" cloud provider, or one cloud provider. Part of the complication for IT is, not only do they have multiple parties within the company, which has always been a struggle, as they get into this, they're going to find themselves dealing with multiple providers on the outside.

So, maybe you've still got the services in your own IT as an infrastructure. You've got your internal capability. Then, you've got an application, SaaS, and perhaps PaaS, and a business process that somehow stitches all four of those things together. Each one has its own internal complexities, and all of it's running over the public network, unless you have got some private thing between these public service providers, which seems unlikely. So, it's really challenging.

Now, to double back, you talked about the economic incentive. One of the misleading ideas here is that cloud is always cheaper. Cloud is not always cheaper. There are different value propositions, reasons you would go to a “cloud service provider.”

One of them is the notion of pay-per-use. I want to pay for what I use. Well, if you want to buy it on a spot market, which is a term that's familiar to people who think about buying oil and other commodities, you pay a premium to buy stuff on demand. You pay more per hour than if you make an upfront commitment.

SaaS pricing models

If you look at the payment or pricing models for SaaS, you tend to pay per person per month. It's crudely matching business value, because you have a user using it during the month. It doesn't truly track to true resource consumption, but you have a semi-predictable bill, based on how many people you've allocated and for how many months.

When you pay per use on virtual servers, it looks cheap -- say, Amazon's bottom-dollar rate of 10 cents an hour. They have other ones, but that's the sort of rock-bottom entry one. When you add up the cost of running that workload 24/7/365, it can come out more expensive than doing it yourself, particularly if your accounting system doesn't aggregate all the costs together to give you a true cost.
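
[Editor's note: the 24/7/365 point is easy to see with simple arithmetic. The on-premises figure below is an assumption added purely for illustration; the comparison depends entirely on that number and on the workload's duty cycle.]

hourly_rate = 0.10                 # the 10-cents-an-hour figure cited above
hours_per_year = 24 * 365

on_demand_annual = hourly_rate * hours_per_year
print(f"Always-on, pay-per-use: ${on_demand_annual:.2f} per year per instance")   # $876.00

owned_annual = 600.00              # assumed all-in annual cost of a comparable owned server
cheaper = "pay-per-use" if on_demand_annual < owned_annual else "running it yourself"
print(f"At these assumptions, {cheaper} is cheaper for an always-on workload")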

To benchmark against an external service provider, I have to be better at taking care of my own accounting. It's quite hard to compare, because some people who argue they are cheaper will be wrong. They're not thinking as a shareholder, only as the person holding that particular budget within the enterprise.

In other cases, it is truly cheaper than a service provider. I had another service provider come to me and say that they are able to do storage for one-tenth the cost of Amazon's storage cost, because they have optimized for their workload. They understand it and they know how to tune the cost for it.

All these different notions of cloud offer a huge set of tradeoffs between how fast you can provision and what the unit cost is, but people should think of it as a spectrum of things. You're not always getting something that's cheaper. Sometimes it's more effective for the business, but not necessarily cheaper on a unit-cost basis.

Gardner: So, as we look at the economics, we also have to factor in the notion that people can do a lot more or do it differently with a cloud model environment than they could have done internally. This is how we can, in a sense, integrate across different sets of services from different providers that can specialize, but put them in the context of a business process.

So, we have modules, if you will, of cloud services. This is, I think, the pay-off that people are also looking for. How do you describe not just the economic benefits, but these abilities to do things that could not have been done before in a single data center, where applications are monolithically supported?

Gillett: We have been talking for a long time about ideas like this. Early on, we talked about shared and automated infrastructure at Forrester, early in 2002. We followed that up with a report on what we called "Organic Business" that really talked about this notion of different companies being able to work together in flexible and fluid ways, and really being able to do new ways of business innovation.

If you look at it, a lot of these concepts are embodied in the whole set of ideas around SOA, that everything is manifested as services, and it's all loosely coupled, and they can work together. Well, that works great, as long as you've got good governance over those different services, and you've got the right sort of security on them, the authentication and permissions, and you found the right balance of designing for reuse, versus efficiently getting things done.

SOA is actually a dirty word for some of the more Web- or Internet-oriented folks, but for the enterprise folks, some of the cloud ideas are just a broadening and extension of SOA and the notion of, "Now, I can pull some of my services from outside."

Look at a company like Avalara, a tax-calculation service. Why should I do my own tax calculations, or buy an on-premises suite of software and constantly have to update it? Why don't I just go to a service provider, send them the information about the transaction, and have them return to me the correct tax amount and the entities to send it to? Then, I can pay for the tax calculation per order, and I'm all done. I don't have to worry about any of that stuff.
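
To make the pattern concrete, here is a minimal sketch of calling an external tax-calculation service per order. The endpoint, request fields, and response shape are hypothetical illustrations of the pattern, not Avalara's actual API.

```python
# Illustrative only: a hypothetical REST tax-calculation service, not
# Avalara's actual API. The endpoint, fields, and response shape are
# assumptions, shown to illustrate the "pay per calculation, keep no
# tax logic in-house" pattern described above.
import json
import urllib.request

def calculate_tax(order):
    req = urllib.request.Request(
        "https://tax-service.example.com/v1/calculate",   # hypothetical endpoint
        data=json.dumps(order).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:  # fail fast if the provider hiccups
        return json.load(resp)

order = {"ship_to": "MA", "line_items": [{"sku": "WIDGET-1", "amount": 49.99}]}
result = calculate_tax(order)   # e.g. {"tax_due": 3.12, "remit_to": ["MA DOR"]}
print(result["tax_due"], result["remit_to"])
```

The timeout is the part worth noticing: once the calculation lives outside your walls, every caller has to decide what to do when the provider, or the network in between, hiccups.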

What if?

But, as you're hinting at, I have to think about how I make that business process work when it runs over the Internet. What do I do if that service provider hiccups, or a backhoe cuts a fiber-optic cable between me and the service provider?

Now, I'm becoming more dependent on the public Internet infrastructure, once I'm tying into these service providers and tying into multiple parties. Like a lot of things in technology, unless you're going to completely turn over everything to an outside service provider, which sounds like traditional outsourcing to me, the "one throat to choke" is your own.

You have to figure this stuff out, and you can get help to simplify it, so you have only a handful of people whose heads you need to bang together. If you think about it, it's not that different from when I ran all the infrastructure on my own premises, because I had gear and applications from different parties, and, at the end of the day, it was up to me to referee those folks and get them to work together.

Gardner: So, your perspective is that SOA sets the stage, and that cloud computing is a larger abstraction and a use case, if you will, for SOA. That makes a lot of sense. We have some precedents, though, for how this might work. We have SaaS, which has become quite popular in recent years around certain applications -- sales force automation, resource management in the enterprise, human capital management (HCM), and so forth.

We have a track record of organizations saying, "Listen, I don't want to be in the commodity applications business. I want to specialize in what's going to differentiate me as an enterprise. I don't want to have everyone recreating the same application instance. We want to get reuse. We want to get efficiency of scale," and so forth. What's been the ability of managing and governing SaaS up to this point?

Gillett: That's still getting worked out. One of the problems with SaaS, particularly as you get into multiple packages, is how I get those different entities to work together. And one of the answers, of course, is: don't work with multiple parties. Go to one party and work with their expanding pool of SaaS, but most companies won't have the luxury of choosing that.

Then you're into integration, and that's one of the struggles we see folks having with SaaS today -- working out how to do that integration. Do they have the direct connect between the providers? Do they route it through their own internal capabilities? How do they monitor that and make sure that it's working effectively?

So, we have some lessons from the experience of SaaS, because that aspect of what some call cloud is further along the track. Some people insist that SaaS isn't part of cloud. I'm not going to have that fight.

Even though they are the furthest along, they have a lot to figure out. So I look at this, and I say, "Okay, we've got a decade here to sort this out." It's a completely different problem, by the way, to take the existing applications I run inside my company and think about migrating them to a service provider.

I want to pause here and push back on something you said, which is, "Cloud is about commoditizing IT, and only things that aren't differentiating leave my company." Not true.

Cloud and mission-critical apps

Cloud services can handle mission-critical workloads, things that differentiate you. In fact, some of them might only be possible if you do them at a service provider, along with the commodity stuff. Part of the point here is to get folks to really think about what their needs are, what the offerings in the marketplace are, and what's best for the company and its shareholders in taking advantage of that mix of internal capabilities and third parties.

Let me give you an example. Let's say that your business has critical calculations to run overnight, say, for ad placement on websites. Let's say it soaks up huge amounts of computing capacity when you run the workload at night, but that capacity sits idle during the day.

Gardner: A batch process?

Gillett: Yeah, and a batch process that doesn't saturate the server. If I provision for peak, say Christmas, I have this huge amount of capacity sitting around idle the rest of the year.

Gardner: A very costly system?

Gillett: Guess what? That's one of the workloads that runs at Amazon's EC2 IaaS, or compute as a service.

Gardner: Mission critical or not?

Gillett: Correct. In that case, it's more cost effective and more flexible for them to run it with the service provider, even though it's mission critical. It's a more effective use of resources.
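
A back-of-the-envelope sketch of why that can be true, using assumed numbers for an overnight batch job; the instance counts, hours, and prices are illustrative only.

```python
# Back-of-the-envelope sketch with assumed numbers: a nightly batch job
# that needs 40 servers for 6 hours, compared with owning those 40
# servers year-round just so they are there for the overnight peak.
hourly_rate = 0.10            # assumed pay-per-use rate per server-hour
servers, hours_per_night = 40, 6

on_demand = servers * hours_per_night * 365 * hourly_rate
print(f"Rent nightly, release by day: ${on_demand:,.0f} per year")      # $8,760

# Owning for peak: assumed $1,500 per server per year, fully loaded
# (amortized hardware, power, cooling, admin), idle most of the day.
owned_for_peak = servers * 1500
print(f"Own and idle for peak:        ${owned_for_peak:,.0f} per year") # $60,000
```

The workload is mission critical either way; what changes is that the pay-per-use model only charges for the hours the capacity is actually doing work.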

Now, let's flip it around the other way. Take a provider that does streaming of media for public websites. You go to the website of a major newspaper or a television network and you want to see their video. This provider helps with that on the back end. What they found, when they looked at their internal infrastructure, was that they felt they were cheaper than Amazon at running their core infrastructure.

Amazon looked like nice extra capacity on top, so they wouldn't have to over-provision as much. Amazon also looked like a great way to add capacity in new regions before they had the critical mass to do it cost effectively themselves in that region. Those are two examples of the non-intuitive ways to think about this.

Gardner: Right, mission critical, and being able to handle success, which should come -- even unexpectedly. What we need to get to that benefit seems to come back to governance time and again. We had governance issues internally, especially when we moved to SOA. We have to manage integration issues, reliability, compliance, and the different applications of regulations within industries.

That gets to a higher level of complexity when we move to cloud. What's going to be governance as a service? How are we going to get between these cloud providers and the enterprise to manage this complexity?

Gillett: It's so early that it's hard to see what the solution is going to be. The closest thing I have seen that begins to hint at one, and I don't even think of it as much of a step down that road, is a brokerage model.

There's a provider in Europe called Zimory, another startup, that's trying to serve as a brokerage for raw compute as a service. If you want to know where the cheapest capacity is, to follow the sun, or to move your workload around to chase that cheap capacity, that's the sort of thing Zimory is trying to do.

That's not quite governance, but there is an element of it in there. Fundamentally, what you were hinting at in your questions, Dana, is that IT was already struggling with notions of internally shared infrastructure, things like blade servers and server virtualization, which required the different stovepipes in IT ops to talk to each other and work together.

There's also this big chasm between developers and ops in terms of “throw it over the wall deployment,” and now we are just going to explode that out across the open Internet to the service providers that people are tying into.

Cloud hype bubble

It feels like we are in a cloud hype bubble right now. All the hype and noise is still on the upswing, but we are going to see it subside and calm down late this year or next year. This is not to say that the ideas aren't good. It's just that it will take a significant amount of time to sort things out and figure out the right choices: for the offerings to mature, for the early adopters to get in, then the mainstream folks, and then the laggards. It's only as we get deeper into it that we'll even begin to understand the governance ideas.

So your questions are spot on, but early, because right now people are still dealing with SaaS and just beginning to figure out how to take advantage of compute as a service. I'm speaking from the point of view of the enterprise: I have a few developers dabbling in PaaS, and people are still figuring out what to do.

All of this, as I suggested, is going to force IT to rethink what its value proposition is and how it delivers it. It's going to be interesting to see whether IT can do it itself, or whether the service providers step up and deliver richer, more complete offerings. That will take some time, and we'll see newfangled forms of outsourcing, if you will, that are more "cloud oriented." I don't know what that would look like either, because that's not easy.

Gardner: As we discussed in the beginning, the movement to cloud is a progression. We started with the Internet and the Web, moving into applications and portals. We had to peel the onion then, and we keep hitting more layers. We came up with optimization and wide-area-network acceleration technologies, distributing different aspects of the Web application to the edge: the data, the graphics, and so forth. Those same sorts of technologies and solutions pertain to the cloud.

Gillett: Absolutely. If you think about it, what this fundamentally means is that developers will have to rethink how they write applications architecturally and think about where they're trying to deliver the business experience. That means thinking about the network end to end, and thinking globally, if you're a company that has to worry about global reach. Then that means, ultimately, thinking architecturally about where things belong in the network.

Static content doesn't change much. You want that as close as possible to the user, to reduce latency and the uncertainty of long-haul transit. Furthermore, from the point of view of all the combined entities providing the Internet backbone, you have to ask whether you want to keep chewing up long-haul pipe moving the same video or content across continents, when, for a low cost, you could cache it locally.
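
One simple, widely used way to act on that idea is to mark long-lived static assets as publicly cacheable, so edge caches can hold them near users instead of hauling them across the backbone on every request. A minimal sketch, with an assumed one-week TTL and an illustrative helper function:

```python
# A minimal sketch of the "push static content to the edge" idea:
# long-lived static assets are marked publicly cacheable so a CDN or
# edge cache can serve them near the user, while dynamic pages go
# back to the origin. The one-week TTL is an assumption; pick what
# fits your content.
STATIC_SUFFIXES = (".css", ".js", ".png", ".jpg", ".mp4")

def cache_headers(path):
    if path.endswith(STATIC_SUFFIXES):
        # Edge caches and browsers may keep this copy for up to a week.
        return {"Cache-Control": "public, max-age=604800"}
    # Dynamic responses must be revalidated with the origin.
    return {"Cache-Control": "no-cache"}

print(cache_headers("/video/intro.mp4"))   # served from the edge
print(cache_headers("/checkout"))          # revalidated at the origin
```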

Gardner: That becomes more the case when you have multiple enterprises accessing the same set of core applications.

Gillett: Absolutely. Remember, this isn't just enterprises. It might be enterprises trying to reach millions of consumers.

That's one example of the static content. Think about dynamic content. Think about the fact that if I'm selling something like concert tickets or airline seats, there are a limited number of them. I can sell the first batch of them at the edge without having to go back to the core database, as long as I'm not selling a specific seat.

It's a little tricky here, but if you're selling a thousand widgets, you can cache at the edge the application logic that says, "Sell the first 800 from the edge, then flip a switch and we'll back-haul to sell the last 200, so we don't oversell."

You start thinking about how to distribute application logic to create fast response times, good business service levels, and things like that, despite the fact that you might think, "We're just selling one thing and it all has to come back to a central database." Not necessarily. So, you really start to think about that. You think about how to prioritize things across the network: this is more important than that. All of it is basically fighting the laws of physics, working around the speed of light and the cost of computation.
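
A toy sketch of that edge-allocation idea: each edge node sells from a local quota without a round trip, and only the final units are back-hauled to the authoritative central store so nothing is oversold. The class names and the 800/200 split are illustrative assumptions, not any particular vendor's mechanism.

```python
# Toy sketch: sell the first batch from a local edge quota (fast path),
# back-haul only the final units to the central, authoritative store
# (slow path) so the inventory is never oversold.
class CentralReserve:
    def __init__(self, remaining):
        self.remaining = remaining          # e.g. the last 200 units

    def sell(self, qty):
        if self.remaining >= qty:
            self.remaining -= qty
            return "sold-centrally"
        return "sold-out"

class EdgeInventory:
    def __init__(self, local_quota, central_reserve):
        self.local_quota = local_quota      # e.g. 800 units sold at the edge
        self.central_reserve = central_reserve

    def sell(self, qty=1):
        if self.local_quota >= qty:
            self.local_quota -= qty         # fast path: no long-haul trip
            return "sold-at-edge"
        return self.central_reserve.sell(qty)   # slow path: authoritative check

edge = EdgeInventory(800, CentralReserve(200))
print(edge.sell())   # sold-at-edge
```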

Most cost-effective way

It's also trying to figure out the most cost-effective way to do it. Part of what we're seeing is the development and progression of an industry that's trying to figure out how to most cost-effectively deliver something. Over time we'll see changes in the financial structures of the various service providers, Internet, software or whatever, as they try to find the right way to most cost-efficiently deliver these capabilities.

Gardner: So, we need to rethink governance for the abstraction that cloud represents. We also need to rethink the architecture of the application, from its inception, for the use cases that are more likely in a cloud environment.

Gillett: That's right. Let's not scare anybody by saying, "I can't do anything until I do all that stuff." We're trying to describe the journey that they're going on.

If you could sit down today and write a Web-facing application for an enterprise, take a look at the conceptual architecture of what you're doing and think about which capabilities belong where. Is there some stuff that would be better off at a service provider, not just for cost reasons, but for performance reasons? And what kind of service provider?

I look at application and content acceleration service offerings, I look at hosting of Web apps, and then I look at compute as a service, and to me it looks like they're blurring a little bit. Amazon is out there offering a content-delivery network. The hosters are partnering with folks who do app acceleration or content delivery. I'm looking at the app delivery and content acceleration guys and asking, "When are they going to help me with the hosting? They've already got three-quarters of my workload."

It's a very interesting time to create new applications. I want to reinforce the point you were hinting at, which is, it's one thing to take an existing workload and figure out what the best thing to do with it is across this increasing spectrum of choices.

It's another thing to start at the beginning, as you architect the application, and ask, "What kinds of abstractions, modular architectures, or loose couplings could I put in place to improve the performance of this application in the long run, or to increase my options down the road for taking advantage of service providers?"

If you have the luxury of a blank sheet of paper, there are some interesting possibilities to think about, but we're really early. So, don't get too hung up on sharpening your pencil and trying to figure it out. Just make the best set of choices you can make right now and keep running.

Gardner: We're just about out of time, but for those organizations that have this spectrum of options and like what they see out on the horizon, how do they get started? How do they put themselves in a position to take advantage of it sooner rather than later, and perhaps gain a competitive advantage as a result?

Gillett: A lot depends on where you sit within the organization. For folks who are responsible for end-user applications or who purchase them, it's making sure that SaaS options are in the mix, and not just the end-user applications, but things like an Avalara tax service. They're a modular plug-in to your overall application architecture. I dubbed this at one point "components as a service," because it's not really end-user facing, but it feeds that.

For developers, there are two sets of choices. Look at PaaS. Are there reasons to think about Microsoft Azure or Google App Engine as a place to execute your code? There are others -- Salesforce.com and LongJump -- but sometimes that involves development tools over the Web, rather than your local tools. It's quite a diverse spectrum of things.

The other developer option is when you don't want to deploy to, in effect, an app server as a service. You want the infrastructure. Then, look at IaaS. You're looking at Rackspace's offerings under the Mosso business unit. I can't remember their new name, but Slicehost was somebody they acquired. You have ServePath's GoGrid offering. You have Amazon EC2, where you go and say, "Hey, I'll set up a bunch of virtual servers. Here is the VLAN to connect them." It's like working with raw infrastructure, except virtual.

Then, yet another role within IT is the infrastructure operations person. If you need some more compute capacity for the test and dev guys, for that odd batch job or temporary thing, or maybe you have some steady-state workloads -- ones that run 24/7/365 -- that you want at a service provider, then you also go look at the compute-as-a-service offerings.

Interestingly, there is a different set of offerings if you're thinking about running conventional back-office apps versus the Web stuff. For the back-office apps, you're looking more at Rackspace and Mosso, and you're looking at SAVVIS. You want servers where, when you pile a lot of virtual servers onto one box, there's a nice mission-critical, enterprise-class machine underneath trying to catch them, versus the Web app servers that funky developers are playing with. They're running tens of thousands of instances, they want the cheapest boxes they can find, and so they're two different value propositions.

Gardner: So, the common theme here, it sounds like, is to experiment, try a bunch of different things, but keep in mind that if one of those experiments works, you're going to want to transition that into a mission-critical, enterprise-caliber service.

Gillett: Yeah, and I want to come back to something you were saying, which is that it is about governance. One of the things that we're telling our infrastructure and operations guys is to get in early, ahead of the developers.

Don't let them run willy-nilly and pick a bunch of services. Work with the enterprise architect, the IT architect, to identify some services that fit your security and compliance requirements. Then, tell the developers, "Okay. Here are the approved ones that you can go play with, and here's how we're going to integrate them."

So, proactively, get out in front of these people experimenting with their credit cards, even if it's uncomfortable for you. Get in early on the governance. Don't let that one run away from you.
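
One lightweight way to put that advice into practice is an approved-services catalog that developers consult before wiring a new provider into a project. The entries, attributes, and helper below are illustrative assumptions, a minimal sketch rather than any particular governance tool.

```python
# A minimal sketch of "approved services" governance: a catalog, owned
# by the architects, that developers check before adopting a new cloud
# service. The entries and attributes are illustrative assumptions.
APPROVED_SERVICES = {
    "tax-calculation": {"data_allowed": "transactional", "region": "US"},
    "object-storage":  {"data_allowed": "public-content", "region": "any"},
}

def request_service(name, data_classification):
    entry = APPROVED_SERVICES.get(name)
    if entry is None:
        return f"'{name}' is not on the approved list; talk to the architecture team"
    if entry["data_allowed"] != data_classification:
        return f"'{name}' is approved, but not for {data_classification} data"
    return f"'{name}' approved; use the standard integration pattern"

print(request_service("object-storage", "public-content"))
print(request_service("random-new-saas", "customer-pii"))
```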

Gardner: Well, great. We're taking a look at cloud computing through the lens of vision versus reality. Clearly, there's an awful lot happening, and I think that will continue for some time.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been enjoying a special video podcast production of BriefingsDirect. We've been joined by Frank Gillett, vice president and principal analyst at Forrester Research. Thank you, Frank.

Gillett: Thank you, Dana.

Gardner: Thanks again for listening, and come back next time.

Watch the video. Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Akamai Technologies.

Transcript of a BriefingsDirect video podcast with Frank Gillett of Forrester Research on the state of cloud computing and prospects for the future. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.