
Wednesday, September 24, 2014

University of New Mexico Delivers Efficient IT Services by Centralizing on VMware Secure Cloud Automation

Transcript of a BriefingsDirect podcast on how a major university is moving toward achieving the best cloud-computing benefits while empowering users.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest discussion focuses on one of the toughest balancing acts in seeking the best of cloud computing benefits. This balance comes from obtaining the proper degree of centralization or "common good" for infrastructure efficiency, while preserving a sufficient culture of decentralization for agility, innovation, and departmental-level control.

The challenge of empowering centralization is nowhere more evident than in a large university setting, where support and consensus must be preserved among such constituencies as faculty, staff, students, and researchers -- across an expansive educational community.

But the typical IT model does not support localized agility when it takes weeks to spin up a server, if online services lack automation, or if manual processes hold back efficient ongoing IT operations. Too much IT infrastructure redundancy also means weak security, high costs, lack of agility, and slow upgrades.

We're joined today by an IT executive from the University of New Mexico (UNM) to learn more about moving to a streamlined and automated private cloud model to gain a common good benefit, while maintaining a vibrant and reassured culture of innovation.

We're also joined by a VMware executive to learn more about the latest ways to manage cloud architectures and processes to attain the best of cloud efficiencies, while empowering improved services delivery and process agility.

Please join me now in welcoming our guests, Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque. Welcome, Brian.

Brian Pietrewicz: Thanks, Dana. Glad to be here.

Gardner: And we're here with Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. Welcome, Kurt.

Kurt Milne: Thank you, Dana. Hello, Brian.

Preparing for change

Gardner: Brian, new technology often creeps ahead of where entrenched IT processes are, sometimes to the point where decentralization becomes a detriment. There are too many moving parts, not enough coordination, and redundancy. Yet when we try to put in new models -- like private cloud -- that means change.

I'd like to hear a bit more about your IT organization at the university and how you've been able to do change, but at the same time not alienate your users, who are, I imagine, used to having things their way. So tell us a bit about how you started to juggle this balancing act.

Pietrewicz: The University of New Mexico, as you mentioned, is a highly decentralized organization. In most cases, the departments are responsible for their own IT, and that often means they don't have the resources to effectively run IT -- in particular, things like data centers, servers, storage, disaster recovery (DR), and backups.

What we're doing to improve the process is providing infrastructure as a service (IaaS) to those groups so that they don’t have to worry about the heavy lifting of the infrastructure pieces that I mentioned before. They can stay focused on their core mission, whether that’s physics, or psychology, or who knows what.

So we offer IaaS. We're running a VMware stack, and we're also running vCloud Automation Center (vCAC). We've deployed the Self-Service Portal. We give departments, faculty members, or departmental IT folks the ability to go into the portal and deploy their own machines at will.

Then, they are administrators of that machine. They also have additional management features through the vCAC console so that they can effectively do whatever they need to do with the server, but not have to worry about any of the underlying infrastructure.

Gardner: That sounds like the best of both worlds. In a sense, you're a service provider in the organization, getting the benefits of centralization and efficiency, but allowing them to still have a lot of hands-on control, which I assume that they want.

Pietrewicz: Correct. The other part is the agility, the ability for them to be able to react quickly, to consume infrastructure on demand as they need it, and have the benefit of all the things that virtualization brings with redundant infrastructure, lower cost of ownership, and those sorts of things.

Gardner: Kurt, is this something that's common in the market, or is it specific to education and university users, who like that balance between a service-provider approach for efficiency and agility while still keeping a lot of the hands-on, do-it-your-way benefits in place?

New expectations

Milne: No, this is something we see with a lot of our customers increasingly in many different industries. It’s an interesting time to be in the IT space, because there's this new set of expectations being imposed on IT by the business to be strategic, to quickly adopt new technology, and boost innovation.

At the same time, IT still has the full set of responsibilities they've always had -- to stay secure, to avoid legacy debt, to drive operational excellence so they maintain uptime, security, and quality of service for transactional systems and business-critical systems.

It’s really an interesting paradox. How do you do these two things that are seemingly mutually exclusive -- go fast, but at the same time, stay in control?

Brian's approach is what I call "push-button IT," where you give folks a button to push and they get what they need when they want it. But if IT controls the button, and controls what happens when the user pushes the button, IT is able to maintain control. It's really the best of both worlds.

Gardner: Brian, tell us a little bit about how long you have been there and what it was like before you began this journey?

Pietrewicz: I've been at UNM for about two-and-a-half years, and I can tell you the number one complaint. We suffer from a lot of the same problems that other large IT shops have, with funding and things like that. But the primary issue when I walked in the door was customers being upset because we had sold them services that were never clearly defined.

We had sold virtual machines (VMs) with database backups, and all kinds of interesting things, with no service-level agreements (SLAs), no processes, nothing wrapped around it. The delivery of these services was completely inconsistent.

So I started down a new path. The first thing we did was make the services more consistent. Take deploying a virtual machine for a customer as an example. When I got here, a ticket came into the service desk, went to a single technician, and whichever technician got that ticket figured out their own way of getting that machine deployed.

As the next step in that process, instead of just having it done a different way by whoever received the ticket, we identified all the steps involved. In doing so, we found over 100 manual steps that went through six completely separate groups inside our organization.

Those included operating system, storage, virtualization, security, and networking for firewall changes. Across all those groups, each deploying its individual piece of the puzzle, it was being done differently every time. Our deployments were taking as long as three weeks. You can imagine how painful that is when it takes 20 minutes to spin up a VM -- but three weeks to deliver it to a customer.

We identified all the steps and defined the process very, very clearly; exactly what it takes to deploy a VM. The interesting thing that came out of that was that it gave us the content necessary to be able to start developing a true service description and an SLA.

Ticketing system

It also made it so that it was consistent. We did a few things after we did the process development. We generated workflows within our ticketing system, so that all that happened was a ticket was put in and then it auto-generated all the necessary tickets to deploy the VM, so it happened in a very consistent way.

That dropped the deployment time from three weeks down to about three days, because it still had to go through certain approval processes and things like that with security.

For the next step we said, "Okay, how can we do this better?" We looked at all of those steps that we put in place and found that they were all repetitive, manual steps that could be easily automated. So enters VMware vCAC.

We took all the steps, after we had them clearly defined, and we automated all the steps that we could. We couldn't automate all of them -- for example, sending information to our billing system to bill the customer back. From vCAC, we shoot an email over to our ticketing system, which generates a ticket. Then the billing information is still entered manually, and we're working on an upgrade to that.
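
As a rough illustration of that split between automated steps and the remaining manual billing hand-off, here is a minimal sketch. All function and step names are invented for illustration; none of this corresponds to the actual vCAC API:

```python
# Hypothetical sketch of a provisioning workflow where most steps run
# automatically, but billing still raises a ticket for manual entry.

def provision_vm(request):
    """Run the automated steps, then open a ticket for the manual billing step."""
    automated_steps = [
        "deploy VM from template",
        "configure network and firewall",
        "register DNS entry",
        "apply security baseline",
    ]
    completed = list(automated_steps)  # each step completes with no human action

    # Billing is not yet automated: generate a ticket so a person can
    # enter the charge-back record by hand.
    ticket = {
        "type": "billing",
        "department": request["department"],
        "vm_name": request["vm_name"],
        "action": "enter charge-back record manually",
    }
    return completed, ticket

steps, ticket = provision_vm({"department": "Physics", "vm_name": "lobo-web01"})
print(len(steps), ticket["type"])  # 4 billing
```

The point of the shape is that every step an engineer used to perform by hand becomes a list entry the workflow engine executes, and anything that cannot yet be automated is still captured as a tracked ticket rather than tribal knowledge.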

When I first got here, the services were not defined and the processes were not defined. Since then, we have clearly defined the processes, narrowed them down into the very specific tasks that had to be done, and then automated. We're now working through automating every step in that process.

Now, we have a thing we call Lobo Cloud -- our mascot is the Lobo. Customers can now go online and deploy a machine within 20 minutes. Basically, everything has transformed from an extremely inconsistent service that took as long as three weeks to deploy into the equivalent of going into McDonald's and ordering a Big Mac. It's extremely consistent, and down from three weeks to 20 minutes.

Gardner: I assume Brian that you've adopted some industry-standard methods, perhaps a framework, that gave you some guidance on this. How does your service delivery policy adhere to an industry standard like ITIL?

Pietrewicz: That’s what we use. We follow ITIL and we're at varying levels of maturity with it. ITIL is very challenging to implement, but it's extremely helpful, because it gives you a framework to work within, to start narrowing down these process, defining services, setting SLAs. It gives you a good overarching framework to work within.

The absolute hardest part of all of this is implementing the ITIL framework, identifying your processes, identifying what your service is, and identifying your SLA. Walking through all of that is exponentially harder than putting the technology in place.

Gardner: It seems to me that not only are you going to get faster servers, response times, and automation, but there are some other significant benefits to this approach. I'm thinking about security, disaster recovery (DR), the ability to budget better through an OPEX model, and then ultimately reduce total costs.

Is it too soon, or have you seen some of the other benefits I typically hear about when people move to a more automated cloud approach? How is that working for you?

Less expensive

Pietrewicz: We don’t really have good statistics on it. For the folks that had machines sitting underneath their desks and in closets before, we don’t have a lot of the statistics to know exactly the cost and the time they were spending on that.

Anybody who works with virtualization quickly learns that once you hit a certain size, it becomes significantly less expensive. You become far more agile and you get a huge number of benefits. Some of them are things that you mentioned -- deployment time, DR, the ability to automate, and taking advantage of economies of scale.

Instead of deploying one $10,000 server per application, you're now loading up 70 machines on a $15,000 server. All of those things come into play. But we really don’t have good statistics, because we didn’t really have any good processes before we started.
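
The consolidation arithmetic behind that point is easy to check with the round numbers from the conversation:

```python
# Back-of-the-envelope consolidation math using the figures quoted above.
apps = 70
dedicated_cost = apps * 10_000   # one $10,000 server per application
consolidated_cost = 15_000       # all 70 VMs on a single $15,000 host
savings = dedicated_cost - consolidated_cost
print(dedicated_cost, consolidated_cost, savings)  # 700000 15000 685000
```

Even allowing for licensing, shared storage, and staff time that the round numbers leave out, the hardware spend alone drops by well over an order of magnitude.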

What’s interesting now is that our next step in the process is to automate our billing process. Once we do that, we're going to have everything from our virtual infrastructure deployed into our billing system and either a charge-back or a show-back methodology.

So we'll have complete detailed costs of all of our infrastructure associated with every department and every application that is using our service. We'll be able to really show the total cost of ownership (TCO).
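
A show-back report of that kind can be as simple as multiplying each department's usage by a rate. This sketch uses a made-up flat rate and made-up records purely for illustration; UNM's actual billing integration would pull real usage from the virtual infrastructure:

```python
# Illustrative show-back calculation; the rate and the records are invented.
RATE_PER_VM_MONTH = 100  # hypothetical flat monthly rate, in dollars

usage = [
    {"department": "Physics", "vm_count": 12},
    {"department": "Psychology", "vm_count": 5},
]

def showback(usage):
    """Return the monthly cost attributed to each department."""
    return {u["department"]: u["vm_count"] * RATE_PER_VM_MONTH for u in usage}

print(showback(usage))  # {'Physics': 1200, 'Psychology': 500}
```

The same calculation serves either model: in show-back the figures are published for transparency; in charge-back they become actual internal invoices.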

Milne: Brian, it sounds like you're on a path that a lot of our customers are on. What we see typically is that there is a change in consumption behavior when your customers know that they can get IaaS on demand. They stop hoarding resources. The same kind of tools and processes that can automate the delivery of those services can also automate tearing down those services when they're done.

Virtualization by itself increases capacity utilization quite a bit, but then going to this kind of services delivery, service consumption for infrastructure, actually further increases utilization and drives down over-provisioning.

Adding cost transparency to that service will further change your consumers' behavior. The ability to get it when you need it and pay only for what you use drives down the amount of resources you have to keep in your data center.

Pietrewicz: Absolutely. It’s amazing what happens when you have to pay for something and it’s very visible.

Milne: I always feel that if IT is free that really changes the supply and demand equation, if you study economics. People don’t know what to do with free. They typically take too much.

Economic behavior

Pietrewicz: Right. This really starts driving basic economic and social behavior into the equation in IT. It’s a difficult thing for organizations to get their head around, and they're sort of getting it here at the university. It’s not completely in place. The way that we look at it is as a, "We'll build it, and they'll come" kind of thing.

Most folks have figured out that they can really save that money. Instead of going out and buying a $10,000 server, they can buy a $1,000 VM from us that does the exact same thing. If they don’t want it any more, they can turn it off and not pay any more. All of those things come into play.

Gardner: This is interesting. This fit-for-purpose concept of using what you need when you need it and then not using it anymore relates to that discussion we had about centralized and decentralized. Now that you've been enjoying some benefits through the Lobo Cloud and this "common-good" approach to infrastructure, have you gotten feedback from the users? Are they happy with this or do they wish they had those servers under their desks again?

Pietrewicz: You have a little of both. We definitely have people that like to hug their servers. We had the sort of old school approach with a lot of things. You would assume that universities would be on the cutting edge. In a lot of cases, we are, but we also have people that just really like hugging their servers. They like to be able to touch their servers and that sort of thing. So we have both.

We have people who are very, very appreciative that we put this service out there for them, because they know it’s the only way for them to do it effectively. But we still have some of the old-school folks who prefer the physical and are taking a while to adapt. That’s part of the whole, "Build it and they will come" thing. It’s the kind of thing where they have to adjust their mentality to use it.

Gardner: Consensus is, of course, important.

Pietrewicz: Another piece of that is that the university was experimenting with responsibility centered management (RCM), a budgeting model that works toward the bottom line of a particular organization. That means people have to be transparent and make clear decisions about where they're spending their money. That's also starting to drive adoption.

Gardner: Just for our audience to understand the scale here, you have done an awful lot in two-and-a-half years or less. How many individuals are we talking about? What’s the size of your community, your user base? How many VMs do you have? What are some of the defining characteristics of your organization?

Pietrewicz: UNM is approximately 45,000 faculty, staff, and students. We have about 100 departments or affiliates, and today we're running about 660 VMs for our organization.

Gardner: And what percentage of the organization is virtualized?

Pietrewicz: For central IT, it’s between 98 percent and 99 percent. For the rest of the organization, it’s not clear. We don’t have an audit that shows every physical box that anybody might be using out there.

I'd say that the adoption of virtualization is very low in places where people haven't used IaaS, because the initial entry cost for virtualization can be higher. Many of the very small organizations just aren't big enough to warrant the infrastructure necessary to run virtualization the right way.

Ancillary benefits

Gardner: We talked about some of the ancillary benefits of your approach, but there are some direct benefits when you go to a cloud model, which gives you more options. You can have your private cloud. You can look to public cloud and other hosting models, and then you can start to see a path or a vision towards a hybrid cloud environment, where you might actually move workloads around based on the right infrastructure approach for the right job at the right time. Any thoughts about where your clouds goals are vis-à-vis the hybrid potential?

Pietrewicz: We have a few things in play that we're actively working. Today, we have people using various cloud providers. The interesting part is that they're just paying for it with a credit card out of their department, and the university doesn't have any clear way of knowing exactly what's out there. We don't really have any good security mechanisms in place for determining whether there's any sensitive data being stored out there inadvertently.

We're working with the cloud providers we're already spending money with to develop consolidated accounts. One, we can save money through economies of scale. Two, we can get some visibility into what folks are actually using the cloud for. And three, IT can act as an adviser among the various cloud providers out there -- this particular provider is good at functionality, or this particular provider is good at security.

The first step is to corral the use of public cloud for UNM and create an escorting process to the cloud. The second step is going to be a hybrid cloud that we'll set up from our private cloud here on site. We envision setting up hybrid cloud services with those public cloud providers to be able to move the workloads back and forth when necessary.

The other major benefit that we very much look forward to is being able to do DR in the cloud -- taking advantage of the ability to replicate data and then spin up systems as you need them, rather than having a couple of million dollars in equipment sitting, waiting, and hoping you never use it, equipment you have to refresh every four years just to keep a viable DR plan.

Gardner: Is vCloud Automation Center something that will be useful in moving to this hybrid model? The one button to push, as it were, on the private cloud, will that become a one button to push in the hybrid model as well?

Pietrewicz: It will. I mentioned those various cloud service providers. Most of them are compatible with the vCloud Connector, so you can simply connect up that hybrid cloud service and, with a little bit of work, massage your portal.

We can have a menu option of public cloud providers through our portal that they could just select and say that they want to get a vCHS, Amazon, or Terremark, and then potentially move workloads back and forth. So vCAC and vCloud Connector are all at the center of it.

The other interesting piece that we're working on and going to try to figure out as part of this is that we really want to start looking into NSX and/or VIX to be able to provide very clear security boundaries, basically multi-tenancy, and then potentially be able to move those multi-tenant environments back and forth in the cloud or extend them from public to private cloud as well.

Software-defined networking

Gardner: Brian, you mentioned multi-tenancy earlier, and of course, there is a lot going on with software-defined data center, networking, and storage. What is it about it that’s interesting to you and why is this a priority for you, software-defined networking (SDN), for example?

Pietrewicz: SDN is the next step in being able to truly automate your IaaS and your virtual environment. If you want to dynamically deploy systems and have them in a sandbox that is multi-tenant by customer, you really need an SDN-type solution -- or at the very least, it's extremely helpful.

One of the things that we are looking at next is implementing something like NSX, so that we can deploy the equivalent of a virtual wire -- a multi-tenant environment -- to individual customers, so that they can only see their own stuff and can't see their neighbors', and vice versa.

The key is the ability to orchestrate that on demand and not have to deal with the legacy VLAN and firewall kind of issues that you have with the legacy environment.
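
The isolation property Brian describes -- each tenant sees only its own machines -- can be modeled in a few lines. This is a toy model of the concept, not the NSX API:

```python
# Toy model of per-tenant "virtual wire" isolation; all names are illustrative.

class VirtualWire:
    """A tenant-scoped network segment, deployed on demand."""
    def __init__(self, tenant):
        self.tenant = tenant
        self.vms = []

    def attach(self, vm_name):
        self.vms.append(vm_name)

def visible_vms(wires, tenant):
    """A tenant can see only VMs attached to its own virtual wire."""
    return [vm for w in wires if w.tenant == tenant for vm in w.vms]

physics = VirtualWire("physics")
physics.attach("phys-db01")
psychology = VirtualWire("psychology")
psychology.attach("psy-web01")

print(visible_vms([physics, psychology], "physics"))  # ['phys-db01']
```

Because the wire is a software object rather than a physical VLAN, orchestration can create one per tenant at deploy time, which is exactly what makes the on-demand model workable without manual firewall and VLAN changes.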

Gardner: It’s interesting how a lot of these major trends -- service delivery, cloud, private cloud, DR, and SDN -- are interrelated. It’s a complex bundle, but the payoffs, when you do this inclusively, are pretty impressive.

Pietrewicz: Whenever you get to the point of abstracting things to the software level, you provide the ability to automate. When you have the ability to automate, you get tremendous flexibility. That sometimes can be an issue in and of itself, just making decisions on how you want to do something. But along with that flexibility, you get the ability to automate just about anything that you want or need to be able to do.

The second piece to that is that we're really excited about figuring out, when we build the hybrid cloud model, how we might be able to extend those tenants into the cloud, either as active running workloads or in a DR model, so that the multi-tenancy is retained.

Milne: From VMware’s perspective, that kind of network virtualization capability is critical for our hybrid cloud service. It’s that capability that NSX provides that creates that seamless experience from your data center out to the hybrid cloud.

As you said, Brian, that kind of network configuration, allocation, and reallocation of IP addresses, when you are moving things from one data center to another, is not something you want to do on a manual basis. So NSX is a key component of our hybrid cloud vision. It’s something that lot of the other cloud providers just don’t have.

Pietrewicz: I see it as the next frontier in IT. I think that when SDN starts taking off, it’s going to be a game changer in ways that we are not even recognizing yet, and that’s one example. Moving a workload from one network to another network is extremely powerful.

Cloud broker

Gardner: Kurt, this sounds as if not only is Brian transitioning into being a service provider to his constituencies, but now he's also becoming a cloud broker. Is this typical of what you're seeing in the market as well?

Milne: It is. Some of our customers take a step to get their arms around shadow IT -- users going around IT -- by offering that provisioning option through the IT portal. So it's like, "You're using Amazon? That's fine. We can help you do that." They put a button in the service catalog that deploys the kind of workloads users have been running in a public cloud like Amazon, but it comes through IT. Then IT is aware of it.

There's a saying I like. It's called the "cloud boomerang." A lot of times, IT customers will put things out in the public cloud, but like a boomerang, they seem to always come back. The customer wants to integrate with an existing system, or they realize they have to support it up in the cloud. A lot of times, those rogue deployments make their way back to the IT organization. So putting an Amazon service in the vCAC portal, and not changing anything else, is a nice first step in corralling that.

Pietrewicz: That is exactly what we're seeing. At a university, because there isn’t really governance, it’s more like build a good service and hope they come. We take the approach of trying to enable it. We want to make it very transparent and say that they can use Amazon or vCHS, but there's a better way to do it. If you do it through the portal, you may be able to move those workloads back and forth.

We are actually seeing exactly what you mentioned, Kurt. Folks are reaching the limitations of using some of the cloud providers, because they need to get access to data back here at UNM and are actually doing the boomerang approach. They started out there and now they're migrating their machines into our IaaS so that they can get access to the data that they need.

Gardner: Kurt, we heard some very interesting things at VMworld recently around the cloud-management platform. Why don’t you tell us a little bit about that and how that fits into what we've been discussing in terms of this ongoing maturity and evolution that a large organization like the University of New Mexico is well into?

Milne: We recently announced the vRealize Suite, which is a cloud management platform. So we're moving our product management strategy to a common platform.

Over the years, VMware has either built or acquired quite a few different management products. We've combined those products into a number of suites, like our automation, operations, and our business management suites. Now, we're taking that next step and combining a lot of those capabilities into a single platform.

There are a couple of guiding ideas there. What we see in organizations like Brian's is that the lines between automated provisioning of workloads and the ongoing operations, maintenance, and support of those workloads are really starting to blur.

So you have automation tasks that might happen when you're doing a support call. Maybe you want to provision some more resources, and there are operations tasks like checking system health that you might want to do as a step in an automation routine.

Shared services

Our product strategy change is to move toward a shared-services model, similar to a service-oriented architecture. The different services underlying our management products would be executable through a tool like vCAC, through a command-line interface, or through a REST API. There's a mix-and-match opportunity to execute those services in different ways.
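
That mix-and-match idea -- one underlying service reachable from a portal, a CLI, or a REST API -- can be sketched in miniature. All names and entry points here are hypothetical, not actual vRealize interfaces:

```python
# Sketch of the shared-services model: three front ends, one engine.
# Nothing here corresponds to a real vRealize API.

def provision_service(payload):
    """The single underlying service every entry point calls."""
    return {"status": "provisioned", "vm": payload["name"]}

def from_portal(form):        # a button push in the self-service portal
    return provision_service({"name": form["vm_name"]})

def from_cli(args):           # a command-line invocation
    return provision_service({"name": args[0]})

def from_rest(json_body):     # a REST API request body
    return provision_service({"name": json_body["name"]})

# Because all three front ends reach the same engine, policy and
# orchestration logic are implemented once rather than per tool.
print(from_cli(["lobo-web01"])["status"])  # provisioned
```

The design payoff is the one Kurt goes on to describe: with a single engine behind every interface, one tool can't undo the work another tool did.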

To build that platform with the shared-services model on top, we need to start re-architecting some of our products on the back-end, so that we have a common orchestration engine, common DR and backup, and a common policy engine. You don't want one tool to undo the work that another tool did yesterday. You can't have conflicting robots going out and doing automated tasks.

The general idea is to try to further consolidate these different management functions into a single platform. The overall goal is to try to help organizations maintain control, but then also increase flexibility and speed for their business users.

Gardner: Brian, is that something that you think is going to be on your radar? Is management so distributed now that you're looking for a more consolidated approach that’s inclusive?

Pietrewicz: That would be wonderful. We're doing things many different ways. If you take the example of orchestration, we are using Orchestrator, PowerShell, Perl, and starting to experiment with Puppet.

It would be really good if you could have one standardized way that you approach orchestration, as an example, and how that might tie into all the other pieces for back-end management, rather than handling it several different ways. As Kurt was mentioning, one part starts to step on another part. Having that be consolidated and consistent would be a huge value.

Milne: The other part of the strategy is also to make that work across environments. So the same tools and services would be available if you are provisioning up to Amazon or to your private cloud or hybrid cloud service, and even different hypervisors.

We're fully aware of the heterogeneous nature of the modern data center. So we're shifting to try to create that kind of powerful common management stack, with a unified management experience across all of those environments. It's kind of a nirvana. When we talk to people, they say that's exactly what they want. So our vision is to march toward delivering on that.

Gardner: Kurt, I am trying to recall from VMworld whether this was offered on-premises, as a service from a cloud, or some combination?

Service offerings

Milne: That's the other interesting part of this. We're starting to go down the path of offering a number of our management products as a service. For example, at VMworld, we announced the availability of a beta for our vCAC product as software as a service (SaaS), so you can, without installing any software, get a service portal, get that workflow and policy engine, and deploy infrastructure services across different environments.

We'll be rolling out betas for our other products in subsequent quarters over the next year or so. Then we could potentially have the SaaS services interact and combine with the services available through the products installed on-premises. Our goal is to get these out there and then understand what the best use cases are, but that kind of mix and match is part of the vision.

Gardner: It’s interesting. We might have a reverse boomerang when it comes to the management of all of this. Does that sound appealing Brian? Is that something you would look to as a cloud service, comprehensive management?

Pietrewicz: Absolutely, but it’s largely dependent on return on investment (ROI). It’s that balance of, when you get to a certain level in an IT shop, it’s sometimes cheaper to do things in-house than it is to outsource it, and sometimes not. You have to do the analysis on the ROI on what makes more sense to bring it in or to use a SaaS.

As an example, we completely outsourced all of our email, because it’s a lot of work to run in-house but very simple and easy to consume as a SaaS solution. It’s definitely something that we would look into.
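The in-house versus SaaS trade-off Pietrewicz describes boils down to a simple annual-cost comparison. A minimal sketch in Python -- the function names and every figure below are hypothetical placeholders for illustration, not numbers from this discussion:

```python
# Illustrative break-even comparison between running a service in-house
# and consuming it as SaaS. All figures are hypothetical placeholders.

def annual_cost_in_house(admin_fte, fte_cost, hw_licenses):
    """Yearly cost of operating the service internally:
    staff time plus hardware and license spend."""
    return admin_fte * fte_cost + hw_licenses

def annual_cost_saas(users, per_user_fee):
    """Yearly subscription cost of the SaaS alternative."""
    return users * per_user_fee

in_house = annual_cost_in_house(admin_fte=2, fte_cost=90_000, hw_licenses=40_000)
saas = annual_cost_saas(users=30_000, per_user_fee=6)

print(f"In-house: ${in_house:,}/yr, SaaS: ${saas:,}/yr")
print("Outsource" if saas < in_house else "Keep in-house")
```

A real analysis would also fold in migration cost, staff redeployment, and service-level differences, but the break-even structure is the same.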

Milne: In a mid-sized organization that might have 300 different applications that the IT organization supports, maybe 50 of those are IT tools. Already we've seen progress with companies like ServiceNow that have a SaaS-based service desk. It makes sense to start to turn more of those management products into a SaaS delivery model.

Gardner: I'm afraid we're getting near our time limit, but I wanted to see, Brian, if you had some thoughts about others who are starting to move in your direction, perhaps building their own Lobo Cloud, their own portal rationalizing these services, and being able to measure them better. With 20/20 hindsight, what could you recommend for them as they go about this? Any lessons learned you could share?

Process orientation

Pietrewicz: The biggest lesson learned, without a doubt, is the focus on process orientation, the ITIL model. The technology is really not that hard. It’s determining what your service is, what you're trying to deliver, and then how you build that into a consistently delivered service, complete with SLAs and service descriptions that meet the customer needs. That's the most difficult part.

The technical folks can definitely sling the technology. That doesn’t seem to be that big of a deal. The partners and providers do a very good job of putting together products that make it happen, but the hard part is defining the processes and defining the services and making sure that they are meeting the customer needs.

Gardner: Kurt, any thoughts in reaction to what Brian said in terms of getting started on the right path around cloud rationalization of your IT organization?

Milne: One of the things that I've seen is a lot of organizations go through this process that Brian has described, trying to clearly define their services and figure out which parts of those services they're going to automate.

A lot of organizations start that service definition effort from an inside-out perspective, get a bunch of IT guys together, and try to define what you do on a daily basis in a service. That's hard.

The easier approach is just to go talk to your customers and users and ask, "If I were going to give you a button you could click to get what you need, what would you put behind the button?" Then, you define your services more from an outside-in perspective. That seems to be where companies end up anyway, and you shortcut a lot of teeth gnashing and internal meetings when you do it that way.

Gardner: It always comes back to the requirements list, doesn’t it?

Milne: That’s right.

Gardner: I'm afraid we'll have to leave it there. You've been listening to a sponsored BriefingsDirect discussion on one of the toughest balancing acts, seeking the best of cloud computing benefits, while also empowering your users.

And we've seen at a large university how this balance comes from attaining a proper degree of centralization or common good for infrastructure services through a portal, while preserving that sufficient culture of decentralization and agility. We've also heard how there are going to be new ways to better manage cloud architectures across a variety of different models, and then perhaps ultimately as a service in and of itself.

So I'd like to thank our guests, Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque. Thank you so much, Brian.

Pietrewicz: Thanks, Dana. Thanks, Kurt.

Gardner: We've also been here with Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. Thank you so much, Kurt.

Milne: Thank you.

Gardner: And thank you also to our audience for joining us for this BriefingsDirect discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a major university is moving toward achieving the best cloud-computing benefits while empowering users. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Thursday, August 08, 2013

T-Mobile Swaps Manual Cloud Provisioning for Services Portal, Gains Lifecycle Approach to Cloud Across Multiple Platforms and Data Centers

Transcript of a BriefingsDirect podcast on how a major telecom company has improved its IT performance to deliver better experiences and payoffs for its businesses and end users alike.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how IT leaders are improving their services' performance to deliver better experiences and payoffs for businesses and end users alike, and this time we're coming to you directly from the HP Discover 2013 Conference in Las Vegas. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Our next innovation case study interview highlights how wireless services provider T-Mobile US, Inc. improved how it delivers cloud- and data-access services to its enterprise customers. We'll see how T-Mobile walked back use of manual cloud provisioning services and delivered a centralized service portal to manage and deploy infrastructure better and also improve its service offerings across multiple platforms and across multiple data centers.

To learn more about how T-Mobile enabled a lifecycle approach to delivering advanced cloud services, please join me in welcoming our guest, Daniel Spurling, Director of IT Infrastructure at T-Mobile US, Inc. Welcome.

Daniel Spurling: Thanks, Dana.

Gardner: Tell me about the trends that are driving your business now. We know T-Mobile as a mobile provider, but is this speed, is this competition? What are some of the big top-of-mind issues for you and your market?

Spurling: To answer that question, I'm going to frame up a little history and go into where T-Mobile has come from in the last few years and what has driven some of that business shift in our space.

As many know, in 2011 AT&T attempted to acquire T-Mobile. When that dissolved, there was a heavy recognition that we needed to drive greater innovation on our business side. We had received a generous donation, we’ll call it, of $4 billion and a lot of spectrum. We drove a lot of innovation on our network side, on the RF side, but the IT side also had to evolve.

As we, as an IT group, looked at where we needed to start evolving within the infrastructure space, we recognized that manual processes are a very rudimentary way of delivering servers, compute, storage, etc. This was not going to meet the agility needs that our business was exhibiting. So we started on this path of driving a significant cultural shift and mindset shift, as well as the actual technological shift in the infrastructure space, with cloud as one of the core anchor points within that.

Gardner: When you decided that cloud was the right model to gain this agility, what were some of the problems that you faced in terms of getting there?

Not a surprise

Spurling: When you talk about cloud, you have to define what cloud is. We recognize that cloud is almost like a progression of where we've been going within IT. It is not like it is a surprise.

We've been trying to figure out how to enable more self-service. We've been trying to figure out how to drive greater automation. We've been trying to figure out how to utilize those ubiquitous network access points, the ubiquitous services, external or internal of the company, but in a more standardized and consolidated fashion.

It wasn't so much that we were surprised and said, "Oh, we need to go cloud." It was more along the lines of recognizing that we needed to double down our efforts on those key tenets within cloud. For T-Mobile, those key tenets really were how we drive greater standardization and consolidation to enable greater automation, and then provide self-service capabilities to our customers.

Gardner: Were there particular types or sets of applications that you identified as being the first and foremost to go into this new model?

Spurling: That's a great question. A lot of people look at the applications, as either an application play or an infrastructure play, because of the ecosystem that existed when the cloud ecosystem was kind of birthing, a year-and-a-half ago, two years ago. We started more on the infrastructure side. So we looked at it and said, "How do we enable the application growth that you are talking about? How do we enable that from an infrastructure perspective?"

And we saw that we needed to focus more on the infrastructure side and enable our partners within our IT teams -- our development partners, our application support partners, etc. -- to be able to transform the application stacks to be more cloud-capable and cloud-aware.

We started giving them the self-service capability on the infrastructure side, started on that infrastructure-as-a-service (IaaS) type capability, and then expanded into the platform-as-a-service (PaaS) capability across our database, application, and presentation layers.

Gardner: The good news with cloud is that you do away with manual processes and you have self-service and automation. The bad news is that you have self-service and automation, and they can get very complex and unwieldy, and like with virtual machines (VMs), sometimes there is a sprawl issue. How did you go about this in such a way that you didn’t suffer in terms of these new automation capabilities?

Spurling: I'm going to break it into two parts. Look at the complexity of an IT organization today, especially for a company of T-Mobile's size. T-Mobile has 46,000 employees and around 43 million customers. It's not a small entity. The complexity that we have in the IT space mirrors that large complexity that we have in the business space.

Tough choices

We recognized on the infrastructure side, as well as in the application, test and support sides, that we cannot automate everything. We had to really drive heavy consolidation and standardization. We had to make some tough choices about the stuff that we were -- for lack of a better term -- going to pare off our infrastructure tree: different operating systems, different hardware platforms, and data centers that we were going to shut down.

We had to drive that heavy rationalization across all of the towers within our IT space, in order to enable the automation you talked about, without creating a significant amount of complexity.

On the sprawl question though, we made a conscious decision that we were going to allow or permit some level of sprawl, because of the business agility that was gained.

When you look at server sprawl, there are concerns around licensing, compute utilization, and stranding resources or assets. There are a lot of concerns around sprawl, but when you look at how much business benefit we got from enabling that agility -- that speed to deliver and speed to market -- the minimal amount of sprawl that was incurred was worth it from a business perspective.

We still try to manage it. We still make sure that we're utilizing our compute storage data centers, etc., as efficiently as possible, but we've almost back-burnered the sprawl issue in favor of enabling business.

Gardner: So with multiple platforms -- Windows, Linux, AIX, Unix -- and multiple data centers across large geographies, how can you do that without a larger staff? Do you find the centralization possible or is it really pie in the sky?

Spurling: It’s a bit of both. When you look at how much work there is to enable an automation solution, you almost have to be -- and my team hates it when I use the term -- ambidextrous. On one hand, you have to continue to deliver for your customers, but you need to prioritize what you are doing in that maintenance space and shave off a bit to invest in the innovation space.

You're going to have to make some capital investments, and maybe some resource investments as well, to drive that innovation the next step forward. But you almost have to do it within the space that you are coexisting in that maintains and innovates at the same time, because you can't drop one in favor of the other.

We did have to make some tradeoffs on the maintenance side, in order to take some qualified and some bright resources that we are excited about in our burgeoning cloud future, and then invest those resources to continue driving us forward in the technological and also cultural space. We made a significant cultural change too.

Gardner: That was going to be my next question. When it comes to making these transitions in technology, platform, and approach, I often hear companies say they have a lagging cultural shift as well. What did that involve in terms of your internal IT department making the shift to more of a service bureau, supporting your business like a business within a business?

Buggy whips

Spurling: A lot of times when you talk about evolution in either business context or kind of an academic context, you hear the story about the buggy whip. The buggy whip, back in the day, was something that everybody knew. About 125 years ago, everybody probably knew someone who made buggy whips or who sold buggy whips. Today, no one knows anybody who makes or sells buggy whips.

The buggy whip industry went away, but a brand-new industry emerged in the automobile space. In the same context, the old IT way of manually building servers, provisioning storage, and loading applications may be going away, but there is a brand-new environment that's been created in a higher-value space.

As to the cultural shift you talked about, we had to make significant investments in our leadership to be able to help set a vision, show our employees where that vision intersected with their personal careers and how they continue to move on.

Then, you lead and help them to do that kind of emotional change. I'm not a server builder anymore. I'm now a consultant with the business on delivering a value, I'm now an automation engineer, or I'm now delivering future value and looking at new products that we can drive further automation into. That cultural change is ongoing, and it’s certainly not done.

Gardner: And given that this transition and transformation is fairly broad in terms of its impact, you don’t just buy this out of a box with professional services. How did the combination of people, process, technology, and outside knowledge come together?

Spurling: When we started down the path, we had a lot of people in our teams who were really excited about making IT better. T-Mobile is full of people who are dedicated and excited about making T-Mobile the best wireless company out there. They're starting to change the conversation to make T-Mobile the best company that is enabling people to get access to the Internet, to their friends, to data, etc.

So the people were excited to jump on, but we still had a knowledge gap. We knew that, from a leadership perspective, we weren’t going to get the time to market that we wanted, by training our resources, helping them learn and make mistakes. We had to rely on professional services. So we partnered with HP very heavily to drive greater, instant-on services in our cloud solution.

On the technology side, we have everybody under the sun from a tooling perspective, but we do have a significant investment in HP software. We made a decision to move forward with the HP Cloud Suite -- pieces like HP Operations Orchestration (HPOO) and Cloud Service Automation (CSA) -- building out those platforms to be the overarching cloud solution that, for lack of a better term, created that federation of loosely coupled systems that enabled cloud delivery.

With those tools, with HP professional services, and with our own internal team members, we created a tactical team that went out there and "attacked cloud," delivered that, and continues to deliver that now.

Paybacks

Gardner: Before we close out, and it might be too early in your journey to measure this, but are there any paybacks? Can you look at results, either business, technological, or financial from going to a cloud model, provisioning with that automation, advancing the technology, making those cultural hurdles? What do you get for it?

Spurling: I could talk for hours on this one question. When you break out all of the advances that we've made internally and all the business benefits that have been realized, you can break them into so many different categories, in green-dollar and blue-dollar saves, in resource saves, etc. I’ll highlight a few.

When we look at the cloud opportunity and the agility that has been gained -- the ability to deliver things in an almost immediate fashion -- one of the byproducts that we may not exactly have intended was that our internal customers had in the past demanded a lot of complexity, a lot of significant, specific systems.

When we said, you can get that significant system, whatever it is, in a couple of weeks or you can get this cloud solution that delivers 95 percent of what you ask in a couple of hours, almost always those things that we thought were hard requirements melted away. The customer said, "You know what, I'm okay with this 95-percent deal because it gets me to my business objective faster."

Though we as IT thought you had to have that complexity, we're realizing now that that complexity may not have been required all along, because we are able to deliver so quickly. The byproduct of that is that we're seeing massive amounts of standardization that we could never have thought would organically be possible.

From an agility perspective, there's time to market. We had a significant launch with the iPhone, a big event in T-Mobile’s history, probably one of the largest launches that we've had. That required a significant amount of investment in our back-end systems because of the load that was put on our activation and payment systems.

Because of the investments we made in standardization and automation -- our cloud portfolio -- we were able to build out that capacity in record time, in days versus the weeks or months it would have taken two years previously. We were able to support our business with very little lead time, and the results were very impressive for us as a business. So those two areas -- standardization and consolidation, and the rapid ability to deliver on business objectives -- are the two key takeaways.

Gardner: Daniel, let’s close out on the future. When you look to unforeseen events in your business -- it could be mergers, acquisitions, changes in the market, new products, new applications -- do you feel that the investments you’ve made in cloud also put you in a position to be able to move rapidly? What future direction do you have in mind for your cloud trajectory?

Spurling: As I said in the beginning, we're just starting with cloud. Actually, that’s not fair to say -- we are just continuing with cloud. We've done it in the past. We've gone from mainframes to distributed computing.

Just one step

We went from application hosting, through the Internet craze, into software as a service (SaaS), and we now see PaaS external to our internal organizations. We're seeing software-defined everything starting to have a role. And there is a really interesting play that says there is no end. Cloud is just one step in continuing to evolve IT to be more of a business partner.

That's really how we are looking at it. We're making great strides in that space. You talked about new applications, business mergers, etc. In every single area, we're setting ourselves up to be closer to the business, to move that self-service capability. I'm not just talking about a webpage; I'm talking about being able to consume an IT service as a business leader in a simple way. We're moving that closer and closer to the business, and we are being less and less of a gatekeeper for technology, which is super-exciting for us to see in the organization.

For us specifically, we're recognizing that the investments we made in our PaaS plays, as well as test automation and some of the dev platforms, are starting to pay off, in that we're developing cloud-aware applications that are now scalable in a way that we've never seen before, without massive human intervention.

So we're able to tell our business, "Go ahead and have a great marketing idea, and let’s move it forward. Let’s try that thing out. If it doesn't work, it’s not going to hurt IT. It's not going to take 18 months to deliver that." We're seeing IT able to respond about as fast as the business wants to go.

We are not there yet today. It’s a continuing journey, but that’s our trajectory in the next 6 to 12 months, and then who knows what’s going to happen, and we are excited to see.

Gardner: Well, great, I'm afraid we have to leave it there. We've been learning about how wireless services provider T-Mobile US, Inc. improved how it delivers cloud, data, and application services to its enterprise customers, and we've seen how T-Mobile walked back the use of manual cloud provisioning in order to move to a more advanced and automated approach that has delivered some very impressive results.

So join me in thanking our guest, Daniel Spurling, Director of IT Infrastructure at T-Mobile US. Thanks so much.

Spurling: Thanks, Dana. It’s my pleasure.

Gardner: I'd like to thank our audience as well for joining us for this special HP Discover Performance podcast coming to you directly from the HP Discover 2013 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a major telecom company has improved its IT performance to deliver better experiences and payoffs for its businesses and end users alike. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, May 09, 2013

Thomas Duryea's Journey to Cloud Part 2: Helping Leading Adopters Successfully Solve Cloud Risks

Transcript of a BriefingsDirect discussion on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest podcast discussion centers on how a leading Australian IT services provider, Thomas Duryea Consulting, has made a successful journey to cloud computing.

We'll learn how a cloud-of-clouds approach provides new IT services for Thomas Duryea's many Asia-Pacific region customers. Our discussion today continues a three-part series on how Thomas Duryea, or TD, designed, built and commercialized an adaptive cloud infrastructure.

The first part of our series addressed the rationale and business opportunity for TD's cloud-services portfolio, which is built on VMware software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

This second installment focuses on how a variety of risks associated with cloud adoption and cloud use have been identified and managed by actual users of cloud services.

Learn more about how adopters of cloud computing have effectively reduced the risks of implementing cloud models. Here to share the story on this journey, we're joined once again by Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia.

Welcome back, Adam.

Adam Beavis: Thank you, Dana. Pleasure to be here.

Gardner: Adam, we've been talking about cloud computing for years now, and I think it's pretty well established that we can do cloud computing quite well technically. The question that many organizations keep coming back with is whether they should do cloud computing. If there are certain risks, how do they know which risks are important? How do they get through that? What are you learning so far at TD about risk and how your customers face it?

Beavis: People are becoming more comfortable with the cloud concept as we see cloud becoming more mainstream, but we're seeing two sides to the risks. One is the technical risks, how the applications actually run in the cloud.

Moving off-site

What we're also seeing -- more at a business level -- are concerns like privacy, security, and maintaining service levels. We're seeing that pop up more and more, where the technical validation of the solution gets signed off from the technical team, but then the concerns begin to move up to board level.

We're seeing intense interest in the availability of the data. How do they control that, now that it's been handed off to a service provider? We're starting to see some of those risks coming more and more from the business side.

Gardner: I've categorized some of these risks over the past few years, and I've put them into four basic buckets. One is the legal side, where there are licenses and service-level agreements (SLAs), issues of ownership, and permissions.

The second would be longevity. That is to say, will the service provider be there for the long term? Will they be a fly-by-the-seat-of-the-pants organization? Are they going to get bought and maybe merged into something else? Those concerns.

The third bucket I put them in is complexity, and that has to do with the actual software, the technology, and the infrastructure. Is it mature? If it's open source, is there a risk for forking? Is there a risk about who owns that software and is that stable?

And then last, the long-term concern, which always comes back, is portability. You mentioned that about the data and the applications. We're thinking now, as we move toward more software-defined data centers, that portability would become less of an issue, but it's still top of mind for many of the people I speak with.

So let's go through these, Adam. Let's start with that legal concern. Do you have any organizations that you can reflect on and say, here is how they did it, here is how they have figured out how to manage these license and control of the IP risks?

Beavis: The legal one is interesting. As a case study, there's a not-for-profit organization for which we were doing some initial assessment work, where we validated the technical risk and evaluated how we were going to access the data once the information was in a cloud. We went through that process, and that went fine, but obviously it then went up to the legal team.

One of the big things that the legal team was concerned about was what the service level agreement was going to be, and how they could capture that in a contract. Obviously, we have standard SLAs, and being a smaller provider, we're flexible with some of those service levels to meet their needs.

But the one that they really started to get concerned about was data availability ... if something were to go wrong with the organization. It probably jumps into longevity a little bit there. What if something went wrong and the organization vanished overnight? What would happen with their data?

Escrow clause

That's where we see legal teams getting involved and starting to put in things like an escrow clause, similar to what we've had with software as a service (SaaS) for a long time. We're starting to see organizations' legal firms focus on doing this not just for SaaS, but for infrastructure as a service (IaaS) as well. It provides a way for user organizations to access their data if a provider organization like TD were to go down.

So that's one that we're seeing at the legal level. Around the terms and conditions, once again being a small service provider, we have a little more flexibility in what we can provide to the organizations on those.

Once our legal team sits down and agrees on what they're looking for and what we can do for them, we're able to make changes. With larger organizations, where SLAs are often set in stone, there's no flexibility about making modifications to those contracts to suit the customer.

Gardner: Let's pause here for a second and learn more about TD for those listeners who might be new to our series. Tell us about your organization, how big you are, and who your customers are, and then we'll get back into some of these risks issues and how they have been managed.

Beavis: Traditionally, we came from a system-integrator background, based on the east coast of Australia -- Melbourne and Sydney. The organization has been around for 12 years and had a huge amount of success in that infrastructure services arena, initially with VMware.

We have since heavily expanded into the enterprise information systems area. We still have a large focus on infrastructure and, more recently, cloud. We've had a lot of success with the cloud, mainly because we can combine it with managed services.

We go to market with cloud. It's not just a platform where people come and dump data or an application. A lot of the customers that come into our cloud have some sort of managed service on top of that, and that's where we're starting to have a lot of success.

As we spoke about in part one, our customers drove us to start building a cloud platform. They can see the benefits of cloud, but they also wanted to ensure that for the cloud they were moving to, they had an organization that could support them beyond the infrastructure.

That might be looking after their operating systems, looking after some of their applications such as Citrix, etc. that we specialize in, looking after their Microsoft Exchange servers, once they move it to the cloud and then attaching those applications. That's where we are. That's the cloud at the moment.

Gardner: Just quickly revisiting those legal issues, are you finding that this requires collaboration and flexibility from both parties -- finding a road that assuages risks for one party but protects the other? Is this a back-and-forth activity? It surely requires some agility, but also some openness. Tell me about the culture at TD that allows you to do that well.

Personality types

Beavis: It does, because we're dealing with different personality types. The technical teams understand cloud and some love it and push for it. But once you get up to that corporate board level, the business level, some of the people up there may not understand cloud -- and might perceive it as more of a risk.

Once again, that's where that flexibility of a company like TD comes in. Our culture has always been "customers first," and we build the business around the longevity of their licenses. That's one of the core, underlying values of TD.

We make sure that we work with customers, so they are comfortable. If someone in the business at that level isn't happy, and we think it might have been the contract, we'll work with them. Our legal team will work with them to make sure we can iron that out, so that when they move across to cloud, everybody is comfortable with what the terms and conditions are.

Gardner: Moving toward this issue of longevity -- I suppose stability is another way to look at it -- is there something about the platform and the industry-standard decisions that you've made that helps your customers feel more comfortable? Do they see less risk because, even though your organization is one organization, the infrastructure is broader, and there's a stability that comes to the table?

Beavis: Definitely. Partnering with VMware was one of our core decisions, because our platform is end-to-end standard VMware everywhere. That really gives us an advantage in addressing that risk when organizations ask what happens if our company is no longer around, or if they're not happy with the service.

The great thing is that within our environment -- and it's one part of VMware's vision -- you can pick up those applications and move them to another VMware cloud provider. Thank heaven, we haven't had that happen, and we intend for it not to happen. But organizations understand that, if something were to go wrong, they could move to another service provider without having to re-architect those applications or make any major changes. This is one way we're getting around that longevity risk discussion.

Gardner: Do any examples come to mind of organizations that have come to you with that sort of question? Is there an example we can provide of how they reduced the risk in their own minds, once they understood the extensibility of the standard platform?

Beavis: Once again, it was a not-for-profit organization recently where that happened. We documented the platform. We then advised them on escrow organizations, so that if something were to happen to TD, they would have an end-to-end process for getting their data back and restored on another cloud provider -- all running on common VMware infrastructure.

That made them more comfortable with what we were offering, the fact that there was a way out and that their data would not disappear. As I said, it's something that SaaS organizations have been doing for a long time, and we're only just starting to see it more and more now when it comes to IaaS and cloud hosting.

Gardner: Now the converse of that would be some of your customers who have been dabbling in cloud infrastructure, perhaps with open-source frameworks of some kind, or integrating their own mix of open-source and licensed software. What have you found when it comes to their sense of risk, and how does that compare to what we just described in terms of having stability and longevity?

More comfortable

Beavis: Especially in Australia, we probably have 85 percent to 90 percent of organizations with some sort of VMware in their data center. They no doubt seem more comfortable gravitating to providers running familiar platforms, with teams familiar with VMware. They're more comfortable that we, as a service provider, are running a platform that they're used to.

We'll probably talk about the hybrid cloud a bit later on, but that ability for them to still maintain control in a familiar environment, while running some applications across in the TD cloud, is something that is becoming quite welcome within organizations. So there's no doubt that choosing a common platform that they're used to working on is giving them the confidence to start moving to the cloud.

Gardner: Do you have any examples of organizations that may have been concerned about platforms or code forking -- or of not having control of the maturity around the platform? Are there any real-life situations where the choice had to be made, weighing the pros and cons, but then coming down on the side of the established and understood platform?

Beavis: Some organizations aren't promoting what their platform is -- it could be built on OpenStack or other platforms -- and we're not quite sure what they're running underneath.

We've had some customers say that some service providers aren’t revealing exactly what their platform is, and that was a concern to them. So it's not directed to any other platforms, but there's no doubt that some customers still want to understand what the underlying infrastructure is, and I think that will remain for quite a while.

At the moment, as they are moving into cloud for the first time, people do want to know what that platform underneath is.

It also comes down to knowing where the data is going to sit as well. That's probably the big one we’re seeing more and more. That's been a bit of a surprise to me, the concerns people certainly have around things like data sovereignty and the Patriot Act. People are quite concerned about that, mainly because their legal teams are dictating to them where the data must reside. That can be anything from being state based or country based, where the data cannot leave the region that's been specified.

Gardner: I suppose this is a good segue into this notion of how to make your data, applications, and the configuration metadata portable across different organizations, based on some kind of a standard or definition. How does that work? What are the ways in which organizations are asking for and getting risk reduction around this concept of portability?

Beavis: Once again, it's about having a common way that the data can move across. The basics come into play with that hybrid-cloud model initially, like how people get things in and out. One of the things we see more and more is that it's not as simple as people moving legacy applications and things up to the cloud.

To reduce that risk, we're doing a cloud-readiness assessment, where we come in and assess what the organization has, what their environment looks like, and what's happening within the environment, running things like the vCenter Operations tools from VMware to right-size those environments to be ready for the cloud.

Old data

We're seeing a lot of that, because there's no point moving a ton of data out there and putting it on live platforms that are going to cost quite a bit of money if it's two or four years old. We're seeing a lot of solution architects out there right-sizing those environments before they move up.

Gardner: Is there a confluence between portability and what organizations are doing with disaster recovery (DR)? Maybe they're mirroring data and/or infrastructure and applications for purposes of business continuity and then are able to say, "This reduces our risk, because not only do we have better DR and business continuity benefits, but we’re also setting the stage for us to be able to move this where we want, when we want."

They can create a hybrid model, where they can pick and choose on-premises versus a variety of other cloud providers, and even decide on those geographic or compliance issues as to where they physically place the data. That's a big question, but with business continuity as part of this movement toward lower risk, how does that pan out?

Beavis: That's actually one of the biggest movements that we're seeing at the moment. Organizations, when they refresh their infrastructure, don't see the value in refreshing DR on-premises. They let the first step to cloud be, "Let's move the DR out to the cloud, and replicate from on-premises out into our cloud."

Then, as you said, we have the advantage of starting to do things like IaaS testing, understanding how those applications are going to work in the cloud, tweaking them, getting the performance right, and doing that with little risk to the business. Obviously, the production machine will continue to run on-premises while we're testing snapshots.

It's a good way to get a live snapshot of that environment and how it's going to perform in the cloud, how your users are going to access it, bandwidth, and all that type of thing you need to check before starting to run it up. DR is still the number one use case that we're seeing people move to the cloud.

Gardner: As we go through each of these risks, and I hear you relating how your customers and TD, your own organization, have reacted to them, it seems to me that as we move toward this software-defined data center, where we can abstract away from the physical hardware and the physical facilities and move things around in functional blocks, this really solves a lot of these risk issues.

You can manage your legal, your SLAs, and your licenses better when you know that you can pick and choose the location. That longevity issue is solved, when you know you can move the entire block, even if it's under escrow, or whatever. Complexity and fear about forking or immaturity of the infrastructure itself can be mitigated, when you know that you can pick and choose, and that it's highly portable.

It's a roundabout way of getting to the point about this whole notion of the software-defined data center. Is that, at heart, a risk reduction and a future direction that will mitigate a lot of the issues that are holding people back from adopting cloud more aggressively?

Beavis: From a service provider's perspective, it certainly does. The single-pane-of-glass management you have now, where you can control everything from your network to the compute and the storage, certainly reduces risk, compared with needing several tools to do that.

Backup integration

And the other area where the vendors are starting to work together is the integration of things like backup and, as we spoke about earlier, DR. Tools now sit natively within that VMware software-defined data center stack, written to the vSphere APIs, rather than our trying to retrofit products to achieve things like file-level backups within a virtual data center, within vCloud. Pretty much every day you wake up, there's a new tool supported within that stack.

From a service provider's perspective, it's really reducing the risk and time to market for new offerings, but from a customer's perspective, it's really getting them the experience they're used to. Having the same experience on-premises and in the TD cloud makes it a lot easier for them to start to adopt and consume the cloud.

Gardner: One last chance, Adam, for any examples. Are there any other companies that you would like to bring up that illustrate some of these risk-mitigation approaches that we've been discussing?

Beavis: Another one was a medical organization. It goes back to what we were saying earlier. They had to get a DR project up and running, so they moved that piece to the cloud, and were unsure whether they would ever move any of their production data out. But six months after they began running DR in the cloud, we started to provide some extra capacity.

The next thing was that they had a new project, putting in a new portal for e-learning. They decided for the first time, "We've got the capacity sitting over in the cloud. Let's start to use that." So they've started to migrate all of their test and dev environments out there, because in their minds they had reduced the risk around uptime in the cloud due to the success they had with DR. They had all the statistics and reporting back on the stability of that environment.

Then, they became comfortable moving the next segment, which was the test and dev environment. And if all goes well, that application will run out of the cloud and will be their first application out there.

That was a company that was very risk averse, and the DR project took a lot of getting across the line in the first place. We'll probably see that, in six to eight months, they're running some of their core applications out of the cloud.

We'll start to see that more and more. The customers' roadmap to the cloud will move from DR, to maybe some test and dev, and then to new applications. Then, as the refresh of their on-premises infrastructure comes up, they'll be in a situation where they've completed the testing for those applications and feel comfortable moving them out to the cloud.

Gardner: That really sounds like an approach to mitigating risk when it comes to the cloud: gradual adoption -- learn, test, and then reapply.

Beavis: It is, and one of the big advantages we have at TD is the support around a lot of those applications, as people move out -- how Citrix is going to work in the cloud, how Microsoft Exchange is going to work in the cloud, and how their other applications will work. We have the team here that can really make sure we architect or build those apps correctly as they start to move them out.

So a lot of customers are comfortable having a full-service provider, rather than just a platform for them to throw everything across to.

Gardner: Great. We've been discussing how a leading Australian IT service provider, Thomas Duryea Consulting, has made a successful journey to cloud computing. This sponsored second installment, on how a variety of risks associated with cloud adoption have been identified and managed, comes via a three-part series on how TD designed, built, and commercialized a vast cloud infrastructure on VMware.

We've seen how, through a series of use-case scenarios, a list of risks has been managed. And we've also developed a sense of how a risk roadmap can be balanced, starting with disaster recovery and then learning from there. I thought that was a really interesting new insight for the market.

So look for the third and final chapter in our series soon, and we'll then explore the paybacks and future benefits that a cloud ecosystem provides for businesses. We'll actually examine the economics that compel cloud adoption.

With that, I’d like to thank our guest Adam Beavis, the General Manager of Cloud Services at Thomas Duryea Consulting in Melbourne, Australia. This was great, Adam. Thanks so much.

Beavis: Absolute pleasure.

Gardner: And of course, I would like to thank you, our audience, for joining as well. This is Dana Gardner, Principal Analyst at Interarbor Solutions.

Thanks again for listening, and don't forget to come back next time for the next BriefingsDirect podcast discussion.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in: