
Tuesday, February 17, 2015

California Natural Resources Agency Gains Agility Through Software-Defined Data Center Strategy

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re in San Francisco to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the California Natural Resources Agency (CNRA) in Sacramento. They have a large purview, overseeing some 25 different agencies. They've set up an SDDC and are deep into the process of maturing its value and utility.

To learn more about how the CNRA gains agility from a SDDC strategy, we welcome Tony Morshed, Chief Technology Officer for the California Resources Data Center in the California Department of Water Resources. Welcome, Tony.

Tony Morshed: Thanks.

Gardner: We’re also here with Michael Hom, Data Center Chief in the IT Infrastructure Services Branch for the California Department of Water Resources. Welcome, Michael.

Michael Hom: Good morning.

Gardner: First, gentlemen, help us understand a little bit about the size of your organization. This is a large state government department, but you're really a department of departments. Help me understand the breadth of your agency.

Morshed: Our department, Water Resources, consists of 3,500 people and is part of an agency that comprises many departments. The bigger ones are Parks and Rec, Cal Fire, and Fish and Wildlife. In all, there are about 28 agencies and conservancies and 25,000 people onboard.

Gardner: So, to support all these people and all these different agencies, a common infrastructure is important, but it also sounds like you need differentiation and customization for their specific needs. How do you accomplish both the goal of a common infrastructure for efficiency and still meet the requirements of all those different groups?

Morshed: When we started our consolidation effort, we decided to transform ourselves more to an ISP-style setup. We know that most departments have their own IT shops, and you know there is still that trust thing. So we just built the common infrastructure. We let them share the infrastructure, but they have their own security posture. We segregate all their traffic, so each department can still feel like they're autonomous, but yet we all share the infrastructure, in which case we all share the savings.
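
To make that model concrete, here is a minimal sketch -- an illustration only, not CNRA's actual tooling, with hypothetical department names and policy fields -- of how a shared platform can give each tenant its own network segment and security posture while drawing on one resource pool:

    # A minimal sketch, not CNRA's actual tooling: every department shares
    # the same hardware pool but gets its own network segment and firewall
    # posture. Department names and policy fields are hypothetical.
    DEPARTMENTS = {
        "water-resources": {"vlan": 101, "allow_inbound": ["https"]},
        "parks-and-rec":   {"vlan": 102, "allow_inbound": ["https", "ssh"]},
        "cal-fire":        {"vlan": 103, "allow_inbound": ["https"]},
    }

    def tenant_policy(name):
        """Return the isolation policy applied when a tenant workload lands."""
        dept = DEPARTMENTS[name]
        return {
            "network_segment": "vlan-%d" % dept["vlan"],  # segregated traffic
            "firewall_rules": dept["allow_inbound"],      # per-tenant posture
            "resource_pool": "shared-cluster",            # shared infrastructure
        }

    print(tenant_policy("cal-fire"))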

Mandate to consolidate

Hom: With that, we also had a mandate to consolidate. First, the State of California is really about cost savings. Each of the 25-plus organizations mostly had its own IT shop. By consolidating onto infrastructure as a service (IaaS) in our multi-tenancy data center, they're able to reap the cost savings on infrastructure, but also concentrate on each department's specific needs and applications.

Gardner: It sounds as if you have a private-cloud approach with multi-tenancy, and you're trying to get that elasticity and efficiency going. It also sounds like you're well on the way to an SDDC, but that means different things to different people. I'd like to get your take on what SDDC means.

Also, are you exploring software-defined storage, software-defined networking, or more of the workload support for the servers? How does it shape up in your mind and in your particular implementation?

Morshed: When the software-defined stuff came out, for me, one of the big things was disaster recovery (DR). If I could stretch my data center into another facility, DR becomes a non-issue, because those workloads can shift between sites without any trouble with automation.

That was the next piece for us -- automation. We realize that we’re part-way there, but to get all the way there, we need to do fuller automation. This means that we need to quit tinkering with the network and storage every time we want to do something new.

Those were big driving factors for going to SDDC. To us, it means that we’re obfuscating the hardware. The hardware’s there. It’s just running. We’re working and tweaking everything at the software layer -- so we could be a lot more agile.

Gardner: Michael, SDDC, how do you define that and to what degree are you into that journey?

Separating the physical

Hom: For the SDDC we really wanted to provide a logical data center to each of our organizations. We wanted to separate the physical, which allows our folks to support more of a logical infrastructure, where they still have autonomy, but the physical layer is basically one and the same.

Today, from a functional point of view, they get what they've had before, but without the overhead of physical support. We've used VMware vSphere and the vCloud Suite to provide that software-defined computing. Right now, we're embarking on software-defined networking, using VMware NSX and third-party vendors to support that.

We’re looking to use automation soon to help us decrease overhead, and pass those savings on to each of the organizations.

Gardner: Given that you have a common set of infrastructure and, I imagine, a lot of common data, are you well into VMware Virtual SAN adoption for software-defined storage as well? How does that shape up?

Morshed: For Virtual SAN, we're looking at use cases right now, and we see some. We're not quite there yet, so we're focusing on software-defined networking and automation first. Virtual SAN will probably come later; we're still evaluating and determining what our needs are in that area.

Gardner: It's complicated, right? There are a lot of interdependencies. Certain things happen that ripple across others, and so it's a crawl-walk-run progression. Yet the benefits come from a whole greater than the sum of the parts, if I could use a couple of clichés. It is an interesting time in the business.

What are some of the challenges you have in terms of getting closer to a full SDDC? Are these technology, culture, process, or all of the above?

Morshed: At first it was technology, but the culture and the organizational mindset are the bigger challenge. You can find solutions and work through technology. We're IT people, and we’re used to the technology problems. For me, the greater organizational problems -- and the structure of your processes -- become the harder things, because we’re not as used to dealing with those things.

Gardner: Now, Michael, part of the adoption of a complex, long-term IT journey is to show results early and get buy-in. Are there instances where you've seen that? Maybe it's the DR, where you have a sense of better dependability on your resources? Any way to describe what you've done early on that has led to a greater emphasis on adopting more aggressively?

Better service levels

Hom: Definitely. One of the key things with early wins is providing a better service level for provisioning, and that's something that everybody has been struggling with. With the cloud infrastructure, we've been able to provision within days, if not a day. That typically beats most of the service levels that each of the organizations had. So that was an early win.

It's things like that, where we decrease overhead and make IT more accessible for the business. That makes it a win and starts the ball rolling on other features, such as DR and greater capacity -- things that would be tough for each individual organization to do on its own.

Gardner: Of course, when you do well in a state agency, it's apparent, right? You're delivering services to the public, and California is the largest US state, with a bureaucracy probably the size of many countries'. Is there something different about the public sector in terms of accountability or responsiveness? How do your requirements as a public organization differ from those of private enterprises?

Morshed: We do have a difference. Our procurement is much harder. Getting people is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them.

Gardner: So it’s doing more with less over and over again.

Morshed: Constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without this. We would just be a sinking ship.

Gardner: So the whole philosophy, Michael, of SDDC and virtualization -- doing more with less, automating, boiling out the manual processes, and moving to a more real-time, responsive, technology-driven infrastructure -- makes total sense for your organization?

Hom: Definitely. To go along with what Tony says, we really don't have much overhead when we need to respond to new projects. When there's an uptick in activity, there's no way to bring on more resources. So we need to build that into our infrastructure, to get that dynamic bandwidth without adding personnel.

Gardner: We're here, of course, at VMworld 2014, and there's a lot of news going on. Is anything in particular piquing your interest, perhaps the OpenStack support or the EVO hyper-converged infrastructure? After hearing the news, what is now on your agenda to reach those goals as you've described them?

EVO looks pretty nice

Morshed: There are a couple of things. EVO looks pretty nice. I was out on the floor looking at it yesterday and talking with the CIO, and I see it as something that we might be able to use for some of our outlying offices, where we have around 100 to 150 people. We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).

The other piece is the recent acquisition of CloudVolumes, and looking at how we can use that to leverage our VDI infrastructure. We're using another product in that space right now, but now that CloudVolumes is part of VMware, it's more interesting, because the chances of all the software being upgraded and updated at the same time, and interoperating, are greater if it's a VMware product.

For us, it's been a real struggle to make sure that all the products we use interact and that, when there's an upgrade, everything upgrades at the same time. To me, those are the two biggest things I'm getting out of the announcements.

Gardner: Right, and it's like building a virtuous cycle of adoption benefits, because when you build the SDDC on virtualization, that provides a private-cloud benefit. Then you can start realizing end-user computing benefits like VDI. So there's really this snowball effect.

Is that something that you’ve been able to demonstrate? Do you have any metrics of success that you can point to?

Morshed: We do have some tangible benefits. We have reduced our CAPEX by around 40 percent and our OPEX by around 32 percent. I don't have the exact numbers, but we have deployed VDI in the Department of Water Resources and have already virtualized about 600 to 800 desktops. Not only is it helping us save costs there; it's also used as a strategy for remote access and to help protect our server infrastructure by using VDI for admins.

So there are those tangible things that you can reach out and measure and those intangible things, where it’s allowing us to do something easier and more flexible. That, for me, is the bigger win. The business could come up with more dollars, but to be able to be more agile and more flexible is where it really pays off.

Gardner: So we get productivity, we've got DR, which reduces your risk, and we've got some hard savings and economics. It's pretty compelling. Michael, any thoughts about how those fit together, and which ones are more important to you?

More flexible

Hom: Definitely. This allows us to be more flexible, as Tony said, and there are things we're trying to do that we would never have imagined without an SDDC. It has brought increased security, greater capacity, and new capabilities to our business.

Gardner: How about VMware specifically? Is there some differentiator in how they produce these products that has allowed you to follow this journey? Is this more of a partnership than a procurement relationship? It sounds like the track that VMware takes in its strategy very much aligns with yours.

Morshed: It’s very much a partnership. In fact, we basically only want to work with business partners. We don’t want to work with vendors, because we don’t need someone to sell us something and walk away. VMware has been hand-in-hand with us for this whole journey.

When we look at other products for the mix, we look for deep partners of VMware, because we know virtual machines (VMs) are core. So when we look at storage partners and networking partners, we make sure that those partners are partners of VMware.

One of the things that we have to watch is interoperability once everything has been virtualized. Everything has to connect, and it's not a single stack. So if one thing gets upgraded, we need to make sure that everything across the stack can accept that upgrade. Otherwise, we lose the ability to take advantage of the upgrade until everybody else catches up.

Early on, we were in that position and we’re doing everything we can to remove ourselves from that position.
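
The lockstep-upgrade problem Morshed describes can be pictured as a simple compatibility check: an upgrade is safe only when every component in the stack supports the new core version. This is a toy sketch with invented version data, not a real VMware interoperability matrix:

    # A toy sketch with invented version data, not a real interoperability
    # matrix: the upgrade proceeds only if every stack component supports
    # the new core version; one lagging product blocks everyone.
    SUPPORTED = {
        "storage-plugin": {"5.1", "5.5"},
        "network-plugin": {"5.1", "5.5"},
        "backup-agent":   {"5.1"},   # the component everyone waits on
    }

    def can_upgrade(core_version):
        blockers = [name for name, versions in SUPPORTED.items()
                    if core_version not in versions]
        for name in blockers:
            print("blocked by %s: no support for %s" % (name, core_version))
        return not blockers

    print(can_upgrade("5.5"))   # False until backup-agent catches up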

Gardner: Michael, any thoughts on the nature of the VMware relationship that you could point to, in terms of an approach that others might want to consider?

Hom: Definitely. We consider VMware a strategic partner. A couple of things illustrate that: we've been involved with VMware's Excellware and Velocity programs, and that's been two-fold. On the Velocity side, we have mocked up a fully working SDDC with NSX, with virtualized automation, operations, and business management as a stack.

Gardner: One of the things we've heard here at VMworld is: be bold, be brave, be a little bit aggressive. Go out there and do these things. Any thoughts for other organizations that are just dipping their toes into the water? Is the risk higher than the reward in being bold and brave and getting in early? Or is it something that allows you to be a differentiator and do better in your own environment, whether in the public or private sector?

Set in stone

Morshed: The first thing is always to question what you've got that's supposedly set in stone, because most of it is not set in stone. We've all heard a lot of things that you can't do. You can't virtualize Oracle -- but you can. You can't do this, you can't do that, you can't put the network on top of the storage. That's all stuff that you actually can do.

You have to really look at it, peel it back, make sure that "you can't" is an actual thing, and then figure out how to get around it. The way I see it, as the world turns, things morph, and if you don't move into this virtualization space, you're going to be left behind. You're going to be the guy making buggy whips. There are no buggy whips around anymore. There's no use for them.

We're all being asked to do so much more with the same resources or fewer. We're all being pushed to keep up with the demand out there. Technology is just jumping ahead, and this is the only way, on the infrastructure side, to keep up with it.

Gardner: Michael, any other thoughts in terms of 20/20 hindsight on your experience and why being aggressive and being bold has paid off?

Hom: Virtualization is definitely up and running in state organizations. It used to be something that we might do, or might use as a toolset, but from looking at VMworld this week, virtualization is the industry standard.

If you don't take it on, then you really won't be able to respond to business needs. What happens then is that the official IT organization becomes obsolete, ad-hoc IT organizations spring up, and those become the norm. If you want to stay relevant, you have to use every toolset you can to meet the business needs.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been learning how the California Natural Resources Agency in Sacramento has embarked on and benefited from an SDDC strategy. I'd like to thank our guests, Tony Morshed, Chief Technology Officer for the California Natural Resources Data Center in the California Department of Water Resources. Thank you so much, Tony.

Morshed: No problem. It’s been a pleasure.

Gardner: And we've also been joined by Michael Hom, Data Center Chief in the IT Infrastructure Services Branch for the California Department of Water Resources. Thanks so much, Michael.

Hom: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the recent 2014 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT strategy discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Wednesday, December 10, 2014

Creative Solutions in Healthcare Improves Client Services and Saves Money Using VMware vCloud Air

Transcript of a Briefings Direct podcast on how a major healthcare provider is improving internal operations and patient care with a hybrid cloud model.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re here to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center, and virtualization infrastructure management.

Our next innovative case study interview focuses on Creative Solutions in Healthcare in Fort Worth, Texas. We're going to hear how they've adopted cloud, and why cloud is benefiting them as a healthcare organization.

To learn more about Creative Solutions in Healthcare, we're joined by their CIO, Shawn Wiora. Welcome.

Shawn Wiora: Good morning, Dana.

Gardner: Tell us a bit about your organization. It sounds like you've got a big, sprawling healthcare facility group.

Wiora: Yes. Creative Solutions in Healthcare is the largest independent owner and operator of skilled nursing facilities (SNFs), which are nursing homes, in the State of Texas. We also operate assisted-living facilities, and we provide long-term care solutions, primarily in Texas.

Gardner: How many people are we talking about, both in terms of your employees and your patients?

Wiora: We have about 6,000 employees. Many of them are nurses, and many of them are capturing data about our patients and our residents. Our residents are in the thousands and, as a private company, we're able to deliver solutions in the marketplace that are really geared toward lifestyle, care, nutrition, activities, and programs. That's why the company has been so successful -- we have this passionate care about our residents.

Gardner: Of course, healthcare is really gearing up and changing in terms of how it's using IT and leveraging IT, and I suppose you're no different?

Wiora: That's exactly right. Healthcare has been ramping up in terms of IT, not only catching up with the industry but, in some cases, leading at the forefront, especially when it comes to patient care and delivering innovative diagnosis and treatment programs over telemedicine and other types of electronic media.

Gardner: Shawn, tell us a bit about why cloud computing has been appealing to you, given your requirements. What challenges were you trying to solve when you looked at the cloud model?

Going virtual

Wiora: It's an interesting story. About two years ago, the company was 100 percent physical in terms of its server infrastructure. Similar to many other long-term care facilities, we have to deliver new forms of compliance as it relates to HIPAA, the HITECH Act, and the NIST framework.

So if you take all of those, in addition to the new apps being required of the organization and the new types of health exchanges we're involved with, the requirements were escalating dramatically. We started with a physical infrastructure, and we looked at going virtual.

Gardner: Did you begin cloud in a particular part of your organization, perhaps in development, and then expand out, or was this a wholesale change ramp-up? How did you approach it?

Wiora: You're right. It was a wholesale change ramp-up. We took on a big challenge by embarking on an initiative that allowed the company to go from physical to virtual and, at the same time, from premises-based to the cloud. We did that together.

Fortunately, we already had some really good experience with virtualization, but by no means did we have a program deployed across the server infrastructure. So we issued an RFP, and we selected a group of vendors at the top of the pyramid. At the top were Azure, AWS, and VMware's vCloud. We chose Microsoft Azure.

We started a pilot with Azure, and it was really interesting. We're a Microsoft house, and the team chose Azure based on the fact that not only were we a Microsoft house, but we had a number of initiatives that we wanted to move to the cloud, including Microsoft Exchange.

So we started moving Exchange into the cloud with our Azure program. Then we asked Microsoft to issue a document indicating that they would support Exchange, their own software, in Azure, their own cloud. And guess what happened.

We did not get that acknowledgment. Ultimately, they would not indicate that they would support their own software in their own cloud. We were flabbergasted. We just couldn't believe it.

We ended up pulling the plug on that project, on that initiative. We went back to the marketplace and we chose VMware’s vCloud hybrid services, now known as vCloud Air, and we quickly ramped up. That's the reason why this project has been so successful -- the ramp-up.

The team at VMware understood what we were doing in terms of our timeline, our projects, and the applications we were looking to move to the cloud. That's really where they differentiated themselves from Azure and AWS in terms of onboarding, because we did pilots on all those cloud infrastructures. VMware's vCloud Air onboarding team had the best onboarding process of any kind of IT project I've been involved with in the past 20 years.

Had our back

It just really made the IT team at Creative Solutions in Healthcare feel like those guys really had our back. They really cared about what was happening. They knew that we were under the gun, because we had been through that Azure cluster, and it was not even feasible for us to go down the path with our own infrastructure. It ended up being a great partnership.

Gardner: Shawn, tell me to what degree you're hybrid. Do you have an on-premises set of virtualized applications and another set that you've opted to put in the public cloud, vCloud Air? Is this something that you're still sorting out in terms of what goes where? How about the data -- is that also on-premises? And how are you factoring in the hybrid approach?

Wiora: We're very deliberate with our cloud strategy. We started with a pilot of some core applications, got our feet wet in the cloud, and then built on that success. Again, the onboarding that we received in that process was really second to none.

That made the team feel very comfortable with moving other infrastructure. Now we've moved our entire back-office infrastructure -- our accounting, a number of custom apps, provisioning, and supply chain -- into the cloud with vCloud Air.

We're also in a hybrid environment, as you've indicated. We have servers throughout our facilities and servers at headquarters. We have other software-as-a-service (SaaS) providers that we interact with. We're moving data from those providers back into our on-premises environment and then moving it into vCloud Air. There's a lot of hybrid going on right now.

Gardner: So that integration, management, and orchestration, being able to automate that, seems very important to you. You want to be able to set this up, have it run, and then devote your energy to all these new projects?

Wiora: Yes. That's really where the return is to the company, the shareholders, the board, and the management team. That's what IT should be focused on: how do we ultimately deliver solutions that the other business units, and ultimately our patients, can appreciate.

Dana, we're in the long-term care industry and we've been very successful in growing the company based on the passionate, caring model. The IT organization aligns its passion and care towards the patients.

Instead of being wrapped up with servers, virtualization, and all of the other things that VMware is the best at doing, we're outward focused on the business units and the patients.

New product appeal

Gardner: Shawn, we're here at VMworld, and there's been some news and announcements, some new things happening with vCloud Air, particularly the Object Storage beta program and Virtual Private Cloud on Demand.

Are you interested in being a beta customer? Do these new functions for the public and hybrid cloud model appeal to you?

Wiora: Absolutely. We talked about total cost of ownership (TCO), and when you look at ramping up on an on-demand scenario that has been announced, we're absolutely interested. We've indicated our interest. We're going to be moving forward and looking at that.

Who has more data than healthcare? There are some organizations that have a lot of data, but we track what our patients eat, what time they go to sleep, and what they do during the day in terms of activity -- each and every day, across each and every facility, for thousands of patients.

So object storage, yes, that is something that is in our future. We talked earlier about desktop virtualization, and the recent Air launch announcement was also something that we were keenly interested in.

Gardner: So, one last area for adoption. You've talked about the onboarding process, but there's also the end-user absorption of new approaches from IT. How has this gone with your end users?

Have they noticed a change in the type of applications? Has it been something that they didn't notice? What's been that result at that end-user inception point when you made this transition to cloud?

Wiora: It's been game-changing for the company, and it's been game-changing for our patients. Instead of being fearful about approaching IT, the business units come to IT, and they know that we can ramp up applications very quickly.

We just ramped up our maintenance application in a couple of days. In the past, that would have taken months of planning. The business unit laughed. They just looked at IT and said, "You have to be kidding. This is up and running already?"

Advice for others

Gardner: That's a strong testament. How about advice for other organizations that are beginning that RFP process, that are thinking about cloud, looking at the different approaches, the different providers? Any words of wisdom in hindsight that you could offer now that you have been through that process?

Wiora: Absolutely. Who wants to reinvent the wheel? If I'm looking at going to the cloud for the first time or if I am looking at enhancing my hybrid cloud environment, I would suggest you look at TCO.

Look at what your labor costs are. Look at who the A-Team in the industry is for virtualizing. Look at what the roadmaps are, and look at which vendors really don't care what you put in your cloud infrastructure. There are vendors, as we talked about earlier, that really have the ability to approve or disapprove what you put in there.

I'd look at that, but you have to look at TCO and at partnering with an organization that can help you easily ramp up. Then, look at how you want to run your IT organization. If those things make sense to you, then I would suggest you look at vCloud.

Gardner: Well, great. I'm afraid we'll have to leave it there. We've been hearing how Creative Solutions in Healthcare in Fort Worth, Texas has embarked on a cloud journey and leveraged the hybrid cloud quite successfully. So a big thank you to our guest, Shawn Wiora, the CIO at Creative Solutions in Healthcare. Thank you.

Wiora: Thank you, Dana.

Gardner: And also thank you to our audience for joining this special podcast series coming to you directly from the 2014 VMworld Conference in San Francisco.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware sponsored BriefingsDirect IT discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a Briefings Direct podcast on how a major healthcare provider is improving internal operations and patient care with a hybrid cloud model. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Tuesday, October 07, 2014

MIT Media Lab Computing Director Details the Virtues of Cloud Computing for Agility and DR

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you directly from the VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT strategy discussions.

We're here in San Francisco the week of August 25 to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the MIT Media Lab in Cambridge, Massachusetts, and how they're exploring the use of cloud and hybrid cloud and enjoying such benefits as speed, agility, and disaster recovery (DR).

To learn more about how the MIT Media Lab is using cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. Welcome.

Michail Bletsas: Thank you. 

Gardner: Tell us about the MIT Media Lab. How big is the organization? What’s your charter?

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab and, unlike a normal IT department, we're kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We're much heavier than other departments in how many devices you're going to see. We run a pretty complex network and a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, you still build exotic stuff, but you're using commodity components. VMware, for us, is a major piece of this strategy, because it allows us more efficient utilization of our resources and lets us control, a little bit, the server proliferation that we experienced and that everybody has experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between people and virtual machines (VMs), physical computers, and devices, and there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the size of the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.

Gardner: So your requirements are to support those 350 people with dynamic workloads, many devices. What is it that you needed to do in your data center to accommodate that? How have you created a data center that’s responsive, but also protects your property, and that allows you to reduce your security risk?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it's on the order of 3,500 people, as opposed to 350. It's very dynamic, in that it changes month to month. The important attribute of an environment like this is that we can't have too many restrictions. We don't have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn't impose too many restrictions on what operating systems you use or how the VMs are configured. That's why we find that solutions like the general public cloud are, for us, only applicable to a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: Of course, virtualization sounds like a great fit when you have such dynamic, different, and varied workloads. But what about taking advantage of cloud, public cloud, and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover. What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use a hybrid cloud right now that's three-tiered. MIT has a very large campus, with extensive digital infrastructure running our operations across the board. We also have facilities either all the way across campus or across the river in a large co-location facility in downtown Boston, and we take advantage of that for first-level DR.

A solution like the vCloud Air allows us to look at a real disaster scenario, where something really catastrophic happens at the campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It's a second level for us. We have our own VMware infrastructure, and then we can migrate loads to our central organization -- a much larger organization that takes care of all the administrative computing and general infrastructure at MIT in its own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].
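
As a rough sketch of that three-tier arrangement -- illustrative only, with placeholder site names, not MIT's actual failover tooling -- recovery can be modeled as walking an ordered list of sites from nearest to farthest:

    # Illustrative only; site names are placeholders. The three recovery
    # tiers are tried in order, nearest first, escalating to vCloud Air
    # only when the closer tiers are unavailable.
    FAILOVER_TIERS = [
        "media-lab-local",      # tier 1: the lab's own VMware infrastructure
        "mit-central-campus",   # tier 2: central IT data centers across campus
        "vcloud-air-remote",    # tier 3: vCloud Air, a few states away
    ]

    def pick_recovery_site(is_available):
        """Return the first reachable tier; is_available is a probe callback."""
        for site in FAILOVER_TIERS:
            if is_available(site):
                return site
        raise RuntimeError("no recovery site reachable")

    # Simulate a campus-wide outage: only the remote tier answers.
    print(pick_recovery_site(lambda site: site == "vcloud-air-remote"))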

So it's a very seamless transition using the same tools. The important attribute here is that, with an operation this small -- 10 people dealing with such a complex set of resources -- you can't do it unless you have a consistent user interface that allows you to migrate those workloads using tools that you already know and are familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: I'd like to explore a little bit more this three-tiered cloud using common management and common APIs. It sounds like you're essentially taking metadata and configuration data -- the things that will be important to spin an operation back up should there be some unfortunate occurrence -- and putting that into the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data set. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that tend to drive a lot of our websites. A lot of our internal systems drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

Luckily for us, because we're not such a big operation -- we're relatively small -- we can include everything, including all the methods and programs that you need to access and manipulate that data, within a small set of VMs. You don't normally use them out of those VMs, but you can keep them packaged in a way that gives you easy access to them in a DR scenario.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the old days, we didn't have that multi-tiered cloud in place. All we had was backups in remote data centers. If something happened, you had to go in there, find some unused hardware similar to what you had, restore your backup, and so on.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, how did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do it? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this." It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It's a combination of having visibility and nice dashboards that reveal patterns of activity that you might not be aware of, even in an environment that's not as large as ours.
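
That decision process -- letting observed access patterns tell you which databases deserve wider replication -- reduces to a simple threshold test. The sketch below uses invented numbers for illustration; in Bletsas's account, the real signal came from vSphere's monitoring dashboards:

    # Invented numbers, for illustration only: flag databases whose
    # observed access rates justify replicating them across campus, or
    # pushing them further out, as the usage patterns above suggested.
    daily_queries = {"crm": 12000, "events": 300, "alumni": 4500}

    CAMPUS_THRESHOLD = 1000    # replicate across campus above this rate
    REMOTE_THRESHOLD = 10000   # also push off-site above this rate

    for db, rate in sorted(daily_queries.items(), key=lambda kv: -kv[1]):
        if rate >= REMOTE_THRESHOLD:
            plan = "replicate across campus and off-site"
        elif rate >= CAMPUS_THRESHOLD:
            plan = "replicate across campus"
        else:
            plan = "local backups only"
        print("%s: %d/day -> %s" % (db, rate, plan))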

Gardner: We're here at VMworld 2014, and there's been quite a bit of news, particularly in the vCloud Air arena. We've talked about and heard about betas for Object Storage and for virtual private cloud. Are these of interest to you, now that you've done a hybrid cloud using DR-as-a-service? Does anything else intrigue you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don't run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don't have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and over which we have full control, is a very important thing for us.

It allows for much faster DR. We can do DR using the same IPs across town right now, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and things like that. That is important.

The other trend that is also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to keep all of its disk storage on very high-end systems, having a little bit of expensive flash storage and then a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

Gardner: The benefit of having a highly virtualized environment -- including the data center and the end-user computing endpoints -- is that flexibility of taking workloads and apps from development to test to deployment. So there's a common infrastructure approach there, but also a common infrastructure across cloud, hybrid cloud, and DR.

So it’s sort of a snowball effect. The more virtualization you're adapting, the more dynamic and agile you can be across many more aspects of IT.

Bletsas: For us, experimentation is the most important thing. Spinning up a large number of VMs to do a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. Flexibility is what we get out of that, along with agility and speed of operations.

In the old days, you had to go and procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in our data centers. We still go there from time to time, but not as often as we used to, and that's very liberating. It's also very liberating for people like me, because it allows me to do my work anywhere.

Gardner: Very good. I'm afraid we’ll have to leave it there. We’ve been discussing the virtues of cloud computing and hybrid cloud computing with the MIT Media Lab. I’d like to thank our guest, Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab in Cambridge, Mass. Thanks so much.

Bletsas: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the 2014 VMworld Conference in San Francisco.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a Briefings Direct podcast on how MIT researchers are reaping the benefits of virtualization. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Wednesday, September 24, 2014

University of New Mexico Delivers Efficient IT Services by Centralizing on VMware Secure Cloud Automation

Transcript of a BriefingsDirect podcast on how a major university is moving toward achieving the best cloud-computing benefits while empowering users.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest discussion focuses on one of the toughest balancing acts in seeking the best of cloud computing benefits. This balance comes from obtaining the proper degree of centralization or "common good" for infrastructure efficiency, while preserving a sufficient culture of decentralization for agility, innovation, and departmental-level control.

The requirement for empowering centralization is nowhere more evident than in a large university setting, where support and consensus must be preserved among such constituencies as faculty, staff, students, and researchers across an expansive educational community.

But the typical IT model does not support localized agility when it takes weeks to spin up a server, if online services lack automation, or if manual processes hold back efficient ongoing IT operations. Too much IT infrastructure redundancy also means weak security, high costs, lack of agility, and slow upgrades.

We're joined today by an IT executive from the University of New Mexico (UNM) to learn more about moving to a streamlined and automated private cloud model to gain a common good benefit, while maintaining a vibrant and reassured culture of innovation.

We're also joined by a VMware executive to learn more about the latest ways to manage cloud architectures and processes to attain the best of cloud efficiencies, while empowering improved services delivery and process agility.

Please join me now in welcoming our guests, Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque. Welcome, Brian.

Brian Pietrewicz: Thanks, Dana. Glad to be here.

Gardner: And we're here with Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. Welcome, Kurt.

Kurt Milne: Thank you, Dana. Hello, Brian.

Preparing for change

Gardner: Brian, new technology often creeps ahead of where entrenched IT processes are, sometimes to the point where decentralization becomes a detriment. There are too many moving parts, not enough coordination, and redundancy. Yet when we try to put in new models -- like private cloud -- that means change.

I'd like to hear a bit more about your IT organization at the university and how you've been able to do change, but at the same time not alienate your users, who are, I imagine, used to having things their way. So tell us a bit about how you started to juggle this balancing act.

Pietrewicz: At the University of New Mexico, as you mentioned, it's a highly decentralized organization. In most cases, the departments are responsible for their own IT. In most cases, that means they don't have the resources to effectively run IT, in particular, things like data centers, servers, storage, disaster recovery (DR), and backups.

What we're doing to improve the process is providing infrastructure as a service (IaaS) to those groups so that they don’t have to worry about the heavy lifting of the infrastructure pieces that I mentioned before. They can stay focused on their core mission, whether that’s physics, or psychology, or who knows what.

So we offer IaaS. We're running a VMware stack, and we're also running vCloud Automation Center (vCAC). We've deployed the Self-Service Portal. We give departments, faculty members, or departmental IT folks the ability to go into the portal and deploy their own machines at will.

Then, they are administrators of that machine. They also have additional management features through the vCAC console so that they can effectively do whatever they need to do with the server, but not have to worry about any of the underlying infrastructure.
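
The self-service flow works roughly like the sketch below -- a generic illustration of the pattern, not the actual vCAC API; the endpoint URL and payload fields are hypothetical:

    # A generic illustration of the self-service pattern, not the real
    # vCAC API. The endpoint URL and payload fields are hypothetical.
    import json
    import urllib.request

    PORTAL_URL = "https://portal.example.edu/api/machines"  # hypothetical

    def request_machine(department, template, cpus, ram_gb):
        """Submit a machine request; the portal handles placement, and the
        requester becomes the machine's administrator."""
        payload = json.dumps({
            "department": department,
            "template": template,
            "cpus": cpus,
            "ram_gb": ram_gb,
        }).encode("utf-8")
        req = urllib.request.Request(
            PORTAL_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Example: a physics researcher deploys a machine at will.
    # request_machine("physics", "ubuntu-lts", cpus=2, ram_gb=8)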

Gardner: That sounds like the best of both worlds. In a sense, you're a service provider in the organization, getting the benefits of centralization and efficiency, but allowing them to still have a lot of hands-on control, which I assume that they want.

Pietrewicz: Correct. The other part is the agility, the ability for them to be able to react quickly, to consume infrastructure on demand as they need it, and have the benefit of all the things that virtualization brings with redundant infrastructure, lower cost of ownership, and those sorts of things.

Gardner: Kurt, is this something that's common in the market, or is it specific to education and university users, who like that balance between a service-provider approach for efficiency and agility, while still leaving in place a lot of the hands-on, do-it-your-way benefits?

New expectations

Milne: No, this is something we see with a lot of our customers increasingly in many different industries. It’s an interesting time to be in the IT space, because there's this new set of expectations being imposed on IT by the business to be strategic, to quickly adopt new technology, and boost innovation.

At the same time, IT still has the full set of responsibilities they've always had -- to stay secure, to avoid legacy debt, to drive operational excellence so they maintain uptime, security, and quality of service for transactional systems and business-critical systems.

It’s really an interesting paradox. How do you do these two things that are seemingly mutually exclusive -- go fast, but at the same time, stay in control?

Brian’s approach is what I call it "push button IT," where you give folks a button to push and they get what they need when they want it. But if IT controls the button and they control what happens when the user pushes the button, IT is able to maintain control. It’s really the best of both worlds.

Gardner: Brian, tell us a little bit about how long you have been there and what it was like before you began this journey?

Pietrewicz: I've been at UNM for about two-and-a-half years, and I can tell you the number one complaint. We suffer from a lot of the same problems that other large IT shops have, with funding and things like that. But the primary issue when I walked in the door was customers being upset because we didn't have clearly defined services, yet we had sold those services to the customers.

We had sold virtual machines (VMs) with database backups and all kinds of interesting things, with no service-level agreements (SLAs), no processes, nothing wrapped around them. The delivery of these services was completely inconsistent.

So I started down the new path. The first thing that we did was make the services more consistent. Just to give you an example, take deploying a virtual machine for a customer. The way it worked when I got here was that a ticket came into the service desk, it went to a single technician, and whichever technician got that ticket figured out their own way of getting that machine deployed.

As the next step in that process, instead of having it done a different way by whoever received the ticket, we identified all the steps associated with deployment. In looking at all those steps, we identified over 100 manual steps that went through six completely separate groups inside our organization.

Those included the operating system, storage, virtualization, security, and networking for firewall changes. Across all those groups deploying their individual piece of the puzzle, it was being done differently every time. Our deployment times were taking as long as three weeks. You can imagine how painful that is when it takes 20 minutes to spin up a VM -- but it was taking three weeks to deliver it to a customer.

We identified all the steps and defined the process very clearly: exactly what it takes to deploy a VM. The interesting by-product was that this gave us the content we needed to start developing a true service description and an SLA.

Ticketing system

It also made the work consistent. After the process development, we generated workflows within our ticketing system, so that a single incoming ticket auto-generated all the tickets necessary to deploy the VM, and it happened the same way every time.

That dropped the deployment time from three weeks to about three days, because requests still had to go through certain approval processes, with security and the like.
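To picture the fan-out Brian describes, here is a minimal sketch -- not UNM's actual tooling -- assuming a hypothetical REST-style ticketing API; the endpoint, team queue names, and response fields are all invented for illustration:

```python
import requests  # third-party HTTP client

TICKET_API = "https://ticketing.example.edu/api/tickets"  # hypothetical endpoint
TEAMS = ["operating-system", "storage", "virtualization",
         "security", "networking", "billing"]

def fan_out_vm_request(parent_ticket_id: str, vm_spec: dict) -> list[str]:
    """Turn one incoming VM request into one child ticket per team."""
    child_ids = []
    for team in TEAMS:
        resp = requests.post(TICKET_API, json={
            "parent": parent_ticket_id,
            "queue": team,
            "summary": f"Deploy VM {vm_spec['name']}: {team} tasks",
            "payload": vm_spec,
        }, timeout=30)
        resp.raise_for_status()
        child_ids.append(resp.json()["id"])  # assumed response field
    return child_ids
```

The value isn't the code; it's that every request now produces the same set of tickets in the same order, which is what made the three-day turnaround predictable.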

For the next step we said, "Okay, how can we do this better?" We looked at all the steps we had put in place and found they were repetitive, manual steps that could easily be automated. Enter VMware vCAC (vCloud Automation Center).

Once the steps were clearly defined, we automated every one we could. We couldn't automate all of them -- for example, sending information to our billing system to bill the customer back. From vCAC we shoot an email over to our ticketing system, which generates a ticket; the billing information is still entered manually, and we're working on an upgrade for that.
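The glue step he mentions -- a provisioning workflow firing an email that becomes a ticket -- is simple to sketch. A minimal, illustrative version using Python's standard smtplib; the host names, addresses, and fields are placeholders, not UNM's configuration:

```python
import smtplib
from email.message import EmailMessage

def notify_billing(vm_name: str, department: str, monthly_cost: float) -> None:
    """Email the ticketing system's intake address so it opens a billing ticket."""
    msg = EmailMessage()
    msg["From"] = "vcac-workflows@example.edu"   # placeholder sender
    msg["To"] = "ticket-intake@example.edu"      # placeholder intake address
    msg["Subject"] = f"[BILLING] New VM {vm_name} for {department}"
    msg.set_content(
        f"VM: {vm_name}\nDepartment: {department}\n"
        f"Monthly charge: ${monthly_cost:.2f}\n"
        "Action: enter into billing system (manual step, pending upgrade)."
    )
    with smtplib.SMTP("smtp.example.edu") as smtp:  # placeholder SMTP relay
        smtp.send_message(msg)
```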

When I first got here, neither the services nor the processes were defined. Since then, we have clearly defined the processes, narrowed them down to the specific tasks that have to be done, and begun automating. We're working through automating every step in that process.

Now we have a thing we call Lobo Cloud -- our mascot is the Lobo -- and customers can go online and deploy a machine within 20 minutes. Everything has transformed from an extremely inconsistent service that took as long as three weeks to deploy into the equivalent of walking into McDonald's and ordering a Big Mac: extremely consistent, and down from three weeks to 20 minutes.

Gardner: I assume, Brian, that you've adopted some industry-standard methods, perhaps a framework, that gave you guidance on this. How does your service-delivery approach adhere to an industry standard like ITIL?

Pietrewicz: That's what we use. We follow ITIL, at varying levels of maturity. ITIL is very challenging to implement, but it's extremely helpful, because it gives you an overarching framework to work within as you narrow down these processes, define services, and set SLAs.

The absolute hardest part of all of this is implementing the ITIL framework, identifying your processes, identifying what your service is, and identifying your SLA. Walking through all of that is exponentially harder than putting the technology in place.

Gardner: It seems to me that not only are you going to get faster server deployment, response times, and automation, but there are some other significant benefits to this approach. I'm thinking about security, disaster recovery (DR), the ability to budget better through an OPEX model, and ultimately reduced total costs.

Is it too soon, or have you seen some of these other benefits that I typically hear about when people move to a more automated cloud approach? How is that working for you?

Less expensive

Pietrewicz: We don't really have good statistics on it. For the folks who had machines sitting under their desks and in closets before, we don't have the data to know exactly what they were spending in money and time.

Anybody who works with virtualization quickly learns that once you hit a certain size, it becomes significantly less expensive. You become far more agile and you get a huge number of benefits. Some of them are things you mentioned -- deployment time, DR, the ability to automate, and economies of scale.

Instead of deploying one $10,000 server per application, you're now loading up 70 machines on a $15,000 server. All of those things come into play. But we really don’t have good statistics, because we didn’t really have any good processes before we started.
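[A quick sanity check on that math: $15,000 spread across 70 VMs comes to roughly $215 of server hardware per workload, versus $10,000 for a dedicated box -- about a 98 percent reduction, before counting power, cooling, and floor space.]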

What's interesting is that our next step is to automate the billing process. Once we do that, everything from our virtual infrastructure will feed into our billing system, under either a chargeback or a showback methodology.

So we'll have complete detailed costs of all of our infrastructure associated with every department and every application that is using our service. We'll be able to really show the total cost of ownership (TCO).
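Once those per-VM charges exist, a showback report is mostly a grouping exercise. A minimal sketch, assuming a hypothetical inventory export and made-up rates; UNM's actual rate card and data model aren't described here:

```python
from collections import defaultdict

# Illustrative monthly rates per VM size -- not UNM's actual pricing.
RATES = {"small": 45.00, "medium": 90.00, "large": 180.00}

def showback(vms: list[dict]) -> dict[str, float]:
    """Total monthly cost per department from a VM inventory export."""
    totals: dict[str, float] = defaultdict(float)
    for vm in vms:
        totals[vm["department"]] += RATES[vm["size"]]
    return dict(totals)

inventory = [
    {"name": "lobo-web01", "department": "Biology",   "size": "medium"},
    {"name": "lobo-db01",  "department": "Biology",   "size": "large"},
    {"name": "lobo-app01", "department": "Athletics", "size": "small"},
]
print(showback(inventory))  # {'Biology': 270.0, 'Athletics': 45.0}
```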

Milne: Brian, it sounds like you're on a path that a lot of our customers are on. What we see typically is that there is a change in consumption behavior when your customers know that they can get IaaS on demand. They stop hoarding resources. The same kind of tools and processes that can automate the delivery of those services can also automate tearing down those services when they're done.

Virtualization by itself increases capacity utilization quite a bit, but then going to this kind of services delivery, service consumption for infrastructure, actually further increases utilization and drives down over-provisioning.

Adding cost transparency to the service will further change your consumers' behavior. The ability to get resources when you need them, and to pay only for what you use, drives down the amount you have to keep in your data center.

Pietrewicz: Absolutely. It’s amazing what happens when you have to pay for something and it’s very visible.

Milne: If you study economics, you know that free IT really changes the supply-and-demand equation. People don't know what to do with free; they typically take too much.

Economic behavior

Pietrewicz: Right. This starts driving basic economic and social behavior into the IT equation. It's a difficult thing for organizations to get their heads around, and the university is only partway there. The way we look at it is as a "we'll build it, and they'll come" kind of thing.

Most folks have figured out that they can really save money. Instead of going out and buying a $10,000 server, they can buy a $1,000 VM from us that does the exact same thing, and if they don't want it anymore, they can turn it off and stop paying. All of those things come into play.

Gardner: This is interesting. This fit-for-purpose concept of using what you need when you need it and then not using it anymore relates to that discussion we had about centralized and decentralized. Now that you've been enjoying some benefits through the Lobo Cloud and this "common-good" approach to infrastructure, have you gotten feedback from the users? Are they happy with this or do they wish they had those servers under their desks again?

Pietrewicz: You have a little of both. You would assume universities would be on the cutting edge, and in a lot of cases we are, but we definitely have people who took the old-school approach and like to hug their servers -- they want to be able to touch them. So we have both.

We have people who are very appreciative that we put this service out there, because they know it's the only way to do this effectively. But we still have some old-school folks who prefer physical machines and are taking a while to adapt. That's part of the whole "build it and they will come" thing: they have to adjust their mentality to use it.

Gardner: Consensus is, of course, important.

Pietrewicz: Another piece is that the university has been experimenting with responsibility-centered management (RCM), a budgeting model that works toward the bottom line of each individual unit. That means people have to be transparent and make clear decisions about where they're spending their money, which is also starting to drive adoption.

Gardner: Just for our audience to understand the scale here, you have done an awful lot in two-and-a-half years or less. How many individuals are we talking about? What’s the size of your community, your user base? How many VMs do you have? What are some of the defining characteristics of your organization?

Pietrewicz: UNM is approximately 45,000 faculty, staff, and students. We have about 100 departments and affiliates, and today we're running about 660 VMs for the organization.

Gardner: And what percentage of the organization is virtualized?

Pietrewicz: For central IT, it’s between 98 percent and 99 percent. For the rest of the organization, it’s not clear. We don’t have an audit that shows every physical box that anybody might be using out there.

I'd say that the adoption of virtualization is very low in places where people haven't used IaaS, because the initial entry cost for virtualization can be higher. Many of the very small organizations just aren't big enough to warrant the infrastructure necessary to run virtualization the right way.

Ancillary benefits

Gardner: We talked about some of the ancillary benefits of your approach, but there are direct benefits when you go to a cloud model, which gives you more options. You can have your private cloud, you can look to public cloud and other hosting models, and then you can start to see a path toward a hybrid cloud environment, where you might actually move workloads around based on the right infrastructure approach for the right job at the right time. Any thoughts about where your cloud goals are vis-à-vis the hybrid potential?

Pietrewicz: We have a few things in play that we're actively working on. Today, we have people using various cloud providers, and the interesting part is that they're just paying for it with a credit card out of their department budget. The university doesn't have any clear way of knowing exactly what's out there, and we don't really have good security mechanisms for determining whether sensitive data is being stored out there inadvertently.

We're working with a lot of the cloud providers we already spend money with to develop consolidated accounts. One, we can save money through economies of scale. Two, we can get some visibility into what folks are actually using the cloud for. And three, IT would like to act as an adviser, pointing out which of the various providers is good at a particular function and which is good at security.

The first step is to corral the use of public cloud for UNM and create an escorting process to the cloud. The second step is a hybrid cloud that we'll set up from our private cloud here on site; we envision hybrid cloud services with those public cloud providers, so we can move workloads back and forth when necessary.

The other major benefit we look forward to is doing DR in the cloud: replicating data and spinning up systems as you need them, rather than having a couple of million dollars in equipment sitting there, waiting, while you hope never to use it -- equipment you have to refresh every four years to keep a viable DR plan.

Gardner: Is vCloud Automation Center something that will be useful in moving to this hybrid model? Will the one button to push, as it were, on the private cloud become one button to push in the hybrid model as well?

Pietrewicz: It will. Most of the cloud service providers I mentioned are compatible with vCloud Connector, so you can simply connect up that hybrid cloud service and, with a little work, massage your portal.

We can offer a menu of public cloud providers through our portal -- users just select whether they want vCHS, Amazon, or Terremark -- and then potentially move workloads back and forth. So vCAC and vCloud Connector are at the center of it.

The other interesting piece we're working on, and going to try to figure out as part of this, is NSX and/or VIX, to provide very clear security boundaries -- basically multi-tenancy -- and then potentially move those multi-tenant environments back and forth in the cloud, or extend them from public to private cloud as well.

Software-defined networking

Gardner: Brian, you mentioned multi-tenancy earlier, and of course there's a lot going on with the software-defined data center, networking, and storage. What's interesting to you about it, and why is something like software-defined networking (SDN) a priority for you?

Pietrewicz: SDN is the next step in truly automating your IaaS and your virtual environment. If you want to dynamically deploy systems into a sandbox that is multi-tenant by customer, you really need an SDN-type solution -- or at least it's extremely helpful.

One of the things we're looking at next is implementing something like NSX, so we can deploy the equivalent of a virtual wire -- a multi-tenant environment -- to individual customers, where they can see only their own stuff and not their neighbors', and vice versa.

The key is the ability to orchestrate that on demand, without the VLAN and firewall issues you have to deal with in a legacy environment.
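For readers wondering what deploying a virtual wire looks like programmatically: NSX for vSphere exposed logical-switch creation through a REST API, roughly as sketched below. The manager host, transport-zone scope ID, and credentials are placeholders, and the exact endpoint shape should be checked against VMware's API documentation for the version in use:

```python
import requests

NSX_MANAGER = "https://nsx-manager.example.edu"  # placeholder host
SCOPE_ID = "vdnscope-1"                          # transport zone ID (illustrative)

def create_tenant_wire(tenant: str) -> str:
    """Create a per-tenant logical switch (a 'virtual wire') via the NSX-v API."""
    body = (
        "<virtualWireCreateSpec>"
        f"<name>{tenant}-wire</name>"
        f"<tenantId>{tenant}</tenantId>"
        "</virtualWireCreateSpec>"
    )
    resp = requests.post(
        f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
        data=body,
        headers={"Content-Type": "application/xml"},
        auth=("admin", "secret"),  # use proper credential management in practice
        verify=False,              # lab-only shortcut; validate certs in production
    )
    resp.raise_for_status()
    return resp.text  # the API returns the new virtual wire's ID
```

Orchestrating a call like this per customer is what replaces the manual VLAN-and-firewall dance Brian describes.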

Gardner: It’s interesting how a lot of these major trends -- service delivery, cloud, private cloud, DR, and SDN -- are interrelated. It’s a complex bundle, but the payoffs, when you do this inclusively, are pretty impressive.

Pietrewicz: Whenever you abstract things to the software level, you gain the ability to automate, and with that you get tremendous flexibility. That flexibility can be an issue in and of itself -- just making decisions about how you want to do something -- but along with it comes the ability to automate just about anything you want or need to do.

The second piece to that is that we're really excited about figuring out, when we build the hybrid cloud model, how we might be able to extend those tenants into the cloud, either as active running workloads or in a DR model, so that the multi-tenancy is retained.

Milne: From VMware’s perspective, that kind of network virtualization capability is critical for our hybrid cloud service. It’s that capability that NSX provides that creates that seamless experience from your data center out to the hybrid cloud.

As you said, Brian, that kind of network configuration -- allocating and reallocating IP addresses when you're moving things from one data center to another -- is not something you want to do manually. So NSX is a key component of our hybrid cloud vision, and it's something that a lot of the other cloud providers just don't have.

Pietrewicz: I see it as the next frontier in IT. I think that when SDN starts taking off, it’s going to be a game changer in ways that we are not even recognizing yet, and that’s one example. Moving a workload from one network to another network is extremely powerful.

Cloud broker

Gardner: Kurt, this sounds as if not only is Brian transitioning into being a service provider to his constituencies, but now he's also becoming a cloud broker. Is this typical of what you're seeing in the market as well?

Milne: It is. To get their arms around shadow IT -- users going around IT -- some of our customers start by offering that provisioning option through the IT portal. It's like, "You're using Amazon? That's fine. We can help you do that." A button in the service catalog deploys the same kind of work they've been doing in a public cloud like Amazon, but it comes through IT, so IT is aware of it.

There's a saying I like: the "cloud boomerang." A lot of times, IT customers will put things out in the public cloud, but like a boomerang, the workloads seem to always come back -- the customer wants to integrate them with an existing system, or realizes they have to support them up in the cloud. A lot of those rogue deployments make their way back to the IT organization. So putting an Amazon service in the vCAC portal, without changing anything else, is a nice first step in corralling that.

Pietrewicz: That is exactly what we're seeing. At a university, because there isn't really governance, it's more "build a good service and hope they come." We take the approach of trying to enable it. We want to be very transparent and say they can use Amazon or vCHS, but there's a better way to do it: if you go through the portal, you may be able to move those workloads back and forth.

We are actually seeing exactly what you mentioned, Kurt. Folks are reaching the limitations of using some of the cloud providers, because they need to get access to data back here at UNM and are actually doing the boomerang approach. They started out there and now they're migrating their machines into our IaaS so that they can get access to the data that they need.

Gardner: Kurt, we heard some very interesting things at VMworld recently around the cloud-management platform. Why don’t you tell us a little bit about that and how that fits into what we've been discussing in terms of this ongoing maturity and evolution that a large organization like the University of New Mexico is well into?

Milne: We recently announced the vRealize Suite, which is a cloud-management platform. So we're moving our product-management strategy to a common platform.

Over the years, VMware has either built or acquired quite a few different management products. We've combined those products into a number of suites, like our automation, operations, and our business management suites. Now, we're taking that next step and combining a lot of those capabilities into a single platform.

There are a couple of guiding ideas there. What we see in organizations like Brian's is that the lines between automated provisioning of workloads and the ongoing operations, maintenance, and support of those workloads are really starting to blur.

So you have automation tasks that might happen when you're doing a support call. Maybe you want to provision some more resources, and there are operations tasks like checking system health that you might want to do as a step in an automation routine.

Shared services

Our product strategy change is to move toward a shared-services model, similar to a service-oriented architecture. The different services underlying our management products would be executable through a tool like vCAC, through a command-line interface, or through a REST API. There's a mix-and-match opportunity to execute those services in different ways.

To build that platform with the shared-services model on top, we need to start re-architecting some of our products on the back end, so that we have a common orchestration engine, a common DR and backup engine, and a common policy engine. You don't want one tool to undo the work another tool did yesterday; you can't have conflicting robots going out and doing automated tasks.

The general idea is to try to further consolidate these different management functions into a single platform. The overall goal is to try to help organizations maintain control, but then also increase flexibility and speed for their business users.
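The mix-and-match idea is easiest to see as composition over REST: an automation routine that provisions a workload and invokes an operations health check as one of its steps. The endpoints below are invented for illustration -- stand-ins for whatever the platform's published API turns out to be:

```python
import requests

BASE = "https://cloudmgmt.example.edu/api"  # hypothetical platform endpoint

def provision_with_health_gate(blueprint: str) -> dict:
    """Provision via the automation service, then gate on an operations check."""
    dep = requests.post(f"{BASE}/automation/deployments",
                        json={"blueprint": blueprint}, timeout=60)
    dep.raise_for_status()
    dep_id = dep.json()["id"]  # assumed response field

    # An operations capability invoked from inside an automation routine --
    # the line-blurring between provisioning and operations described above.
    health = requests.get(f"{BASE}/operations/health/{dep_id}", timeout=60)
    health.raise_for_status()
    if health.json()["status"] != "green":
        raise RuntimeError(f"Deployment {dep_id} failed its health check")
    return dep.json()
```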

Gardner: Brian, is that something that you think is going to be on your radar? Is management so distributed now that you're looking for a more consolidated approach that’s inclusive?

Pietrewicz: That would be wonderful. We're doing things many different ways right now. Take orchestration as an example: we're using Orchestrator, PowerShell, and Perl, and starting to experiment with Puppet.

It would be really good to have one standardized way to approach orchestration, for example, and have it tie into all the other pieces of back-end management, rather than handling it several different ways where, as Kurt mentioned, one part starts to step on another. Having that consolidated and consistent would be a huge value.

Milne: The other part of the strategy is to make that work across environments, so the same tools and services would be available whether you're provisioning to Amazon, to your private cloud, or to a hybrid cloud service -- and even across different hypervisors.

We're fully aware of the heterogeneous nature of the modern data center, so we're shifting to create that kind of powerful common management stack with a unified management experience across the whole environment. It's kind of a nirvana: when we talk to people, they say that's exactly what they want. So our vision is to march toward delivering on that.

Gardner: Kurt, I am trying to recall from VMworld whether this was offered on-premises, as a service from a cloud, or some combination?

Service offerings

Milne: That's the other interesting part of this. We're starting to go down the path of offering a number of our management products as a service. For example, at VMworld we announced the availability of a beta of our vCAC product as software as a service (SaaS). Without installing any software, you can get a service portal with that workflow and policy engine, and deploy infrastructure services across different environments.

We'll be rolling out betas for our other products in subsequent quarters over the next year or so. Then, potentially, we could have the SaaS services interact and combine with the services available through the products installed on-premises. Our goal is to get these out there and then understand what the best use cases are, but that kind of mix and match is part of the vision.

Gardner: It's interesting -- we might have a reverse boomerang when it comes to the management of all of this. Does that sound appealing, Brian? Is comprehensive management something you would look to as a cloud service?

Pietrewicz: Absolutely, but it largely depends on return on investment (ROI). When you get to a certain size as an IT shop, it's sometimes cheaper to do things in-house than to outsource, and sometimes not. You have to do the ROI analysis on whether it makes more sense to bring it in or to use SaaS.

As an example, we completely outsourced our email, because it's a lot of work: it's very simple and easy to consume as a SaaS solution, but a lot more work to run in-house. So this is definitely something we would look into.

Milne: In a mid-sized organization that might have 300 different applications that the IT organization supports, maybe 50 of those are IT tools. Already we've seen progress with companies like ServiceNow that have a SaaS-based service desk. It makes sense to start to turn more of those management products into a SaaS delivery model.

Gardner: I'm afraid we're getting near our time limit, but I wanted to see, Brian, whether you had thoughts for others who are starting to move in your direction -- perhaps toward their own Lobo Cloud, their own portal for rationalizing these services and measuring them better. With 20/20 hindsight, what would you recommend as they go about this? Any lessons learned you could share?

Process orientation

Pietrewicz: The biggest lesson learned, without a doubt, is the focus on process orientation -- the ITIL model. The technology is really not that hard. The most difficult part is determining what your service is, what you're trying to deliver, and how you build that into a consistently delivered service, complete with SLAs and service descriptions that meet the customers' needs.

The technical folks can definitely sling the technology. That doesn’t seem to be that big of a deal. The partners and providers do a very good job of putting together products that make it happen, but the hard part is defining the processes and defining the services and making sure that they are meeting the customer needs.

Gardner: Kurt, any thoughts in reaction to what Brian said in terms of getting started on the right path around cloud rationalization of your IT organization?

Milne: One of the things that I've seen is a lot of organizations go through this process that Brian has described, trying to clearly define their services and figure out which parts of those services they're going to automate.

A lot of organizations start that service-definition effort from an inside-out perspective: get a bunch of IT guys together and try to define what they do on a daily basis as a service. That's hard.

The easier approach is just to go talk to your customers and users and ask, "If I were going to give you a button you could click to get what you need, what would you put behind the button?" Then you define your services from an outside-in perspective. That seems to be where companies end up anyway, and you shortcut a lot of teeth-gnashing and internal meetings by doing it that way.

Gardner: It always comes back to the requirements list, doesn’t it?

Milne: That’s right.

Gardner: I'm afraid we'll have to leave it there. You've been listening to a sponsored BriefingsDirect discussion on one of the toughest balancing acts: seeking the best of cloud-computing benefits while also empowering your users.

And we've seen at a large university how this balance comes from attaining a proper degree of centralization -- a common good for infrastructure services through a portal -- while preserving a sufficient culture of decentralization and agility. We've also heard how new ways are coming to better manage cloud architectures across a variety of models, and perhaps ultimately as a service in and of itself.

So I'd like to thank our guests, Brian Pietrewicz, Director of Computing Platforms at the University of New Mexico in Albuquerque. Thank you so much, Brian.

Pietrewicz: Thanks, Dana. Thanks, Kurt.

Gardner: We've also been here with Kurt Milne, Director of Product Marketing in the Management Business Unit at VMware. Thank you so much, Kurt.

Milne: Thank you.

Gardner: And thank you also to our audience for joining us for this BriefingsDirect discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a major university is moving toward achieving the best cloud-computing benefits while empowering users. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
