Tuesday, February 24, 2015

Columbia Sportswear Sets Torrid Pace for Reaping Global Business Benefits From Software-Defined Data Center

Transcript of a BriefingsDirect discussion on how a major sportswear company has leveraged virtualization, SDDC and hybrid cloud to reap substantial business benefits.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re here in San Francisco to explore the latest developments in hybrid cloud computing, end-user computing, and software-defined data center (SDDC).

Our next innovator case study interview focuses on Columbia Sportswear in Portland, Oregon. We're joined by a group from Columbia Sportswear, and we'll learn more about how they've made the journey to SDDC. We'll see how they’ve made great strides in improving their business results through IT, and where they expect to go next with their software-defined efforts.

To learn more, please join me in welcoming our guests, Suzan Pickett, Manager of Global Infrastructure Services at Columbia Sportswear; Tim Melvin, Director of Global Technology Infrastructure at Columbia, and Carlos Tronco, Lead Systems Engineer at Columbia Sportswear. Welcome.

Gardner: People are familiar with your brand, but they might not be familiar with your global breadth. Tell us a little bit about the company, so we appreciate the task ahead of you as IT practitioners.

Pickett: Columbia Sportswear is in its 75th year. We're a leader in global manufacturing of apparel, outdoor accessories, and equipment. We're distributed worldwide and we have infrastructure in 46 locations around the world that we manage today. We're very happy to say that we're 100 percent virtualized on VMware products.

Gardner: And those 46 locations, those aren't your retail outlets. That's just the infrastructure that supports your retail. Is that correct?

Pickett: Exactly. Our retail footprint in North America is around 110 retail stores today. We're looking to expand that over the next few years with our joint venture in China with Swire, the distributor of Columbia Sportswear products there.

Gardner: You're clearly a fast-growing organization, and retail itself is a fast-changing industry. There’s lots going on, lots of data to crunch -- gaining more inference about buyer preferences --  and bringing that back into a feedback loop. It’s a very exciting time.

Tell me about the business requirements that you've had that have led you to reinvest and re-energize IT. What are the business issues that are behind that?

Global transformation

Pickett: Columbia Sportswear has been going through a global business transformation. We've been refreshing our enterprise resource planning (ERP). We had a green-field implementation of SAP. We just went live with North America in April of this year, and it was a very successful go-live. We're 100 percent virtualized on VMware products and we're looking to expand that into Asia and Europe as well.

So, with our global business transformation also comes our consumer experience, on the retail side as well as wholesale. IT is looking to deliver service to the business, so they can become more agile and focused on engineering better products and better design and get that out to the consumer.

Gardner: To be clear, your retail efforts are not just brick and mortar. You're also doing it online and perhaps even now extending into the mobile tier. Any business requirements there that have changed your challenges?

Pickett: Absolutely. We're really pleased to announce, as of summer 2014, that Columbia Sportswear is an AirWatch customer as well. So we get to expand our end-user computing and our VMware Horizon footprint as well as some of our SDDC strategies.

We're looking at expanding not only our e-commerce and brick-and-mortar, but being able to deliver more mobile platform-agnostic solutions for Columbia Sportswear, and extend that out to not only Columbia employees, but our consumer experience.

Gardner: Let’s hear from Tim about your data center requirements. How does what Suzan told us about your business challenges translate into IT challenges?

Melvin: With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in more than 100 countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether we're on the network or off-premises.

The SDDC has been a game-changer for us. It’s allowed us to take those technologies, host them where we need them, and with whatever cost configuration makes sense, whether it’s in the cloud or on-premises, and deliver the solutions that our business needs.

Gardner: Let's do a quick fact-check in terms of where you are in this journey to SDDC. It includes a lot. There are management aspects, network aspects, software-defined storage, and then of course mobile. Does anybody want to give me the report card on where you are in terms of this journey?

100 percent virtualized

Pickett: We're 100 percent virtualized with our compute workloads today. We also have our storage well-defined with virtualized storage. We're working on an early adoption proof of concept (POC) with VMware's NSX for software-defined networking.

It's really our next step in defining our SDDC: being able to leverage all of our virtual workloads, extend them into the vCloud Air hybrid cloud, and burst our workloads to expand our data centers and our toolsets. So we're looking forward to the next step of our journey, which is software-defined networking via NSX.
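
To make that NSX step concrete, here is a minimal sketch of creating a logical switch (a "virtual wire") through the NSX for vSphere REST API -- the kind of call a proof of concept like this might script. The manager address, credentials, transport-zone ID, and names below are placeholders, and the endpoint follows the NSX-v 2.0 API; treat this as an illustration, not Columbia's actual tooling.

```python
import requests

NSX_MGR = "https://nsxmgr.example.com"  # hypothetical NSX Manager address
AUTH = ("admin", "secret")              # placeholder credentials

# NSX-v logical switches ("virtual wires") are created inside a transport
# zone (a "vdn scope"); vdnscope-1 is assumed here for illustration.
payload = """
<virtualWireCreateSpec>
    <name>poc-web-tier</name>
    <description>POC segment for the e-commerce web tier</description>
    <tenantId>poc</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes/vdnscope-1/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab/POC only; verify certificates in production
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # API returns the new virtualwire ID
```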

Gardner: Taking that network plunge, what about the public-cloud options for your hybrid cloud? Do you use multiple public clouds, and what's behind your choice on which public clouds to use?

Melvin: When you look at infrastructure and the choice between on-premises solutions, hybrid clouds, and public and private clouds, I don't think it's a choice of which one answer you pick. There isn't one right answer. What’s important for infrastructure professionals is to understand the whole portfolio and understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case.

When we look at our workloads, we try to match the correct tool to the correct job. For instance, our completely virtualized SAP environment runs on internal, on-premises equipment. When we start to talk about development in a sandbox, those cases are probably best served in a public cloud, as long as we can secure and automate, just like we can on-site.
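
Melvin's placement reasoning can be boiled down to a few lines. The sketch below is purely illustrative -- the tier labels, flags, and venues are hypothetical, not Columbia's actual policy engine:

```python
# Workload placement sketch: steady, sensitive workloads stay on-premises;
# ephemeral dev/sandbox work goes to public cloud, provided it can be
# secured and automated just like on-site (the caveat Melvin notes above).

def place_workload(tier: str, persistent: bool, automatable: bool) -> str:
    """Return a target venue for a workload (illustrative only)."""
    if tier == "production" or persistent:
        return "on-premises"      # e.g., the fully virtualized SAP landscape
    if automatable:
        return "public-cloud"     # dev sandboxes we can secure and automate
    return "on-premises"          # default to the venue we control

print(place_workload("dev-sandbox", persistent=False, automatable=True))
# -> public-cloud
```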

Gardner: As you're progressing through SDDC and you're exploring these different options and what works best both technically and economically in a hybrid cloud environment, what are you doing in terms of your data lifecycle? Is there a disaster recovery (DR) element to this? Are you doing warehousing in a different way and distributing that, or are you centralizing it? I know that analysis of data is super important for retail organizations. Any thoughts about that data component in this overall architecture?

Pickett: Data is really becoming a primary concern for Columbia Sportswear, especially as we get into more analytical situations. Today, we have our two primary data centers in North America, which we protect with VMware’s vCenter Site Recovery Manager (SRM), a very robust DR solution.

We're very excited to work with an enterprise-class cloud like vCloud Air that has not only the services that we need to host our systems, but also DR as a service, which we're very interested in pursuing, especially around our remote branch office scenarios. In some of those remote countries, we don't have that protection today, and it will give a little more business continuity or disaster avoidance, as needed.

As we look at data in our primary data centers -- big data, if you will, and enterprise data warehouse strategies -- we've started looking at how we replicate the data and where that data lives. We've started getting into active-active data center scenarios.

We're really excited about some of the announcements we heard recently at VMworld around virtual volumes (VVOLs) and where that’s going to take us in the next couple of years, specifically around long-distance vMotion. Hopefully, we'll follow the sun, and maybe five years from now we'll be able to move our workloads from North America to Asia and have them follow where the people are using them.

Geographic element

Gardner: That’s really interesting about that geographic element if you're a global company. I haven't heard that from too many other organizations. That’s an interesting concept about moving data and workloads around the world throughout the day.

We've seen some recent VMware news around different types of cloud data offerings, Cloud Object Store for example, and moving to a virtual private cloud on demand. Where do you see the next challenges for your organization, and how do you feel VMware is setting the goal posts for you?

Tronco: The vCloud Air offerings that we've heard so much about are an exciting innovation.

Public clouds have been available for a long time. There are a lot of places where they make sense, but vCloud Air, being an enterprise-class offering, gives us the management capability and allows us to use the same tools that we would use on-site.

It gives us the control that we need in order to provide a consistent experience to our end-users. I think there is a lot of power there, a lot of capability, and I'm really excited to see where that goes.

Gardner: How about some of the automation issues with the vRealize Suite, such as vRealize Automation? Where do you see the component of managing all this? It becomes more complex when you go hybrid. It becomes, in one sense, more standardized and automated when you go software-defined, but you also have to have your hands on the dials and be able to move things.

Tronco: One of the things that we really like about vCloud Air is the fact that we'll be able to use the same tools on-premises and off-premises, and won't have to switch between tools or dashboards. We can manage that infrastructure whether it's on-premises or in the public cloud, and we'll be able to extend the efficiencies we have on-premises into vCloud Air as well.

We also can take advantage of some of those new services, like ObjectStore, that might be coming down the road, or even continuous integration (CI) as a service for some of our development teams as we start to get more into a DevOps world.

Customer reactions

Gardner: Let’s tie this back to the business. It's one thing to have a smooth-running, agile IT infrastructure machine. It's great to have an architecture that you feel is ready to take on your tasks, but how do you translate that back to the business? What does it get for you in business terms, and how are you seeing reactions from your business customers?

Pickett: We're really excited to be partnering with the business today. As IT comes out from underground a little bit and starts working more with the business and understanding their requirements -- especially with tools like VMware vRealize Automation, part of the vCloud Suite -- we're now partnering with our development teams to become more agile and help them deliver faster services to the business.

We're working on one of our e-commerce order-confirmation toolsets with vRealize Automation, part of the vCloud Suite. The development team can now package and replicate the work they're doing, rather than reinventing the wheel every time we build out an environment or they need a test or development script.

By partnering with them and enabling them to be more agile, IT wins. We become more services-oriented. Our development teams are winning, because they're delivering faster to the business and the business wins, because now they're able to focus more on the core strategies for Columbia Sportswear.

Gardner: Do you have any examples that you can point to where there's been a time-to-market benefit, a time-to-value faster upgrade of an application, or even a data service that illustrates what you've been able to deliver as a result of your modernization?

Pickett: Just going back to the toolset that I just mentioned. That was an upgrade process, and we took that opportunity to sit down with our development team and start socializing some of the ideas around VMware vRealize Automation and vCloud Air and being able to extend some of our services to them.

At the same time, our e-commerce teams are going through an upgrade process. So rather than taking weeks or months to deliver this technology to them, we were able to sit down, start working through the process, automate some of those services that they're doing, and start delivering. So, we started with development, worked through the process, and now we have quality assurance and staging and we're delivering product. All this is happening within a week.

So we're really delivering and we're being more agile and more flexible. That’s a very good use case for us internally from an IT standpoint. It's a big win for us, and now we're going to take it the next time we go through an upgrade process.

We've had this big win and now we're going to be looking at other technologies -- Java, .NET, or other solutions -- so that we can deliver and continue the success story that we're having with the business. This is the start of something pretty amazing, bringing development and infrastructure together and mobilizing what Columbia Sportswear is doing internally.
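
One way to picture that "package and replicate" approach is a scripted clone of a golden environment into a QA copy. The sketch below uses the open-source pyvmomi library against a generic vSphere setup; the vCenter address, credentials, and object names are placeholders, and it stands in for, rather than reproduces, Columbia's vRealize Automation blueprints.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="automation",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the golden VM by name with a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
golden = next(vm for vm in view.view if vm.name == "ecom-confirm-gold")
view.Destroy()

# Clone it into a QA copy in the same folder and resource pool.
relocate = vim.vm.RelocateSpec(pool=golden.resourcePool)
spec = vim.vm.CloneSpec(location=relocate, powerOn=True, template=False)
golden.CloneVM_Task(folder=golden.parent, name="ecom-confirm-qa01", spec=spec)

Disconnect(si)
```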

Gardner: Of course, we call it SDDC, but it leads to a much more comprehensive integrated IT function, as you say, extending from development, test, build, operations, cloud, and then sourcing things as required for a data warehouse and application sets. So finally, in IT, after 30 or 40 years, we really have a unified vision, if you will.

Any thoughts, Tim, on where that unification will lead to even more benefits? Are there ancillary benefits from a virtuous adoption cycle that come to mind from that more holistic whole-greater-than-the-sum-of-the-parts IT approach?

Flexibility and power

Melvin: The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently.

We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration.

The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business.

Gardner: Of course, we think about not just technology, but people and process. How has all of this impacted your internal IT organization? Are you, in effect, moving people around, changing organizational charts, perhaps getting people doing things that they enjoy more than those manual tasks? Carlos, any thought about the internal impact of this on your human resources issues?

Tronco: Organizationally, we haven’t changed much, but using something like vRealize Automation allows us to let development teams do some of those tasks that they used to require us to do.

Now, we can do it in an automated fashion. We get consistency. We get the security that we need. We get the audit trail. But we don’t have to have somebody around on a Saturday for two minutes of work spread across eight hours. It also lets those application teams be more agile and do things when they're ready to do them.

Having that time free lets us do a better job with engineering, look down the road better with a little more clarity, maybe try some other things, and have more time to look at different options for the next thing down the road.

Melvin: Another point there is that, in a fully software-defined infrastructure, while it may not directly translate into organizational changes, it allows you to break down silos. Today, we have operations, system storage, and database teams working together on a common platform that they're all familiar with and they all understand.

We can all leverage the tools and configurations. That's really powerful. When you don't have the network guys sitting off doing things differently from what the server guys are doing, you can focus more on comprehensive solutions, and that extends right into the development space, as Carlos mentioned. The next step is to work just as closely with our developers as we do with our peers in infrastructure.

Gardner: It sounds as if you're now also in a position to be more fleet. We all have higher expectations as consumers. When I go to a website or use an application, I expect that I'll see the product that I want, that I can order it, that it gets paid for, and then track it. There is a higher expectation from consumers now.

Is that part of your business payback that you tie into IT? Is there some way that we can define the relationship between that user experience for speed and what you're able to do from a software-defined perspective?

Preventing 'black ops'

Pickett: As an internal service provider for Columbia Sportswear, we can do it better, faster, and cheaper on-premises and with our toolsets from our partners at VMware. This helps prevent black-ops situations, for example, where someone goes out to another cloud provider outside the parameters and guidelines of IT.

Today, we're partnering with the business. We're delivering that service. We're doing it at the speed of thought. We're not in a position where we're saying "no," "not yet," or "maybe in a couple of weeks," but "Yes, we can do that for you." So it's a very exciting position to be in that if someone comes to us or if we're reaching out, having conversations about tools, features, or functionality, we're getting a lot of momentum around utilizing those toolsets and then being able to expand our services to the business.

Tronco: Using those tools also allows us to turn around things faster within our development teams, to iterate faster, or to try and experiment on things without a lot of work on our part. They can try some of it, and if it doesn’t work, they can just tear it down.

Gardner: So you've gone through this journey and you're going to be plunging in deeper with software-defined networking. You have some early-adopter chops here. You guys have been bold and brave.

What advice might you offer to some other organizations that are looking at their data-center architecture and strategy, thinking about the benefits of hybrid cloud, software-defined, and maybe trying to figure out in which order to go about it?

Pickett: I'd recommend that, if you haven't virtualized your workloads, you get them virtualized. We're in a no-limits situation: there are no longer restrictions or boundaries around virtualizing your mission-critical or tier-one workloads. Get it done, so you can start leveraging the portability and flexibility of that.

Start looking at the next steps, which will be automation, orchestration, provisioning, service catalogs, and extending that into a hybrid-cloud situation, so that you can focus more on what your core offerings and core strategies are going to be. Not necessarily offload, but take advantage of some of those capabilities you can get in VMware vCloud Air, for example, so that you can focus on what's really core to your business.

Gardner: Tim, any words of advice from your perspective?

Melvin: When it comes to solutions in IT, the important thing is to find the value and tie it back to the business. So look for those problems that your business has today, whether it's reducing capital expense through heavy virtualization, improving security within the data center through NSX and micro-segmentation, or just providing more flexible infrastructure, through the cloud, for your temporary environments like sandbox and software development.

Find those opportunities and tie them back to a value that the business understands. It’s important to do something with software-defined data centers. It's not a trend, and it's not really even a question anymore; it's where we're going. So get moving down that path in whatever way you need to in order to get started. And find those partners, like VMware, that will support you, build those relationships, and just get moving.

20/20 hindsight

Gardner: Carlos, advice, thoughts about 20/20 hindsight?

Tronco: As Suzan said, it's focusing on virtualizing the workloads and then being able to leverage some of those other tools, like vRealize Automation. Then you're able to free staff up to pursue activities that add more value to the environment and the business, because you're not doing repeatable things manually. You'll get more consistency, and people have more time; they're not bogged down doing all those day-two, day-three operations that wear and grate on you.

Gardner: I suppose there's nothing like being responsive to your business constituents. That, then, enables them to ask for more help, which then adds to your value, and we get into that virtuous cycle, rather than a dead end where people don't even bother to ask for help or for new and innovative business ideas.

Congratulations. That sounds like a very impactful way to go about IT. We've been learning about how Columbia Sportswear in Portland, Oregon has been adjusting to the software-defined data center strategy and we've heard how that's brought them some business benefits in their fast-paced retail organization worldwide.

So a big thank you to our guests, Suzan Pickett, Manager of Global Infrastructure Services at Columbia Sportswear; Tim Melvin, Director of Global Technology Infrastructure, and Carlos Tronco, Lead Systems Engineer at Columbia Sportswear. Thanks so much.

And a big thank you to our audience for joining us for this special discussion series, coming to you directly from the recent 2014 VMworld Conference in San Francisco.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect discussion on how a major sportswear company has leveraged virtualization, SDDC and hybrid cloud to reap substantial business benefits. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tuesday, February 17, 2015

California Natural Resources Agency Gains Agility Through Software-Defined Data Center Strategy

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to the special BriefingsDirect podcast series coming to you directly from the recent VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT Strategy Discussions.

We’re in San Francisco to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the California Natural Resources Agency (CNRA) in Sacramento. They have a large purview, overseeing some 25 different agencies. They've set up a SDDC and are deep into the process of maturing its value and utility.

To learn more about how the CNRA gains agility from an SDDC strategy, we welcome Tony Morshed, Chief Technology Officer for the California Natural Resources Data Center in the California Department of Water Resources. Welcome, Tony.

Tony Morshed: Thanks.

Gardner: We’re also here with Michael Hom, Data Center Chief in the IT Infrastructure Services Branch for the California Department of Water Resources. Welcome, Michael.

Michael Hom: Good morning.

Gardner: First, gentlemen, help us understand a little bit about the size of your organization. This is a large state government department, but you're really a department of departments. Help me understand the breadth of your agency.

Morshed: Our department, Water Resources, consists of 3,500 people and is part of the agency, which comprises many departments. The bigger ones are Parks and Rec, Cal Fire, and Fish and Wildlife. There are about 28 agencies and conservancies, with 25,000 people onboard.

Gardner: So in order to support all these people and all these different agencies, a common infrastructure is important, but also it sounds like you need to have differentiation and customization for their specific needs. How do you accomplish both the goal of a common infrastructure for efficiency, but also still be able to meet all your requirements for all those different people?

Morshed: When we started our consolidation effort, we decided to transform ourselves more to an ISP-style setup. We know that most departments have their own IT shops, and you know there is still that trust thing. So we just built the common infrastructure. We let them share the infrastructure, but they have their own security posture. We segregate all their traffic, so each department can still feel like they're autonomous, but yet we all share the infrastructure, in which case we all share the savings.
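
As an illustration of that traffic segregation, the sketch below uses the open-source pyvmomi library to stamp out one VLAN-backed distributed port group per department on a single shared distributed switch. The department names, VLAN IDs, and credentials are invented for the example; the interview does not describe CNRA's actual segmentation design.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="netops",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the shared distributed virtual switch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(iter(view.view))
view.Destroy()

# One VLAN per tenant department keeps traffic segregated on shared gear.
tenants = {"water-resources": 110, "parks-and-rec": 120, "cal-fire": 130}
specs = []
for name, vlan in tenants.items():
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=vlan, inherited=False))
    specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=f"tenant-{name}", type="earlyBinding", numPorts=128,
        defaultPortConfig=port_cfg))

dvs.AddDVPortgroup_Task(spec=specs)  # one segregated port group per department
Disconnect(si)
```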

Mandate to consolidate

Hom: With that, we had a mandate to also consolidate. First, the State of California is really about cost savings. Each of the 25-plus organizations mostly had their own IT shops. By combining the infrastructure as a service (IaaS) in our multitenancy data center, they're able to reap the benefits of cost savings for infrastructure, but also concentrate each department's specific needs and applications.

Gardner: It sounds as if you have a private-cloud approach with multi-tenancy, and you're trying to get that elasticity and efficiency going. It also sounds like you’re well on the way to an SDDC, but that means different things to different people. I'd like to get your take on what SDDC means.

Also, are you exploring software-defined storage, software-defined networking, or more of the workload support for the servers? How does it shape up in your mind and in your particular implementation?

Morshed: When the software-defined stuff came out, for me, one of the big things was disaster recovery (DR). If I could stretch my data center into another facility, DR becomes a non-issue, because those workloads can shift between sites without any trouble with automation.

That was the next piece for us -- automation. We realize that we’re part-way there, but to get all the way there, we need to do fuller automation. This means that we need to quit tinkering with the network and storage every time we want to do something new.

Those were big driving factors for going to SDDC. To us, it means that we’re obfuscating the hardware. The hardware’s there. It’s just running. We’re working and tweaking everything at the software layer -- so we could be a lot more agile.
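
At the API level, "shifting workloads between sites" reduces to a relocate operation. Here is a minimal pyvmomi sketch that live-migrates a VM to a host and datastore in another facility; all object names are placeholders, and it assumes a single vCenter with visibility into both sites.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="dr-ops",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with that name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "critical-db-01")
target_host = find(vim.HostSystem, "esx-site2-01.example.com")
target_ds = find(vim.Datastore, "site2-datastore-01")

# Setting both host and datastore performs a combined vMotion/Storage
# vMotion toward the secondary facility.
spec = vim.vm.RelocateSpec(host=target_host, datastore=target_ds,
                           pool=target_host.parent.resourcePool)
vm.RelocateVM_Task(spec=spec)
Disconnect(si)
```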

Gardner: Michael, SDDC, how do you define that and to what degree are you into that journey?

Separating the physical

Hom: For the SDDC we really wanted to provide a logical data center to each of our organizations. We wanted to separate the physical, which allows our folks to support more of a logical infrastructure, where they still have autonomy, but the physical layer is basically one and the same.

Today, from a functional point of view, they get what they’ve had before, but without the overhead of physical support. We've used the VMware vSphere and vCloud Suite to provide that software-defined computing. Right now, we're embarking on software-defined networking, using VMware NSX and third-party vendors to support that.

We’re looking to use automation soon to help us decrease overhead, and pass those savings on to each of the organizations.

Gardner: Given that you have a common set of infrastructure and, I imagine, lot of common data, are you well into the VMware Virtual SAN adoption for software-defined storage as well? How does that shape up?

Morshed: For Virtual SAN, we're looking at use cases right now, and we see some. We're not quite there yet, so we'll focus first on software-defined networking and automation. Virtual SAN will probably come later; we're still evaluating and determining what our needs are in that area.

Gardner: It’s complicated, right? There are lot of interdependencies. Certain things happen that ripple across others, and so it’s a crawl-walk-run. Yet the benefits come from a whole greater than the sum of the parts, if I could use a couple of clichés, but it is an interesting time in the business.

What are some of the challenges you have in terms of getting closer to a full SDDC? Are these technology, culture, process, or all of the above?

Morshed: At first it was technology, but the culture and the organizational mindset are the bigger challenge. You can find solutions and work through technology. We're IT people, and we’re used to the technology problems. For me, the greater organizational problems -- and the structure of your processes -- become the harder things, because we’re not as used to dealing with those things.

Gardner: Now, Michael, part of the adoption of a complex, long-term journey in IT is to show results early and get buy-in. Are there instances where you look at that? Maybe it's the DR, where you can have a sense of better dependability on your resources? Any way to describe what you’ve done early on that has led to a greater emphasis on adopting more aggressively?

Better service levels

Hom: Definitely. One of the key things with early wins is providing a better service level for provisioning, and that’s something that everybody has been struggling with. With the cloud infrastructure, we've been able to provision within days, if not a day. That typically beats most of the service levels that each of the organizations has had. So that was an early win.

It’s things like that, where we decrease overhead and make IT more accessible for the business, that make it a win and start the ball rolling on other features, such as DR and greater capacity -- things that would be tough for each individual organization to do on their own.

Gardner: Of course, when you do well in a state agency, it’s apparent, right? You're providing services to the public, and you're in the most populous US state. You have a large bureaucracy, probably the size of what many countries have. Is there something different about the public sector in terms of accountability or response? Do you have a different set of requirements as a public organization, compared with private enterprises?

Morshed: We do have a difference. Our procurement is much harder. Getting people is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them.

Gardner: So it’s doing more with less over and over again.

Morshed: Constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without this. We would just be a sinking ship.

Gardner: So the whole philosophy, Michael, of SDDC and virtualization -- doing more with less, automating, boiling out the manual processes, and going to a more real-time, responsive, technology-driven infrastructure -- makes total sense for your organization?

Hom: Definitely. To go along with what Tony said, we really don’t have much slack when we need to respond to new projects. When there's an uptick in activity, there is no way to get more resources. So we need to build that into our infrastructure, to allow for that dynamic bandwidth to happen at the personnel level.

Gardner: We’re here of course at VMworld 2014. There’s a lot of news going on. Anything in particular piquing your interest, perhaps the OpenStack support or the EVO hyper-converged infrastructure? What's now on your agenda, after hearing the news, to reach those goals as you've described them?

EVO looks pretty nice

Morshed: There are a couple of things. EVO looks pretty nice. I was out on the floor looking at it yesterday and talking with the CIO, and I see it as something that we might be able to use for some of our outlying offices, where we have around 100 to 150 people. We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).

The other piece is the recent acquisition of CloudVolumes, and looking at how we can use that to leverage our VDI infrastructure. We're using another product in that space right now, but now that CloudVolumes is part of VMware, it's more interesting, because the chances of all the software being upgraded and updated at the same time, with interoperability, are greater if it's a VMware product.

For us, it’s been a real struggle to make sure that all the products we use interact, and that when there's an upgrade, everything upgrades at the same time. To me, those are the two biggest things that I'm getting out of the announcements.

Gardner: Right, and it’s like building a virtuous cycle of adoption benefits, because doing SDDC built on virtualization provides a public-cloud-like benefit. Then you can start realizing those end-user computing benefits like VDI. So there’s really this snowball effect.

Is that something that you’ve been able to demonstrate? Do you have any metrics of success that you can point to?

Morshed: We do have some tangible benefits. We have reduced our CAPEX by somewhere around 40 percent and our OPEX by around 32 percent. I don’t have the numbers, but we have deployed VDI in the Department of Water Resources and we've already virtualized about 600 to 800 desktops. Not only is it helping us save costs there; it’s also used as a remote-access strategy and as a way to help protect our server infrastructure by using VDI for admins.

So there are those tangible things that you can reach out and measure and those intangible things, where it’s allowing us to do something easier and more flexible. That, for me, is the bigger win. The business could come up with more dollars, but to be able to be more agile and more flexible is where it really pays off.

Gardner: So we get productivity, we've got DR, which reduces your risk, and we've got some hard savings and economics. It's pretty compelling. Michael, any thoughts about how those fit together, and which ones are more important to you?

More flexible

Hom: Definitely. This allows us to be more flexible, as Tony said, and there are some things we're trying to do that we would never imagine without an SDDC. It delivers increased security, greater capacity, and new capabilities to our business.

Gardner: How about VMware specifically? Is there some differentiator in terms of how they produce these products that has allowed you to follow this journey? Is this more of a partnership than a procurement relationship? It sounds like the track that VMware takes in its strategy very much aligns with yours.

Morshed: It’s very much a partnership. In fact, we basically only want to work with business partners. We don’t want to work with vendors, because we don’t need someone to sell us something and walk away. VMware has been hand-in-hand with us for this whole journey.

When we look at other products for the mix, we look for deep partners of VMware, because we know virtual machines (VMs) are core. So when we look at storage partners, and when we're looking at networking partners, we make sure that those partners are partners of VMware.

One of the things that we find is the interoperability question once everything has been virtualized. Everything has to connect, and it’s not a single stack. So if one thing gets upgraded, we need to make sure that everything across the stack can accept that upgrade. Otherwise, we lose the ability to take advantage of the upgrade until everybody else catches up.

Early on, we were in that position and we’re doing everything we can to remove ourselves from that position.

Gardner: Michael, any thoughts on the nature of the VMware relationship that you could point to, in terms of an approach that others might want to consider?

Hom: Definitely. We consider VMware a strategic partner. A couple of things illustrate that. We've been involved with VMware's Excellware and Velocity programs, and that's been two-fold. On the Velocity side, we have mocked up a fully working SDDC with NSX, with virtualized automation, operations, and business as a stack.

Gardner: One of the things we've heard here at VMworld is be bold, be brave, be a little bit aggressive. Go out there and do these things. Any thoughts for other organizations that are just dipping their toes into the water? Is it higher risk than reward to be bold and brave and get in early? Or is it perhaps something that allows you to be a differentiator and be better in your own environment, whether it’s the public or private sector?

Set in stone

Morshed: The first thing is always to question what you’ve got that's "set in stone," because most of it is not set in stone. We've all heard a lot of things that you can't do. You can’t virtualize Oracle -- but you can. You can't do this, you can't do that, you can't run the network virtualized on top of the storage. That's all stuff that you actually can do.

You have to really look at it, peel it back, make sure that "you can't" is an actual thing, and then figure out how to get around it. The way I see it is that, as the world turns, things morph, and if you don’t move into this virtualization space, you're going to be left behind. You're going to be the guy making buggy whips. There are no buggy whips running around. There’s no use for them.

We’re all being asked to do so much more with the same resources or fewer resources. We're all being pushed to keep up with how the demand is going out there. Technology is just jumping, and this is the only way on the infrastructure side to keep up with that.

Gardner: Michael, any other thoughts in terms of 20/20 hindsight on your experience and why being aggressive and being bold has paid off?

Hom: Virtualization is definitely up and running, at least in state organizations. It used to be something that we might do, or might use as a toolset, but from looking at VMworld this week, virtualization is the industry standard.

If you don’t take it on, then you really won’t be able to respond to business needs. What happens is that the official IT organization becomes obsolete, ad-hoc IT organizations spring up, and those become the norm. If you want to be relevant, you have to use every toolset you can to meet the business needs.

Gardner: Very good. I'm afraid we’ll have to leave it there. We've been learning how the California Natural Resources Agency in Sacramento has embarked on and benefited from an SDDC strategy. I'd like to thank our guests, Tony Morshed, Chief Technology Officer for the California Natural Resources Data Center in the California Department of Water Resources. Thank you so much, Tony.

Morshed: No problem. It’s been a pleasure.

Gardner: And we've also been joined by Michael Hom, Data Center Chief in the IT infrastructure services branch for the California Department of Water Resources. Thanks so much, Michael.

Hom: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the recent 2014 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT strategy discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect discussion on how a large state agency harnesses broad virtualization to do more with less in IT while remaining agile and efficient. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tuesday, October 07, 2014

MIT Media Lab Computing Director Details the Virtues of Cloud Computing for Agility and DR

Transcript of a BriefingsDirect podcast on how MIT researchers are reaping the benefits of virtualization.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you directly from the VMworld 2014 Conference. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of BriefingsDirect IT strategy discussions.

We’re here in San Francisco the week of August 25 to explore the latest developments in hybrid cloud computing, end-user computing, software-defined data center (SDDC), and virtualization infrastructure management.

Our next innovator case study interview focuses on the MIT Media Lab in Cambridge, Massachusetts, and how they're exploring the use of cloud and hybrid cloud and enjoying such benefits as speed, agility, and disaster recovery (DR).

To learn more about how the MIT Media Lab is using cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. Welcome.

Michail Bletsas: Thank you. 

Gardner: Tell us about the MIT Media Lab. How big is the organization? What’s your charter?

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.

The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We're not an applied research lab in the sense that we're not looking at what's going to happen two or three years from now. We're not looking at short-term future products. We're looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab and, unlike a normal IT department, we're kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We're much heavier than other departments in how many devices you're going to see. We're on a pretty complex network and we run a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I've been there for almost 20 years. We started with very exotic stuff. These days, you still build exotic stuff, but you're using commodity components. VMware, for us, is a major piece of this strategy because it allows us a more efficient utilization of our resources and allows us to control a little bit the server proliferation that we experienced and that everybody has experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between virtual machines (VMs), physical computers, and devices, but there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to easily deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the size of the scope of our domain. What's very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we've been pretty successful with that.

Gardner: So your requirements are to support those 350 people with dynamic workloads, many devices. What is it that you needed to do in your data center to accommodate that? How have you created a data center that’s responsive, but also protects your property, and that allows you to reduce your security risk?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We've been fortunate enough that we have multiple, small data centers concentrated close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment that’s as dynamic as we are.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it’s something on the order of 3,500 users, as opposed to 350. It’s very dynamic in that it changes month to month. The important attribute of an environment like this is that we can’t have too many restrictions. We don’t have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn’t impose too many restrictions on what operating systems you use or what the configurations of the VMs are. That’s why we find that solutions like the general public cloud are, for us, only applicable to a small part of our research. Pretty much every VM that we run is different from the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what's going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: Of course, virtualization sounds like a great fit when you have such dynamic, different, and varied workloads. But what about taking advantage of cloud, public cloud, and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover. What's the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use hybrid cloud right now that’s three-tiered. MIT has a very large campus. It has extensive digital infrastructure running our operations across the board. We also have facilities that are either all the way across campus or across the river in a large co-location facility in downtown Boston and we take advantage of that for first-level DR.

A solution like the vCloud Air allows us to look at a real disaster scenario, where something really catastrophic happens at the campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It’s a second level for us. We have our own VMware infrastructure and then we can migrate loads to our central organization. They're a much larger organization that takes care of all the administrative computing and general infrastructure at MIT at their own data centers across campus. We can also go a few states away to vCloud Air [and migrate our workloads there in an emergency].

So it’s a very seamless transition using the same tools. The important attribute here is that, if you have an operation that small, 10 people having to deal with such a complex set of resources, you can't do that unless you have a consistent user interface that allows you to migrate those workloads using tools that you already know and you're familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: I'd like to explore a little bit more this three-tiered cloud using common management, common APIs. It sounds like you're essentially taking metadata and configuration data, the things that will be important to spin back up an operation should there be some unfortunate occurrence, and putting that into that public cloud, the vCloud Air public cloud. Perhaps it's DR-as-a-service, but only a slice of DR, not the entire data. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that tend to drive a lot of our websites. A lot of our internal systems drive our CRM operation. They drive our events management. And there is a lot of knowledge embedded in those databases.

It's lucky for us, because we're not such a big operation. We're relatively small, so you can include everything, including all the methods and the programs that you need to access and manipulate that data within a small set of VMs. You don’t normally use them out of those VMs, but you can keep them packaged in a way that in a DR scenario, you can easily get access to them.

Fortunately, we've been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the older days, we didn’t have that multi-tiered cloud in place. All we had were backups in remote data centers. If something happened, you had to go in there, find some unused hardware that was similar to what you had, restore your backup, and so on.

Now, because most of MIT's administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this." It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It is a combination of having visibility and nice dashboards that reveal patterns of activity that you might not be aware of, even in an environment that's not as large as ours.
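
The usage-pattern check Bletsas describes can be scripted against vCenter's performance manager. Below is a hedged pyvmomi sketch that pulls recent virtual-disk read rates for one VM; the VM name and credentials are placeholders, but this is the kind of query that might surface a database busier than expected:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="readonly",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Look up the counter ID for virtualDisk.read.average (KBps read rate).
counter = next(c for c in perf.perfCounter
               if c.groupInfo.key == "virtualDisk"
               and c.nameInfo.key == "read"
               and c.rollupType == "average")

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "member-db-01")
view.Destroy()

query = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=counter.key,
                                              instance="*")],
    intervalId=20,   # 20-second real-time samples
    maxSample=180)   # roughly the last hour

for result in perf.QueryPerf(querySpec=[query]):
    for series in result.value:
        print(series.id.instance, series.value[-5:])  # last few samples

Disconnect(si)
```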

Gardner: We’re here at VMworld 2014. There's been quite a bit of news, particularly in the vCloud Air arena. We've talked and heard about betas for ObjectStore and for virtual private cloud. Are these of interest to you now that you’ve done a hybrid cloud using DR-as-a-service? Does anything else intrigue you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I've said, we're a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don't run very extensive firewalls. We're a knowledge dissemination and distribution organization, and we don't have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and which we fully control, is a very important thing for us.

It allows for much faster DR. We can do DR using the same IPs across town right now, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IPs, DNS servers, load balancers, and things like that. That is important.

The other trend that is also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can't afford to put all of its disk storage on very high-end systems, having a little bit of expensive flash storage and then a lot of cheap storage is the way to go.

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

Gardner: The benefits of having a highly virtualized environment -- including the data center and the end-user computing endpoints -- give you that flexibility of taking workloads and apps from development to test to deployment. So there's a common infrastructure approach there, but also a common infrastructure across cloud, hybrid cloud, and DR.

So it's sort of a snowball effect. The more virtualization you adopt, the more dynamic and agile you can be across many more aspects of IT.

Bletsas: For us, experimentation is the most important thing. Spinning up a large number of VMs to do a specific experiment is very valuable, and being able to commandeer resources across campus and across data centers is a necessary requirement for an environment like this. Flexibility is what we get out of that, along with agility and speed of operations.

In the older days, you had to go and procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in our data centers. We go there from time to time, but not as often as we used to, and that's very liberating. It's also very liberating for people like me, because it allows me to do my work anywhere.

Gardner: Very good. I'm afraid we’ll have to leave it there. We’ve been discussing the virtues of cloud computing and hybrid cloud computing with the MIT Media Lab. I’d like to thank our guest, Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab in Cambridge, Mass. Thanks so much.

Bletsas: Thank you.

Gardner: And also a big thank you to our audience for joining this special podcast series coming to you directly from the 2014 VMworld Conference in San Francisco.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect IT discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how MIT researchers are reaping the benefits of virtualization. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Monday, August 04, 2014

A Gift That Keeps Giving, Software-Defined Storage Now Demonstrates Architecture-Wide Benefits

Transcript of a BriefingsDirect podcast on the future of software-defined storage and how it will have an impact on storage-hungry technologies, especially VDI.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest podcast explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage.

Gardner
The ability to choose low-cost hardware, to manage across different types of storage, and to radically simplify data storage via intelligent automation means a virtual rewriting of the economics of data.

But just as IT leaders seek to simultaneously tackle storage pain points of scalability, availability, agility, and cost, software-defined storage is also providing significant strategic- and architectural-level benefits.

We're here now with two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage. Please join me now in welcoming our guests, Alberto Farronato, the Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware. Hello, Alberto.

Alberto Farronato: Hello, Dana. Glad to be here, thanks.

Gardner: We're also here with Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Welcome, Christos.

Christos Karamanolis: Thank you. Glad to be here.


Gardner: Alberto, we often focus on the speeds and feeds and the costs -- the hard elements -- when it comes to storage and modernization of storage. But what about the wider implications?

Software-defined storage is changing something more fundamental than just the economics of data. How do you see the wider implications of what's happening, now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it's also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage at all levels, building an evolution of storage parallel to what we did with compute. We're very excited about what's coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How do you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.

Karamanolis
How does the end user adapt their storage platform to the needs they have in terms of the capabilities of the hardware; the ratios of the different types of storage; and the networking, CPU, and memory resources needed to execute and provide their service?

That's one part of flexibility, but there is another very interesting part, which addresses a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), the VM being one way of packaging applications.

Today, customers virtualize environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an investment up front in resources that they'll need over a long period. So they create those logical unit numbers (LUNs), file services, or whatever is needed, for a period of time that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources they need are provisioned on-demand, exactly what the application and the user need -- nothing more, nothing less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.


Those two kinds of flexibility are the fundamental aspects of any software-defined storage.
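As a rough illustration of that on-demand, policy-driven model, here is a small, hypothetical sketch. The class and field names are invented for this example and are not VMware's storage policy API; they only model the idea that a VM's storage is carved out at provisioning time, driven by a declared policy rather than a pre-built LUN.

```python
# A minimal, hypothetical model of policy-driven, on-demand provisioning.
# Names and fields are illustrative; this is not VMware's SPBM API.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int   # availability requirement
    iops_limit: int             # performance requirement
    replicate_offsite: bool     # data-protection requirement

@dataclass
class VolumeRequest:
    vm_name: str
    capacity_gb: int
    policy: StoragePolicy

def provision(req: VolumeRequest) -> dict:
    """Carve out exactly what the VM needs, when it needs it -- no
    pre-created LUN sized for years of anticipated growth."""
    return {
        "vm": req.vm_name,
        "capacity_gb": req.capacity_gb,            # nothing more, nothing less
        "replicas": req.policy.failures_to_tolerate + 1,
        "iops_limit": req.policy.iops_limit,
        "offsite_copy": req.policy.replicate_offsite,
    }

gold = StoragePolicy("gold", failures_to_tolerate=1, iops_limit=5000,
                     replicate_offsite=True)
print(provision(VolumeRequest("erp-db-01", capacity_gb=200, policy=gold)))
```

Changing the policy object, rather than migrating data between pre-built containers, is what makes the model responsive, as the discussion below elaborates.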

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level? Is there a marker that we can point to that says, "This is actually changing things beyond just a technology sphere and into the business sphere?"

Farronato
Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to business needs -- and to changing business needs -- by delivering what your applications require, faster.

As Christos was saying, in the old model, you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels -- performance, availability, and other things your applications would require of storage -- then spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on-the-fly for a policy approach -- you don’t have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

Gardner: As we gain more agility, that prompts more use of software-defined storage, or in your case, Virtual SAN. With that acceleration of adoption, we begin to see more beneficial consequences, such as better manageability of data as a lifecycle, perhaps operations being more responsive to developers so that a DevOps benefit kicks in.

Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers but as a more general trend, is that infrastructure administrators -- the guys who do the heavy lifting in the data centers day in and day out, who manage much more than the traditional servers and applications -- are getting more and more into managing networks and data storage.

Find SDS technical insights and best practices on the VSAN storage blog.

Talking about changing models, what we see is that tools have to be developed, and software-defined storage is a key technology evolution behind that -- tools for those administrators to manage all the resources they need to make their day-to-day jobs happen.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage visible for people who are not necessarily experts in the esoterics of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy-driven automation and intelligence in how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we're now going from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and can span tens or sometimes hundreds of physical nodes and/or entities. Isn't complexity greater in the latter case?

The reality is that, whether out of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there has been a parallel evolution in the ideas of how you manage your infrastructure, including the management of storage.

As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses, in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, which may evolve over time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.


Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources -- whether the data is spread across multiple physical devices, across the network, or replicated asynchronously to a remote location to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case only two copies of the data may be needed, or low-end hardware that would require three or four copies of the data. All those things are determined automatically by the platform.

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configuration of a disk array. If the requirements cannot be met, it is because the necessary technologies are not incorporated into the storage platform.
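As a worked instance of the "two copies versus three or four copies" point: with simple mirroring, a policy that tolerates n failures implies n+1 copies of the data, and Virtual SAN's commonly documented sizing rule also calls for at least 2n+1 hosts. The sketch below assumes that mirroring rule; treat the exact figures for any other platform as an assumption.

```python
# Worked example of the availability-to-copies mapping discussed above.
# The n+1 replica count and 2n+1 minimum host count follow Virtual SAN's
# commonly documented mirroring rule; figures for other platforms may differ.

def mirroring_footprint(failures_to_tolerate: int, capacity_gb: float):
    replicas = failures_to_tolerate + 1        # data copies kept
    min_hosts = 2 * failures_to_tolerate + 1   # replicas plus witness majority
    raw_gb = replicas * capacity_gb            # raw capacity consumed
    return replicas, min_hosts, raw_gb

for ftt in (0, 1, 2):
    r, h, raw = mirroring_footprint(ftt, capacity_gb=100)
    print(f"FTT={ftt}: {r} copies, >= {h} hosts, {raw:.0f} GB raw per 100 GB")
```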

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated around the VM and the policies that you create and assign to VMs as you create them and as you scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.

That makes the system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system -- radically simple hypervisor-converged storage -- it means bringing that idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy-driven management, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?


Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). It leverages server-side components to deliver shared storage, and by virtue of using server-side resources, right off the bat there are significant savings that you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array can be on the order of 80 percent cheaper when procured as a server component.

The other aspect I'd call out that reduces the overall CAPEX is, as you said, the consume-on-demand approach -- or, as we put it in other terms, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would have with a monolithic array. And as you scale, you scale both compute and IOPS, and that goes hand in hand with the number of VMs that you're running in your cluster.

System growth
 
So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
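A quick, entirely hypothetical bit of arithmetic shows why grow-as-you-go matters. The prices and growth rate below are made up for illustration only; the shape of the comparison -- incremental spend versus a large sunk cost -- is the point.

```python
# Illustrative (entirely hypothetical) comparison of a monolithic array
# bought up front versus server nodes bought as demand grows.
# All prices and growth figures are invented for the arithmetic only.

ARRAY_UPFRONT = 300_000        # hypothetical array sized for three years ahead
NODE_COST = 15_000             # hypothetical server node with local SSD/HDD
NODES_PER_QUARTER = 2          # assumed growth rate

def scale_out_spend(quarters: int) -> int:
    """Cumulative spend when capacity is bought just in time."""
    return NODE_COST * NODES_PER_QUARTER * quarters

for q in (1, 4, 8, 12):
    print(f"after {q:2d} quarters: scale-out ${scale_out_spend(q):,} "
          f"vs. ${ARRAY_UPFRONT:,} sunk on day one")
```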

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Christos, Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That's a very interesting point. As technologists, we sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, within the vertical they control in their IT organization, without having to depend on the company's centralized storage organization.

Find SDS technical insights and best practices on the VSAN storage blog.

What we really see here is a paradigm shift in how our customers use Virtual SAN today: it enables them to have a much faster turnaround for trying new applications and new workloads, and for getting them from test and dev into production, without being constrained by the processes and timelines imposed by a central storage IT organization.

This is a major achievement, and a major tool for VMware administrators in the field, which we believe will lead the way to much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: It sounds as if there's a simultaneous decentralized benefit here, similar to what we saw 30 years ago in manufacturing. Back in the day, you used to have an assembly line approach where one linear process would lead to another, but when you do simultaneous things, you can see a lot more productivity and innovation.

Do you think that there is a parallel between software modernization and manufacturing 30 years ago?

Managing storage

Karamanolis: Certainly we have a parallel here, taking into account the fact that the customers -- the IT professionals who manage storage -- understand the processes and the workflows without necessarily having to understand the internals of the technology that implements those workflows.

This is very much like being part of a production line and understanding the big picture, but without having to understand all the little details of every station of that production line. In both cases, you have a fundamental scalability benefit going down that path.

I say this being fully aware that the real world is demanding. I understand that there may be situations where the IT administrator, whether a VMware admin or a storage expert, has to jump into the situation and troubleshoot something that is going wrong.

He has to troubleshoot, for example, a performance issue, or understand what's happening under the covers when the specified requirements don't seem to match what they're getting.

What we deliver together with Virtual SAN, in an integrated fashion, are sophisticated monitoring and reporting tools that help customers not only understand what's happening in their system, but also analyze any situation end-to-end -- all the way from the application, down to the VM, the hypervisor and the resources the hypervisor assigns to those VMs, and the storage resources consumed at any point in time across the cluster.


Those are the tools that always have to come together with those simple models we're introducing, because you need to be able to handle those exceptional situations.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach you have a more granular way to control the service levels that you deliver to your customers, to your internal customers, and a more efficient way to do it by standardizing through polices rather than trying to standardize service levels over a category of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving and whether it's in compliance with the particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.
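Here is a small, hypothetical sketch of what self-service consumption against predefined service categories might look like. The catalog, function, and field names are invented for illustration; this is not the vCloud Automation Center or OpenStack API, just the pattern of IT defining the categories while users consume them on demand.

```python
# Hypothetical sketch of self-service consumption against predefined
# service categories. The endpoint shape is invented for illustration;
# it is not an actual vCloud Automation Center or OpenStack API.

SERVICE_CATALOG = {
    "gold":   {"failures_to_tolerate": 2, "iops_limit": 10_000},
    "silver": {"failures_to_tolerate": 1, "iops_limit": 5_000},
    "bronze": {"failures_to_tolerate": 1, "iops_limit": 1_000},
}

def request_storage(user: str, category: str, capacity_gb: int) -> dict:
    """Self-service request: the user picks a category, not hardware.
    IT keeps control by defining the catalog; consumption is logged."""
    if category not in SERVICE_CATALOG:
        raise ValueError(f"unknown service category: {category}")
    grant = {"user": user, "capacity_gb": capacity_gb,
             "policy": SERVICE_CATALOG[category]}
    print(f"audit: {user} consumed {capacity_gb} GB of '{category}'")
    return grant

request_storage("app-team-7", "silver", capacity_gb=500)
```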

Gardner: I'm interested in hearing more examples about how this is being used. But before we go to that, there's one question that I get a lot as an analyst.

Perhaps it's because people come from different parts of IT, or have specializations, but people say, "We have software-defined storage, we have software-defined networking, and a highly virtualized data center, and the goal is to become a software-defined data center -- but I don't necessarily understand how these come together, or in what order. How do I go about that?"


Help us understand the role and impact of software-defined storage in the context of a larger software-defined data center.

Karamanolis: This is a challenging question, and I don’t know how far I can go in answering this. What we're trying to do at VMware is allow our customers to experience the various concepts of software-defined data center in a piecemeal fashion.

They can address the most acute of their problems, whether those are the traditional compute-utilization questions or, more recently, network scalability and flexibility, or the need for an easy-to-enter, low-cost storage platform. So yes, we provide and fully support integration across all the software-defined aspects of the data center, in the three dimensions I mentioned.

We will soon be posting some demos of this working with NSX, for example. But we do not prescribe that an IT professional has to use Virtual SAN with NSX, or vice versa, and only in that way. Virtual SAN can be used on its own, with more traditional network configurations. NSX can replace that network infrastructure, and it will work seamlessly with Virtual SAN.

We see different paths of adoption by different customers. Some of the bigger enterprises, including financials, being more sophisticated and perhaps more forward-looking, are more aggressive with a total software-defined data center approach. Other customers are a bit more cautious and apply software-defined principles in the main areas that concern them.

Value proposition

Farronato: When you look at a product like Virtual SAN, one interesting finding, after the first three months that the product has been available, is that the value proposition is really resonating across pretty much all customer segments, from the smaller SMBs, all the way up to the larger enterprise customers.

While it’s difficult to comment on the exact sequence as to how software-defined data center has been deployed, it is interesting to see that a technology like Virtual SAN is resonating pretty much across all the market segments, and so it expresses a value proposition that is broadly applicable.

Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say that they have deployed Virtual SAN, that they have benefited in certain ways, and that they're indicative of what others should expect?

Farronato: Let me give you some statistics and some interesting facts. We can look at some of the early examples where, in the three months since the product became available, we've found significant success already in the marketplace, with a great start in terms of adoption from our customers.

Find SDS technical insights and best practices on the VSAN storage blog.

We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this use case, from the scale-out model to the fact that the hyper-converged storage architecture is particularly suited to addressing the storage issues of a VDI deployment.

DevOps -- or, if you want, preproduction environments, loosely defined as test/dev -- is another area. There are disaster recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.

As I said, the 300 customers that we already have span the gamut in terms of size and names. We have large enterprises and banks, down to smaller accounts and companies, including education and smaller SMBs.

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN for their production environment, on their data analytics platform. There will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS to progress their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that's held people up is the impact on storage and the costs associated with the storage to support VDI. But if you're able to bring down costs, by 50 percent in some cases, using software-defined storage, that radically changes the VDI equation. Isn't that the case, Christos? Can you now say that you can do VDI more cheaply than almost any other approach to a virtualized desktop?

Karamanolis: Absolutely, and the cost of storage is the main impediment to organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you gain out of the storage.

Alberto already touched on the cost of the capacity, referring to the difference in prices one can get from server vendors and from the market, as opposed to similar hardware being procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

That means we can achieve very high performance goals while minimizing the CPU cycles consumed to serve those high I/Os per second. What that means, especially for VDI, is that we use a small slice of the CPU and memory of every single ESXi host to implement this distributed, software-driven storage controller.


It doesn't affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus using an external disk array.

And even though Virtual SAN usage is capped at 10 percent of local CPU and memory on those hosts, the consolidation ratio -- the number of virtual desktops we run on those clusters -- is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. This is the value of Virtual SAN in those environments.

Essentially, you meet the needs, both capacity and performance, of your VDI workloads for a fraction of the cost you would pay with traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of determination about 2014? Maybe this is the year that we turn the corner on VDI, and that it becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center and VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly, one of the goals we set ourselves for this Virtual SAN release was solving the VDI use case -- eliminating probably the last barrier and enabling broader adoption of VDI across the enterprise -- and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference with respect to storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, and cover some of the key initiatives that we're rolling out with our OEM partners around things such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld Conference for storage-related issues.


Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year where the vision that we've been talking about, us and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe that it is going to be strong evidence for the rest of the industry that software-defined storage is real, it is solving real-world problems, and it is here to stay.

By opening up to third parties, through the Virtual Volumes technology that Alberto mentioned, some of the management APIs that Virtual SAN uses in VMware products, we'll also be initiating an industry-wide effort to provide software-defined storage solutions beyond just VMware and the early adopters of this model -- mostly startups so far. It's going to become a key industry direction.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on how one of the most costly and complex parts of any enterprise’s IT infrastructure, storage, is being dramatically changed by the accelerating adoption of software-defined storage.

And we've heard how IT leaders are simultaneously tackling storage pain points, such as scalability, availability, agility, and cost, while also gaining significant strategic and architectural level benefits through software-defined storage. Of course, probably the poster child application for that is VDI.

So a big thank you to our guests, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure, Storage, and Availability at VMware. Thank you so much, Alberto.

Farronato: Thank you. It was great being with you.

Gardner: And we've been joined also by Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Thanks so much, Christos.

Karamanolis: Thank you. It was a pleasure talking with you.

Gardner: And also a big thank you to our audience for joining us once again on BriefingsDirect. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and don't forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on the future of Virtual SAN and how it will have an impact on storage-hungry technologies, especially VDI. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
