Wednesday, August 29, 2012

Performance Management Tools Help Services Provider Savvis Scale to Meet Cloud of Clouds Needs

Transcript of a BriefingsDirect podcast on how IT-as-a-service provider Savvis has met the challenge of delivering better experiences for users at scale.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it's making an impact on people's lives.

Once again, we're focusing on how IT leaders are improving performance of their services to deliver better experiences and payoffs for businesses and end users alike. Our next innovation case study interview highlights how cloud infrastructure and hosted IT services provider Savvis has been able to automate out complexity and add deep efficiency to its operations.

Using a range of performance, operations orchestration and Business Service Automation (BSA) solutions from HP, Savvis has improved its incident resolution and sped the delivery of new cloud services to its enterprise clients.

To learn more about how they did it, we're joined by Art Sanderson, Senior Manager Enterprise Management Tools at Savvis. Welcome, Art. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Art Sanderson: Hi, Dana. Thank you.

Gardner: Tell me first a little bit about Savvis. What kind of organization is it, what are you doing, and what are your main service drivers in the marketplace?

Sanderson: Savvis is recognized as a global IT leader in providing IT as a service (ITaaS) to many of today’s most recognizable enterprise customers around the world. We offer cloud services and hosting infrastructure services to those customers.

Gardner: What are some of the challenges you've faced in terms of managing the scale, building out the business, adapting to some of these new requirements for infrastructure as a service (IaaS)?

Sanderson: Being an IT department of IT departments, or a dynamic service provider, comes with a lot of unique challenges that you don't face in every IT shop you run into. In fact, we have thousands of customers to support, each with their own IT departments. So our solutions have to be able to scale beyond what you would find in a typical IT organization.

Gardner: And I should think that efficiency is super-important. It's all margin to you, when you can save and do things efficiently?

Better SLAs

Sanderson: Absolutely. There are just the efficiencies alone for operational cost, as well as the value that we provide to our customers, being able to provide better service-level agreements (SLAs), so their businesses are up and running and available to them to service their own customers.

Gardner: So, in effect, you have to be better at IT than your customers, or they wouldn’t be interested in using you.

Sanderson: Absolutely. There are definitely some economies of scale there.

Gardner: So tell me a bit about what you've done in terms of management and allowing for better automation, orchestration, and then, how those benefits get passed on.

Sanderson: Sure. We've adopted the HP BSA set of tools as our automation platform, and we've used it in a number of different ways and areas within Savvis. It's been quite a journey. We've been using the tools for approximately three to four years now. We started out with some operational use cases, and they've matured to the point where a lot of the events our monitoring raises are now resolved by automation rather than by our operational staff.

There are definite labor savings there, as well as time savings in mean time to resolution, which adds value for our customers. That's just one of the benefits we're seeing from the automation tools, not to mention the fact that we build a lot of our own key product offerings for the marketplace we serve using the BSA offerings on the back end as well.
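
As a rough illustration of the pattern Sanderson describes, the sketch below matches monitoring events to automated remediation runbooks and only escalates what it cannot handle. It is a minimal, hypothetical Python example; the event types, runbook functions, and escalation logic are invented for illustration and are not Savvis's implementation or the HP Operations Orchestration API.

```python
# Hypothetical event-driven auto-remediation dispatcher (not the HP OO API).
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("remediation")

def restart_service(event):
    """Pretend runbook: restart the failed service named in the event."""
    log.info("Restarting %s on %s", event["service"], event["host"])
    return True

def clear_temp_space(event):
    """Pretend runbook: free disk space on the affected host."""
    log.info("Clearing temp space on %s", event["host"])
    return True

# Map alert types to automated runbooks; anything unmapped goes to an operator.
RUNBOOKS = {
    "service_down": restart_service,
    "disk_full": clear_temp_space,
}

def handle_event(event):
    runbook = RUNBOOKS.get(event["type"])
    if runbook and runbook(event):
        log.info("Auto-resolved %s with no operator involvement", event["type"])
    else:
        log.info("Escalating %s to the operations queue", event["type"])

if __name__ == "__main__":
    handle_event({"type": "service_down", "host": "web01", "service": "httpd"})
    handle_event({"type": "unknown_alert", "host": "db02"})
```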

Gardner: What's an example of some of those services that you've built out?

Sanderson: Our premier services are our Symphony cloud offerings: Symphony VPDC, Symphony Open and Dedicated cloud, and Symphony Database. All of them, in some form or fashion and to varying degrees, use the BSA tools on the back end to power the offerings and automations that we provide to our customers.

Gardner: How do you measure performance benefits? You had a couple of numbers there about efficiency, but is there a set of key performance indicators (KPIs) or some benchmarks? How do you decide that you’re doing it well enough?

Sanderson: From an operational perspective, we do monitor the number of automations we run that we can capture from the operational side of the house. For example, on a typical day we run anywhere from 10,000 to 20,000 automations through our systems, and that adds value back to the business from a labor-savings perspective.

In just this first quarter of 2012 alone, we recognized somewhere in the neighborhood of $250,000 in labor savings just from the automations, from an operational perspective. Again, it's harder to quantify the value added on the business side, because those are solutions that we're offering to the market space that are generating new value back to the organization as a whole.
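
For context on how a quarterly figure like that might be arrived at, here is a purely hypothetical back-of-envelope calculation. Every input below is an invented assumption, not a Savvis number; it simply shows that modest per-event labor savings across tens of thousands of daily automations compound quickly.

```python
# Hypothetical back-of-envelope for quarterly labor savings from automation.
# All inputs are illustrative assumptions, not Savvis figures.
automations_per_day = 15_000        # midpoint of the 10,000-20,000 range quoted above
share_replacing_manual_work = 0.05  # assume only a small fraction displaces hands-on work
minutes_saved_per_event = 5         # assumed manual effort avoided per automated event
loaded_hourly_rate = 45.0           # assumed fully loaded operator cost, USD per hour
days_per_quarter = 90

events = automations_per_day * share_replacing_manual_work * days_per_quarter
hours_saved = events * minutes_saved_per_event / 60
print(f"Estimated quarterly labor savings: ${hours_saved * loaded_hourly_rate:,.0f}")
# With these made-up inputs the estimate lands around $250,000 per quarter.
```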

Gardner: So it's important not only to reduce your costs and improve your productivity, but also to create new revenue. It's sort of a multiple-level trick: you're cutting costs and you're also creating new products and services.

Tell me a bit about the process behind that. How are the people adapting to some of these systems? How do you manage the people and process side of this in order to get those innovations out?

Mature process

Sanderson: From the people and process side, we didn’t start out necessarily doing it the right way from the operations side of the house. But we have matured the process to where we're now delivering solutions in a much more rapid fashion. The business is driving the priorities from an operational perspective as far as what we’re spending our time on.

Then, we can typically turn around automations in a very short time. In some cases, we’ve built frameworks using these tools where we can turn around an automation that used to take two to three weeks. Now, it can take less than an hour to turn around that same automation.

So we’ve gotten really smart at what we’re doing with the tools, not just building something net new every time, but also making the tools more reusable themselves.

From the value to the organization, we’ve also had many groups within the product engineering side of the house take on and learn tools like HP Operations Orchestration (HPOO) and HP Service Activator (HPSA), and leverage their own domain knowledge as network engineers or storage engineers to build net new solutions that we then turn around and offer to our customers.

That eliminates a lot of the business analyst type of work and things like that that would typically go into the normal systems development lifecycle (SDLC)-type process that you would see. We’re able to cut the time to market for the offerings that we’re producing for our customers.

Gardner: And of course, that has a direct bearing on how you can compete in the marketplace, a very dynamic and fast moving marketplace?

Sanderson: Yes, absolutely. It does make us much more agile and responsive to the needs of our customers and the industry.

Gardner: Let's go back to the scale of what you’re doing here just for our audience’s benefit. How large is Savvis? How many physical and virtual servers do you have? Are there any wow numbers you can provide for us about the extent and the size of your operations?

Sanderson: Today, we have about 25,000 servers under management, spread across 50 data centers worldwide, and just to give you an idea, we have approximately 9,000-10,000 automations on a typical day running through HPOO.

As far as the scale and breakdown of the servers, two-thirds of our servers today are virtualized, whether through the cloud or through traditional orders that customers are placing. So we're seeing a lot of growth in virtual machines (VMs) and the cloud space. That's where things are going for our organization, as well as the industry.

Gardner: That's really impressive. I understand you've got a self-service portal and you've been talking about things called self-healing. Maybe you could explain why it's important to have self-service, but then also explain how behind-the-scenes you have self-healing?

Self healing

Sanderson: Our self-healing infrastructure is what I was referring to earlier. We've matured our process and recognized the reusability of using a meta-model to drive the HPOO flows that we're writing. We've taken the patterns we've identified, built a meta-model from them, and then built a user interface in front of it.

That's what I was referring to earlier: if somebody has a new request, they can ask us for it, and then, within a matter of minutes, we can enter the data through the user interface and publish a new flow, without ever having to write new Operations Orchestration flows.
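
To make the meta-model idea concrete, here is a minimal sketch in which a "flow" is just a data record interpreted by a generic engine, so publishing a new flow means filling in fields rather than writing new orchestration code. The record structure, step names, and engine below are hypothetical illustrations, not the actual Savvis meta-model or HP Operations Orchestration interfaces.

```python
# Hypothetical meta-model-driven flow engine: new flows are data, not code.
import json

# A library of reusable steps the engine already knows how to execute.
STEP_LIBRARY = {
    "check_process":   lambda p: print(f"  checking process {p['name']}"),
    "restart_process": lambda p: print(f"  restarting process {p['name']}"),
    "notify":          lambda p: print(f"  notifying {p['channel']}"),
}

def run_flow(flow):
    """Interpret a flow described purely as data."""
    print(f"Running flow: {flow['flow_name']}")
    for step in flow["steps"]:
        STEP_LIBRARY[step["action"]](step["params"])

# "Publishing" a new flow through a UI amounts to emitting a record like this.
new_flow = json.loads("""
{
  "flow_name": "restart-web-tier-on-failure",
  "steps": [
    {"action": "check_process",   "params": {"name": "httpd"}},
    {"action": "restart_process", "params": {"name": "httpd"}},
    {"action": "notify",          "params": {"channel": "noc-oncall"}}
  ]
}
""")

run_flow(new_flow)
```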

Gardner: Tell me a little bit about your future plans. Do you have an upgrade path? You're here at Discover learning about new products and services. Do you have any idea of what some of the next steps will be for continuing on this march to improve both innovation and productivity?

Sanderson: Obviously, the reason we come to these conferences is to learn where HP is going, so we can make sure we're in alignment, both with our business needs and with where the products we use to drive our own solutions are going.

It's critical that we're able to maintain an upgrade path and support our business. We've already started to plan, based on what we see coming down the path from HP, for future infrastructure and even dedicated infrastructure as our business continues to grow. For example, for the Symphony products that we were referring to earlier, we have to break off more and more dedicated infrastructure to match the scale and capacity at which they're growing.

We would never have anticipated, when we started a few years ago, that a customer would come to us and say they want to order 400 VMs or 1,000 VMs, but customers are coming to us today doing exactly that. That's the kind of scale we're seeing, even just a year into the offerings that we're providing to the marketplace.

Gardner: So clearly, there's an opportunity for you to dig in and provide that scale when it's called for, and I guess that's almost the definition of cloud: having that elasticity and dynamic agility.

Sanderson: Absolutely. That's spot on.

Gardner: Very good. We’ve been talking with Savvis, a cloud services provider, and I want to thank our guest. We were talking with Art Sanderson. He is a Senior Manager of Enterprise Management Tools at Savvis. Thank you very much.

Sanderson: Thank you, Dana.

Gardner: And I also want to thank our audience for joining this special HP Discover Performance podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from HP Discover 2012 on how an IT-as-a-service provider has met the challenge of delivering better experiences for users. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Tuesday, August 28, 2012

Why Success Greets NYSE Euronext's Community Platform for Capital Markets Cloud

Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference focusing on applying the cloud model to providing a range of services to the financial industry.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2012 VMworld Conference in San Francisco. We're here the week of August 27 to explore the latest in cloud computing and software-defined datacenter infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of VMware sponsored BriefingsDirect discussions.

It has been a full year since we first spoke to NYSE Euronext at the last VMworld Conference. We heard then about their Capital Markets Community Platform, a vertical-industry services cloud targeting the needs of Wall Street IT leaders.

As an early adopter of innovative cloud delivery and a groundbreaking cloud business model, we decided to go back and see how things have progressed at NYSE. We will learn now, a year on, how NYSE's specialized cloud offerings have matured, how the business of the financial services industry has received them, and explore how providing cloud services as a business has evolved.

We're joined by Feargal O'Sullivan, the Global Head of Alliances at NYSE Technologies. Welcome to BriefingsDirect, Feargal. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Feargal O'Sullivan: Thank you very much, Dana. Nice to be here.

Gardner: Tell me how it's going. The Capital Markets Community Platform, as we discussed, is a set of cloud services that you're providing to other IT organizations to help them better support their companies and their customers. How have things progressed over the past year?

O'Sullivan: We've been very happy with the progress we've made over the past year. When we announced at VMworld last year, we had just gone into early access for our first clients in our data center in the New York, New Jersey, Connecticut tri-state area, where we have all of our US-based markets running: the New York Stock Exchange markets, the Arca electronic markets, and AMEX.

That has since gone into production, has a number of clients on it, has been received very well by the community, and is really serving as a linchpin of our strategy of building a global capital markets community.

Since the success of that, we've actually progressed further, to the point of having deployed the same environment in a second data center that we own and run just outside of London, in a town called Basildon, which is where we run all of our European markets, the Euronext side of NYSE Euronext.

We now have an equivalent VMware-based cloud environment and a range of ancillary services for the capital markets industry available in that location. Clients can now access, as a service, both infrastructure and platform capabilities in both of those facilities.

Furthermore, we've extended to two other financial centers in the world, one in Toronto and one in Tokyo. That's a slightly more stripped-down version of the community platform, but it's very useful for clients who are expanding their business and going global.

Four locations

Now, we have those four locations up and running in production with production clients, so we are very happy with that progress.

Gardner: That's very impressive growth. In order to move this set of capabilities across these different geographies and into the data centers that you have created or acquired, has the whole software-defined datacenter model helped? I would think that in the older days -- 10 or 15 years ago, with individually supported applications on individual stacks of hardware and storage -- that would have been a far more difficult expansion project.

So what is it about the way that we're doing things now in the modern data center that's allowed you to build out so quickly?

O'Sullivan: Clearly, the technology has advanced significantly from the old days. The capability around virtualization at the hardware server level with the VMware hypervisors, and in particular the vCloud service, gives clients their own control over their environment.

Also on the networking side, it's become much more viable for clients to deploy into a shared environment while still maintaining confidence that they're going to get both the security profile they're looking for and the performance capability.

We use the EMC VNX array with the FAST Cache capability to give a very stable performance profile based on demand. It allows different workloads, and yet each gets very good performance and response time. So there are many components along the way. Also, management and monitoring of these types of infrastructures have improved.

Our clients have certainly seen that enhancement in the technology. The financial services industry is unique in the way it leverages technology in two respects.

One, the security profile is absolutely critical. Security isn't just about customer data, but about application development and tools of the trade, intellectual property that firms might have, trading strategies, analysis, analytics, and other types of components that they develop and build. Firms feel these are highly proprietary in nature and don't want to allow anybody to get access to them. So they place security extremely high on the list.

The other unique aspect is performance. It's a slightly different performance model from your typical three-tier web store environment. Financial services firms, first of all, push very high volumes of content through their applications. They need to do so in microseconds, or at least milliseconds, of response time and latency, and, most importantly, they need to do so predictably.

With a big batch job of some kind, say a genetic folding job, you drop off a job, go away for 12 hours, and you come back. A little bit of clearly inefficient processing time is not great, because that drags out the whole thing over time, but there is no sort of critical "need it here," "need it now" requirement. So latency spikes are less of a problem.

Latency spikes

But in our industry, latency spikes are a real problem. People look for predictable latency, so we had to make sure that we applied a very tight security profile to our cloud, and a very high performance profile as well.
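
A small worked example shows why averages hide the spikes O'Sullivan is describing. The sketch below compares the mean of a synthetic latency sample with its tail percentiles; the occasional 20-millisecond stall barely moves the mean but dominates the 99.9th percentile. The numbers are simulated for illustration, not real trading measurements.

```python
# Why latency spikes matter: mean vs. tail percentiles on a synthetic sample.
import random
import statistics

random.seed(42)

# Simulate 100,000 acknowledgements: mostly ~200 microseconds, with a rare
# ~20 millisecond stall (e.g., a queue buildup) about 0.2% of the time.
latencies_us = [
    random.gauss(200, 20) if random.random() > 0.002 else 20_000.0
    for _ in range(100_000)
]

def percentile(data, pct):
    ordered = sorted(data)
    index = min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1)))
    return ordered[index]

print(f"mean  : {statistics.mean(latencies_us):8.1f} us")
print(f"p99   : {percentile(latencies_us, 99):8.1f} us")
print(f"p99.9 : {percentile(latencies_us, 99.9):8.1f} us")
# The mean stays in the low hundreds of microseconds, while p99.9 exposes
# the stalls that a predictable-latency requirement is really about.
```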

Gardner: So as you've expanded across different market regions and brought this into more of your portfolio for more of your customers, have you also increased the services? Last time, we talked about some services that were very impressive, but how have you been able to build on this cloud in terms of those value-added services that you deliver specifically to a financial clientele?

O'Sullivan: That's why we built our cloud. There are many service providers who offer very valuable cloud capabilities based on core infrastructure and core computing, and they do so very well. However, we consider ourselves a vertical-industry community. We're specifically focused on capital markets participants. We try to make it cheaper, more cost-effective, and more readily accessible for a wider range of participants to get access to the markets.

So in our cloud and our community, we provide a range of platforms and services that we have added. The core is "Come into our vCloud Director environment and access your compute infrastructure." By the way, we have a Compute On Demand Virtual Edition, and we also have a Compute On Demand Physical Edition for those cases where that latency issue is of the utmost importance.

Then, we provide clients with the value-added features we know they need, because they're in the capital markets business. The key one is market data. This is something that is absolutely critical in financial services, because every trade, no matter what you are buying or selling, always starts with a quote. Even if you walk into a shop and ask how much a can of soda would be, they say it's $1 or $1.20, whatever it is, and then you decide whether you want to buy.

So in the financial services industry market data is the starting point, the driver of all the business. And the volumes on this, the sheer size of the content that comes down, is really outstanding. It's at the point now that even if you were to just subscribe to all North American equities and options, you'd need a 10-gigabit Ethernet pipe, and at points during the day, you're probably using upwards of 8 gigabits of that pipe just to get all that content.

Obviously, we can provide raw content, but we've added a range of services into our cloud and into the community. We can say, "We can offer you a nice filtered market data feed, where you just present us with the list of instruments you want, and we can add value-added calculations, do analytics, and provide that to you."
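
As an illustration of that kind of value-added filtering, the short sketch below subscribes to a hypothetical raw tick stream, keeps only a requested instrument list, and attaches a running volume-weighted average price (VWAP) to each surviving tick. The feed, field names, and API are invented for the example and are not NYSE Technologies' actual service interfaces.

```python
# Hypothetical filtered market-data feed: client supplies an instrument list,
# the service drops everything else and adds a derived analytic (running VWAP).
from collections import defaultdict

def filtered_feed(raw_ticks, instruments):
    """Yield only requested symbols, enriched with a running VWAP."""
    notional = defaultdict(float)   # running sum of price * size per symbol
    volume = defaultdict(float)     # running sum of size per symbol
    wanted = set(instruments)
    for tick in raw_ticks:
        if tick["symbol"] not in wanted:
            continue                # the filter is where the bandwidth saving comes from
        notional[tick["symbol"]] += tick["price"] * tick["size"]
        volume[tick["symbol"]] += tick["size"]
        yield {**tick, "vwap": notional[tick["symbol"]] / volume[tick["symbol"]]}

# Toy stream standing in for a full North American equities and options feed.
raw = [
    {"symbol": "ABC", "price": 10.00, "size": 100},
    {"symbol": "XYZ", "price": 55.10, "size": 200},
    {"symbol": "ABC", "price": 10.10, "size": 300},
]

for enriched in filtered_feed(raw, instruments=["ABC"]):
    print(enriched)
```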

We've also developed a historical market-data access service. So if you want to go back and test your strategies against previous days of trading, going back many, many years, we have a database deployed in the cloud. You can query the database, load the data into your virtual environment, and analyze and back-test your strategies.
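
A minimal sketch of that back-testing workflow might look like the following: load historical bars for one instrument from a database in the hosted environment and replay a simple moving-average strategy against them. The schema, table, and strategy are assumptions made up for the example, not the actual service.

```python
# Hypothetical back-test against a hosted historical market-data store.
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the cloud-hosted historical database
conn.execute("CREATE TABLE bars (symbol TEXT, day TEXT, close REAL)")
conn.executemany(
    "INSERT INTO bars VALUES (?, ?, ?)",
    [("ABC", f"2012-08-{d:02d}", 10.0 + 0.1 * d) for d in range(1, 21)],
)

closes = [row[0] for row in conn.execute(
    "SELECT close FROM bars WHERE symbol = ? ORDER BY day", ("ABC",)
)]

# Toy strategy: hold the stock whenever the 3-day average is above the 8-day average.
cash, shares = 1000.0, 0.0
for i in range(8, len(closes)):
    short_avg = sum(closes[i - 3:i]) / 3
    long_avg = sum(closes[i - 8:i]) / 8
    price = closes[i]
    if short_avg > long_avg and shares == 0:
        shares, cash = cash / price, 0.0        # go long
    elif short_avg <= long_avg and shares > 0:
        cash, shares = shares * price, 0.0      # flatten the position
print(f"Final portfolio value: {cash + shares * closes[-1]:.2f}")
```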

We've added order-routing capabilities, so when you are ready to send your orders to the market, if you are a market maker yourself, you might go direct to our gateway. If you're a sponsored participant, you might go through our risk-managed gateway, which would be sponsored by a broker.

Or if you are just a regular buy-side firm, a money manager, you might use our routing network and ask us to route your orders to the different brokers or the different markets, and we can handle that. Those are the two ends of the trade.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Integration pieces

On Thursday, Aug. 30, I'm going to be presenting with VMware and EMC in one of the breakout sessions about us moving up the stack to start offering more of the integration pieces of this. We're using the Spring environment and a range of other VMware tools, GemFire, and so on, to demonstrate a full trading system deployed in the virtual environment with the integration tools -- all running hosted in our environment.

It's more of a framework that we're showing, but it provides platform as a service (PaaS), not just the market data in, which is our specialty, and the order routing out. Once you're within your environment, the range of additional tools makes it easy for you to develop and customize your own trading tools and your own trading strategies. That's something I will be talking about on Thursday.

Gardner: That's very interesting. It appears that what you've done here with your intermediary cloud is develop fit-for-purpose value for things such as data services. Then, you've applied that to other value-added services like order services and now even integration services.

I think it's a harbinger of what we should expect in many other industries. Rather than a fire hose of services or data, picking and choosing and letting an intermediary like you provide the value-add seems more efficient and valuable.

Looking at this as a value proposition, how has this been going as a business? Have you been enjoying uptake? I know you can't go into too much detail, but has the reception in the market satisfied your initial or hopeful business requirements around this as a business, as a profit and loss center?

O'Sullivan: The good news is that we've definitely had great progress here. We have a number of clients in all of the locations I mentioned. We're continuing to grow. It's a tough environment, as you can imagine, both just in the general economy and in particular in the financial services industry. So we expect to continue to grow this significantly further.

We have been certainly very happy with the uptake so far. We knew that we were going out well ahead of everybody else and we were very keen to do so, because we see and understand the vision that VMware and EMC in particular have been promoting over the past few years. We agree with it fully. We feel like we're uniquely positioned within the capital markets industry as the neutral party.

Remember, we're just a place where people go to trade. We don't decide what you buy or what you sell or how much it should be. We just provide the facility, the rules, and the oversight to ensure an orderly market. We wanted to make it easier and more cost-effective for firms to get access to that environment.

So by providing all of this capability, we think we're in a fantastic position as more and more firms continue to explore virtualization and the outsourcing of non-business-critical functions, the kind that used to run on your own servers but are now nothing but overhead.

We see them moving more and more into the cloud. We expect that over the next two or three years this is really going to explode. We intend to be there, established, fully in production, tried and tested, and leading the industry from the front, as we think we should be with a name like the New York Stock Exchange.

Well-known brand

That’s a brand that's so well-known globally. It's the best place to trade. It's the most reliable and most secure place to trade stocks, with the best oversight, and we want to apply that model to all of the services that we offer our clients.

Gardner: Let's drill down a little bit into the notion of being able to add on these services, whether it's integration, orders, or data services. Is there something particular about the architecture that you've adopted that allows you to progress into these newer areas, maybe even in the future delivering feeds through a different format, satisfying needs around mobile devices, say HTML5?

I'm not focused so much on the application that you will be pursuing, but the ability to pursue more applications without necessarily a whole lot of additional infrastructure investment. How does that work?

O'Sullivan: The key for us was that we developed and built our own data center, which we operate and manage. It's a unique environment in Mahwah, New Jersey. We also built and developed our own in Basildon, just outside London. Those two facilities were built as Tier 4-grade data centers, to the highest standards of reliability and security. Every time I go there, I'm amazed at the level of attention, the attention to detail, that our engineers put into designing it to handle all sorts of occurrences.

The reason is that there is so much content created in these facilities. Traders gravitate towards liquidity, and we're a source of liquidity. We're probably the single biggest equity and options venue in North America, so traders are attracted to be there.

Given the electronic nature of the market, forgetting about high frequency trading, everything is electronic. So rather than take applications and deploy them in Timbuktu or wherever you choose to deploy your application, somewhere away from this facility and pay the expense of wide area network connections and so on, it makes more sense to deploy your applications close to the content that you care about.

If there are 8-gigabit bursts of market data on the network, why would you try to bring that 50 miles away to your own office? Why not take the applications that process that data and deploy them there? With that sort of thought process in mind, we continue to build out a range of value-added services that we think clients will require.

We're also well aware that our main purpose in life is to be this neutral venue that creates markets and allows people to come and trade. So we're never going to be the best person, the best firm, or the best vendor at developing every possible requirement that every particular capital markets participant might need. That's where our Global Alliance Program comes in.

I've been focused on our partnerships and on ensuring that, as clients deploy into the cloud, they have access to market data, routing, risk management, back-office processing, and historical analysis. They also need different types of analytics, and they might need other services like email archiving and storage. They need to comply with regulation, and so they need regulatory reporting services.

Not generic

There is such a wide range of capabilities required, and they're very specific. They're not generic. You're not going to go to some telco provider's cloud and find all of these firms there offering you all of these services. There need to be enough potential clients before a vendor is going to want to deploy their applications into an environment like this.

So we're building this community. We're basically saying that we have over 2,000 firms connected to our network, hundreds in our data centers. We have a wide range of vendors and we're continually working to add more, so that they can offer services to those firms.

You can use our infrastructure, our cloud, and some of the integration capability that we've developed, both ourselves and through our relationships with vendors like VMware and EMC, to add on these capabilities that the firms are going to need and make a one-stop shop, a community, a place where you can go to get all the applications needed, similar to the app store model.

Gardner: You've defined what we should expect from public-cloud services. There is some thinking in the marketplace that there will be two or three public cloud providers and everyone will go there, but I really think you have defined it differently: a community close to its customers, recognizing that the architecture and the association with data and integration are essential. Then, that value-add for applications and services on top means an ecosystem of cloud providers and not just a handful. So I really think you've painted the picture of the true future of cloud.

O'Sullivan: Thank you. We certainly see it that way. Our clients have taken us up on it already. While we still think it's early days, we're confident that we're going in the right direction, and that this will definitely, definitely take off in a big way, and within five years we will be looking back at how quaint this conversation was.

Gardner: I really enjoyed speaking with you, Feargal. We have been talking about the success of specialized vertical industry cloud delivery models and how they are changing the IT game in such mission critical industries as financial services.

I would like to thank our guest, Feargal O'Sullivan, the Global Head of Alliances at NYSE Technologies. Thank you, sir.

O'Sullivan: Thank you very much, Dana. I really appreciate the time to speak with you.

Gardner: And I also thank our audience for joining this special podcast coming to you from the 2012 VMworld Conference in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of podcast discussions. Thanks again for listening and come back next time.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast from the 2012 VMworld Conference focusing on applying the cloud model to providing a range of services to the financial industry. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Thursday, August 23, 2012

Legal Services Leader Foley & Lardner Makes Strong Case for Virtual Desktops

Transcript of a BriefingsDirect podcast on how a major law firm has adopted desktop virtualization and BYOD to give employees more choices and flexibility.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how global legal services leader Foley & Lardner LLP has adopted virtual desktops and bring-your-own-device (BYOD) innovation to enhance end-user productivity across their far-flung operations.

We'll see how Foley has delivered applications, data, and services better and with improved control, even as employees have gained more choices and flexibility over the client devices, user experiences, and applications usage.

Stay with us now to learn more about adapting to the new realities of client computing and user expectations. We're joined by Linda Sanders, the CIO at Foley & Lardner LLP. Welcome to BriefingsDirect, Linda.

Linda Sanders: Thank you for having me.

Gardner: We're also here with Rick Varju, Director of Engineering & Operations at Foley. Welcome, Rick. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Rick Varju: Thank you.

Gardner: My first question to you, Linda. When you look back on how you've come to this new and innovative perspective on client computing, what was the elephant in the room, when it came to the old way of doing client-side computing? Was there something major that you needed to overcome?

Sanders: Yes, we had a reduction in our technology staffing, and because of that, we just didn't have the same number of technicians in the local offices to deal with PCs, laptops, re-imaging, and lease returns, the standard things that we had done in the past. We needed to look at new ways of doing things, where we could reduce the tech touches, as we call them, and find a different, faster way to provide a desktop to people.

Gardner: Rick, same question. What was it from more of a technical perspective that you needed to overcome or that you wanted to at least do differently?

Varju: From a technical perspective, we were looking for ways to manage the desktop side of our business better, more efficiently, and more effectively. Being able to do that out of our centralized data center made a lot of sense for us.

Other benefits have come along with the centralized data center that weren't necessarily on our radar initially, and that has really helped to improve efficiencies and productivity in several ways.

Gardner: We'll certainly want to get into that in a few moments, but just for the context for our listeners and readers, tell us a bit about your organization at Foley. Linda, how big are you, where do you do business?

Virtualized desktops

Sanders: Foley has approximately 900 attorneys and another 1,200 support personnel. We're in 18 U.S. offices, where we support virtualized desktops. We have another three international offices. At this time, we're not doing virtualized desktops there, but it is in our future.

Gardner: So you obviously have a very large set of requirements across all those different users and types of users and you're dealing, of course, with very sensitive information, so control and compliance and security are all very top of mind for you.

Sanders: Absolutely.

Gardner: Okay. Let's move to what you've done. As I understand it, desktop virtualization has played an enabling role with the notion of BYOD or allowing your end users to pick and choose their own technology and even perhaps own that technology.

Going to you now, Rick, how has virtual desktop infrastructure (VDI) been an enabler for this wider choice?

Varju: The real underlying benefit is being able to securely deliver the desktop as a service (DaaS). We are no longer tied to a physical desktop and that means you can now connect to that same desktop experience, wherever you are, anytime, from any device, not just to have that easy access, but to make it secure by delivering the desktop from within the secure confines of our data center.

That's what's behind deploying VDI and embracing BYOD at the same time. You get that additional security that wouldn't otherwise be there, if you had to have all your applications and all data reside on that endpoint device that you no longer have control over.

With VMware View and delivering the DaaS from the data center, very little information has to go back to the endpoint device now, and that's a great model for our BYOD initiatives.

Gardner: Just to be clear, of your 2,000 users, how many of them are taking advantage of the BYOD policy?

Varju: Well, there are two answers to that question. One is our more formal Technology Allowance Program, which I think Linda will cover in a little more detail, that really focuses on attorneys and getting out of the laptop and mobility device business.

Then there are other administrative staff within the firm who may have personal devices that aren’t part of our Technology Allowance Program, but are still leveraging some of the benefits of using their personal equipment.

Mobile devices

In terms of raw numbers, every attorney in the firm has a mobile device. The firm provides a BlackBerry as part of our standard practice and then we have users who now are bringing in their own equipment. So at least 900 attorneys are taking advantage of mobility connectivity, and most of those attorneys have laptops, whether they are firm issued or BYOD.

So the short answer to the question is easily 1,500 personnel taking advantage of some sort of connectivity to the firm through their mobile devices.

Gardner: That's impressive, a vast majority of the attorneys and a significant portion, if not a majority, of the rest. This seems to be a win-win. As IT and management, you get a better control and a sense of security, and the users get choice and flexibility. You don't always get a win-win when it comes to IT, isn't that right, Linda?

Sanders: That's correct. Before, we were selecting the equipment, providing that equipment to people, and over and over again, we started to hear that that's not what they wanted. They wanted to select the machine, whether it be a PC, a Mac, an iPad, or smartphone. And even if we were providing standard equipment, we knew that people were bringing in their own. So formulating a formal BYOD program worked out well for us.

In our first year, we had 300 people take advantage of that formal program. This year, to date, we have another 200 who have joined, and we are expecting to add another 100 to that.

As Rick mentioned, we did also open this up to some of our senior level administrative management this year and we now have some of those individuals on the program. So that too is helping us, because we don't have to provision and lease that equipment and have our local technology folks get that out to people and be swapping machines.

Now, when we're taking away a laptop, for example, we can put a hosted desktop in and have people using VMware View. They're seeing that same desktop, whether they're sitting in the office or using their BYOD device.

Gardner: Of course, with offices around the United States, this must be a significant saver in terms of supporting these devices. You're able to do it for the most part remotely, and with that single DaaS provision, control that much more centrally, is that correct?

Sanders: Yes.

Gardner: Do you have any metrics in terms of how much that saved you? Maybe just start at the support and operations level, which over time, is perhaps the largest cost for IT?

Sanders: Over three years, we'll probably be able to reduce our spend by about 22 percent.

Gardner: That’s significant. I'd love to hear more, Linda, about your policy. How did you craft a BYOD policy? Where do you start with that? What does it really amount to?


Realistic number

Sanders: Of course, there's math involved. We did have our business manager within technology calculate for us what we were spending year after year on equipment, factoring in how much tech time is involved in that, and coming up with a realistic number, where people could go out and purchase equipment over a three-year time frame.

That was the start of it, looking at that breakdown of the internal time, selecting a dollar amount, and then putting together a policy, so that individuals who decided to participate in it would know what the guidelines were.

Our regional technology managers met one on one or in small groups with attorneys who wanted to go on the program, went through the program with them, and answered any questions upfront, which I think really served us well. It wasn’t that we just put something out on paper, and people didn’t understand what they were signing up for.

Those meetings covered all the high points, let them know that this was personal equipment and that, in the end, they're responsible for it should something happen. That was how we put the program together and how we decided to communicate the information to our attorneys.

Gardner: You've been ranked very high for client services by outside organizations in the past few years. You have a strong focus on delivering exceptional client services. Has something about the DaaS allowed you to extend these benefits beyond just your employees? Is there some aspect of this that helps on that client services equation? I'll throw that to you, Rick.

Varju: The ease of mobility and some of the productivity gains make a big difference. Being able to get our attorneys quick access to people and information, no matter where they are and no matter what device they're using, is really important today. That does provide some additional benefit for our attorneys when it comes to delivering the best possible service we can to our clients.

Gardner: I know this might be a little bit in the future, but is there any possibility of being able to extend the desktop experience to your actual clients? That is, to deliver applications, data, views of content and documents, and so forth in some sort of device-neutral manner to their endpoint devices?

Varju: One of the things that we're looking at now is unified communications, and trying to pull everything to the desktop, all the experiences together, and one of those important components is collaboration.

If we can deliver a tool that will allow attorneys and clients to collaborate on the same document, from within the same desktop view, that would provide tremendous value. There are certainly products out there that will allow you to federate with other organizations. That’s the line of thinking we're looking at now and we'll look to deploy something like that in the near future.

Gardner: Before we get into how you've been able to do this, I'd like to learn a little bit more about the client satisfaction, that being your internal clients, your employees. Have you done any surveying or conducted any research into how folks adapt to this? Is this something that they like, and why? How about to you, Linda?

The biggest plus

Sanders: The biggest plus, as Rick mentioned, is that people who are mobile have the same desktop no matter where they are. As I talked about before, whether they're in the office or out of the office, they have the same experience.

If a building shuts down, we're not trapped into being unable to deliver a desktop because people can't get into the building and can't work inside. They're working from outside, and it's just like they're sitting here. That's one of the biggest pluses that we've seen and that we hear from people -- just that availability of the desktop.

Gardner: So flexibility in terms of location. I suppose also flexibility in terms of choosing what form factor suits their particular needs at a particular time. Perhaps a smartphone access at one point, a tablet at another time, or another type of engagement, and of course the full PC or laptop, when that’s required.

Sanders: Correct.

Varju: Before deploying VDI and VMware View, we delivered a more generic desktop for remote access. So to Linda’s point, being able to have your actual desktop follow you around on whatever device you are using is big. Then it's the mobility, even from within the office.

When an attorney signs up for the Technology Allowance Program, we provide them a thin client on their desk, which they use when they're sitting in their office. Then, as part of the Technology Allowance Program and Freedom of Choice, they purchase whatever mobility technology suits them and they can use that technology when working out of conference rooms with clients, etc.

So remote access and having their own personal desktop follow them around, the ability to move and work within the office, whether in a conference room, in a lobby, you name it, those are powerful features for the attorneys.

Gardner: I have to believe that this is the wave of the future, but I'm impressed that you've done this to the extent you have done it and across so much of your user base. It seems to me that you're really on the forefront of this. Do you have any inkling to whether you're unique, not only in legal circles, but perhaps even in business in general?

Varju: We're definitely ahead of the curve within the legal vertical. Other verticals have ventured into this. Two in particular have avoided it longer than most, the healthcare and financial industries. But without a doubt, we're ahead of the curve amongst our peers, and there are some real benefits that go along with being early adopters.

Gardner: That provides us an opportunity to get a little bit more information about how you've done this. My understanding is that you were largely virtualized at your server level already. Tell me if that helped, and when you decided to go about this, without getting into too much of the weeds on the technology, how did you architect or map out what your requirements were going to be from that back end?

A lot of times people find that VDI comes with some strings attached that they weren’t anticipating, that there were some issues around storage, network capacity, and so forth. Explain for me, Rick, how you went about architecting and perhaps a little bit about the journey, and both good and bad experiences there?

Process and strategy

Varju: Your comment was correct about how server virtualization played into our decision process and strategy. We've been virtualizing servers for quite some time now. Our server environment is just over 75 percent virtualized. Because of the success we have had there, and the great support from VMware, we felt that it was a natural fit for us to take a close look at VMware View as a virtual desktop solution.

We started our deployment in October of 2009. So we started pretty early, and as is often the case with being an early adopter, you're going to go through some pain being among the first to do what you are doing.

In working with our vendor partners, VMware, as well as our storage integrators, what we learned early on is that there wasn’t a lot of real-world experience for us to draw from when designing or laying out the design for the underlying infrastructure. So we did a lot of crawling before we walked, walking before we ran, and a lot of learning as we went.

But to VMware’s credit, they have been with us every step of the way and have really taken joint ownership and joint responsibility of this project with Foley. Whenever we have had issues, they have been very quick to address those issues and to work with us. I can't say enough about how important that business relationship is in a project of this magnitude.

While there was certainly some pain in the early stages of this project and trying to identify what infrastructure components and capacities needed to be there, VMware as a partner truly did help us get through those, and quite effectively.

Gardner: Rick, as we discussed, you're extending these desktops across hundreds and even thousands of users and many of them at different locations -- homes, remote offices, and so forth. How have you been able to manage your performance across all of those different endpoints, and how critical has the PC-over-IP technology been in helping with that?

Varju: PC-over-IP Protocol is critical to the overall VDI solution and delivering the DaaS, whether it's inside the Foley organization and the WAN links that we have between our offices, or an attorney who is working from home, a Starbucks or you name it. PC-over-IP as a protocol is optimized to work over even the lowest of bandwidth connections.

The fact that you're just sending changes to screens really does optimize that communication. So the end result is that you get a better user experience with less bandwidth consumption.
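
As a conceptual toy (not the actual PC-over-IP wire format), the sketch below splits a frame into tiles, compares it with the previous frame, and "sends" only the tiles that changed. That is the basic intuition behind why transmitting screen deltas uses far less bandwidth than shipping full frames.

```python
# Conceptual toy of delta-based screen updates (not the real PCoIP protocol).
TILE = 8  # tile edge length in pixels

def changed_tiles(prev, curr, width, height):
    """Return coordinates of tiles whose pixels differ between two frames."""
    dirty = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            rows = range(ty, min(ty + TILE, height))
            if any(prev[y][tx:tx + TILE] != curr[y][tx:tx + TILE] for y in rows):
                dirty.append((tx, ty))
    return dirty

width, height = 64, 32
frame_a = [[0] * width for _ in range(height)]
frame_b = [row[:] for row in frame_a]
frame_b[5][10] = 255          # a single pixel changes, e.g. a cursor blink

dirty = changed_tiles(frame_a, frame_b, width, height)
total = (width // TILE) * (height // TILE)
print(f"Sending {len(dirty)} of {total} tiles ({len(dirty) / total:.1%} of the screen)")
```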

Gardner: I'd like to hear more too, Rick, about what you mentioned earlier, in that there are some adjacencies in terms of benefits. When you get to that higher level of server virtualization, when you start to identify your requirements and meet them to bring a full DaaS experience out to your end users, what were some of those unintended consequences that seemed to be positive for you?

Varju: I don’t know if they were unintended, but certainly it was the centralized management of the desktop environment, and being able to deploy patches and software updates from the centralized data center to the VDI infrastructure.

Finding different ways

It's a different way of doing things. Going back to Linda's comments earlier, given the economic situation back in 2009 and 2010, we had to find different ways to do things. VDI just really helped us get there.

So for the centralized management, the secure benefits of delivering a virtual desktop from the data center, being able to deliver desktops faster, the provisioning side of what we do, we just saw great efficiencies and improvements there.

We had a separate production facility at Foley, where physical desktops and laptops were all shipped, set up, burned in, configured, and then shipped out to the offices that needed them. With virtualizing the desktop, we're now able to ship zero client or thin client hardware directly to the office from the manufacturer and eliminate the need for a separate production facility.

That was a benefit that we didn’t think about early on, but certainly something we enjoyed once we really got into our deployment.

Gardner: And how about the applications themselves, on an application lifecycle management (ALM) level? Have you been able to get a better handle on your lifecycle of applications -- which ones to keep, which ones to update or upgrade, which ones to sunset? Have you been able to allow your users to request applications and then deliver them at least faster? What's been the baseline impact on the application process?

Varju: I don't think we've seen a lot of impact on the application delivery side yet, but we will gain more benefit in that area as we move forward and virtualize more of our applications. We do have a number of our core apps virtualized today. That makes it easier for us to deliver applications, but we haven't done that on any large scale yet.

Gardner: Anything on business continuity or disaster recovery that's easier or better now that you have gone to a more of a DaaS approach?

Varju: Absolutely. All you need is an Internet connection and the View client. It's that simple. Like many organizations, we've had our share of natural disasters impacting business. We had a flood in our D.C. office, wildfires in California, and a snowstorm in the Midwest, and each of those instances resulted in shutting down an office for a period of time.

Today, delivering DaaS, our attorneys can connect using whatever device they have via the Internet to their personal Foley desktop, and that's powerful. You don’t have to be in the office to still be productive and serve our clients. You can do that anywhere.

Gardner: Linda, how would you characterize the overall success of this program, and then where do you take it next? Are there some other areas that you can apply this to? You mentioned unified communication and collaboration. What might be in the pipeline for leveraging this approach in the future?

Freedom of choice

Sanders: The success that we've had, as we have spoken about throughout this call, has been the ability to deliver that desktop and to have attorneys speak to their peers and let them know. Many times, we have attorneys stop us in the hallway to find out how they too can get on a hosted desktop.

Pairing that with the BYOD program helped us, giving people that freedom of choice and then providing them with a work desktop that they can access from wherever they are.

We're really looking at unified communications. One of the things that I'm very interested in is video at the desktop. It's something that I am going to be looking at, because we use video conferencing extensively here, and people really like that video connection.

They want to be able to do video conferencing from wherever they are, whether it's in a conference room, outside the office, on their laptop, on a smartphone. Bringing in that unified communication is going to be one of the next things we're going to focus on.

Gardner: Rick, we hear so much these days about cloud computing. If you decide to exploit some of the cloud models or hybrid cloud, where you can pick and choose among different sources and ways of serving up workloads, might your approach be a stepping stone to that? Have you considered what the impact of cloud computing might be, given what you have already been able to attain with BYOD and VDI?

Varju: Cloud computing is certainly an interesting topic and one that you can spend a day on, in and of itself. At Foley, any time we look at a change in technology, especially the underlying infrastructure, we always take a look at what cloud services are available and have to offer, because it's important for us to keep our eye on that.

There is another area where Foley is doing things differently than a lot of our peers, and that's in the area of document management. We're using a cloud-based service for document management now. Where VMware View and VMware, as an organization, will benefit Foley as we move forward is probably more along the lines of the Horizon product, where we can pull our SaaS-based applications or on-premise based applications all together in a single portal.

It all looks the same to our users, and it all opens and functions just as easily, while also delivering single sign-on and two-factor authentication. Just pulling the whole desktop together that way is going to be really beneficial. Virtualizing the desktop and virtualizing our servers are key steps in getting us to that destination.

Gardner: I'm afraid we'll have to leave it there. We've been talking about how global legal services leader Foley & Lardner LLP has adopted virtual desktops and BYOD innovations, and we've heard how using a VMware-centric VDI and BYOD approach has helped enhance end-user productivity, cut total costs, and extended their ability to leverage the future of IT perhaps much sooner than their competitors, all of this across up to 20 remote offices.

I'd like to thank our guests for sharing their story. It's been very interesting. We've been here with Linda Sanders, CIO at Foley. Thanks so much, Linda.

Sanders: Thank you.

Gardner: And also Rick Varju, Director of Engineering & Operations there at Foley. Thank you so much, Rick.

Varju: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to you also, our audience, for listening, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a major law firm has adopted desktop virtualization and BYOD to give employees more choices and flexibility. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Wednesday, August 22, 2012

VMware CTO Steve Herrod on How the Software-Defined Datacenter Benefits Enterprises

Transcript of a BriefingsDirect podcast on how pervasive software enablement helps battle IT datacenter complexity.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the intriguing concept of the software-defined datacenter. We'll look at how some of the most important attributes of datacenter capabilities and performance are now squarely under the domain of software enablement.

We'll see how those who are now building and managing datacenters are gaining heightened productivity, delivering far better performance, and enjoying greater ease in operations and management -- all thanks to innovations at the software-infrastructure level.

A top technology leader at VMware, Steve Herrod has championed this vision of the software-defined datacenter and how the next generation of foundational IT innovation is largely being implemented above the hardware. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We're here with him now to further explore how advances in datacenter technologies and architecture are, to an unprecedented extent, being driven primarily through software. Please join me in welcoming to BriefingsDirect, Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware. Welcome, Steve.

Steve Herrod: Thanks, Dana. It’s a great topic. I'm really looking forward to sharing some thoughts on it.

Gardner: We appreciate your being here. We've heard a lot over the decades about improving IT capabilities and infrastructure management, but it seems that many times we peel back a layer of complexity and we get some benefits, and we find ourselves like the proverbial onion, back at yet another layer of complexity.

Complexity seems to be a recurring inhibitor. I wonder if this time we're actually at a point where something is significantly different. Are we really gaining ground against complexity at this point?

Herrod: It's a great question, because complexity has long been associated with IT, and the real question is why we'll do it differently this time. I see two things happening right now that give us a great shot at this.

One is purely on expectations. All of the opportunities we have as consumers to work with cloud computing models have opened up our imagination as to what we should expect out of IT and computing datacenters, where we can sign up for things immediately, get things when we want them, and pay for what we use. All those great concepts have set our expectations differently.

A good shot

Simultaneously, a lot of changes on the technology side give us a good shot at implementing it. When you combine technology that we'll talk about with the loosened-up imagination on what can be, we're in a great spot to deliver the software-defined datacenter.

Gardner: You mentioned cloud and this notion that it’s a liberating influence. Is this coming from the technologists or from the business side? Is there a commingling on that concept quite yet?

Herrod: It’s funny. I see it coming from the business side, which is the expectation of an individual business unit launching a product. They now have alternatives to their own IT department. They could go sign up for some sort of compute service or software-as-a-service (SaaS) application. They have choices and alternatives to circumvent IT. That's an option they didn't have in the past.

Fundamentally, it comes down to each of us as individuals and our expectations. People are listening to this podcast when they want to, quickly downloading it. This also applies to signing up for email, watching movies, and buying an app on an app store. It's just expected now that you can do things far more agilely, far more quickly than you could in the past, and that's really the big difference.

Gardner: Steve, folks are getting higher expectations based on what they encounter on their consumer side of technology consumption. We see what the datacenters are capable of from the likes of Google and Facebook. Is it possible for enterprises to also project that sort of productivity and performance onto what they're doing, and maybe now that we've gone through an iteration of these vast datacenters, to do it even better?

Herrod: I have a lot of friends at Facebook, Zynga, and Google, running the datacenters there, and what’s exciting for me is that they have built a fully software-defined datacenter. They're doing a lot of the things we are talking about here. But there are two unique things about their datacenters.

One is that they have hundreds or even thousands of PhDs who are running this infrastructure. Second, they're running it for a very specific type of application. To run on the Google datacenter, you write your applications a very specific way, which is great for them. But when you go into the business world, they don't have legions of people to run the infrastructure, and they also have a broad set of applications that they can’t possibly consider rewriting.

So in many ways, I see what we're doing is taking the lesson learned in those software-defined datacenters, but bringing it to the masses, and bringing it to companies to run all of their applications and without all of the people cost that they might need otherwise.

Gardner: Let’s step back for some context. How did we get here? It seems that hardware has been sort of the cutting edge of productivity, when we think of Moore’s Law and we look at the way that storage, networks, and server architecture have come together to give us the speeds and feeds that have led to a lot of what we take for granted now. Let’s go through that a little bit and think about why we're at a point where that might not be the case anymore.

Herrod: I like to look at how we got to where we are. I think that's the key to understanding where we're likely to go from here.

History of IT decisions

We started VMware out of a university, where we could take the time to study history and look at what had happened. I liked looking at existing datacenters. You can look through the datacenter and see the history of IT decisions of the past.

It's traditionally been the case that a particular new need led the IT department to go out and buy the right infrastructure for that new need, whether it’s batch processing, client/server applications, or big web farms. But these individually made decisions ended up creating the silos that we all know about that exist all over datacenters.

They now have the group that manages the mainframe, the UNIX administration group, and the client PC group, and none of them shares common people or common tools as much as they certainly would like to. How we got to where we are was through isolated decisions, each the right thing at the right time, without recognizing the opportunity to optimize across a broader set of the datacenter.

The whole concept of software-defined datacenters is looking holistically at all of the different resources you have and making them equally accessible to a lot of different application types.

Gardner: Earlier, I used the metaphor of an onion. You peel back complexity and you get more. But when it comes to the architecture of datacenters, it seems that the right comparison might be a snowball, which is layered on another layer, or it has been rolling and gathering as it goes, but not rationalized, not looked at holistically.

Are there some sorts of imperatives now that are driving people to do that? We talked about the cloud vision, but maybe it’s security, maybe it’s the economics, maybe it’s the energy issues, or maybe it's all those things together.

Herrod: It’s a little of each. First of all, I like the onion analogy, because it makes you cry, and I think that’s also key. But it’s a combination of requirements coming in at the same time that's really causing people to look at it.

Going back to the original discussion, it starts with the fact that there are choices now. Every single day you hear about a new case where a business unit or an employee is able to circumvent IT to scratch the itch they have for some particular type of technology, whether it's using Dropbox instead of the file servers that the company has, buying their own device and bringing it in, or just signing up for Amazon EC2, instead of using their local datacenter. These are all examples of them being able to go around IT.

But what often happens subsequently is that, when a security problem happens, when you realize that you are not in compliance, IT is left holding the bag. So we get an environment here where the user demand can be handled other ways, but IT has to be able to compete with those.

We have to let IT be a service provider and be able to be as responsive with those, so that they can avoid people going around them. But they still need to be responsible to the business when it comes time to show that Sarbanes-Oxley (SOX) compliance is appropriate or to make sure that your customer records aren’t leaked out to everyone else on the Internet.

That unique balance between user choice and IT control is something we've all seen over the last several decades, and it's showing up again at an even larger scale.

New competition


Gardner: As you pointed out, Steve, IT isn't just competing against itself. That is to say, maybe a 5 percent or 10 percent improvement over how well it did last year will be viewed as very progressive. But they're competing now against other datacenter architects. Maybe it's a SaaS provider, maybe it's a cloud provider, maybe it's a managed service provider (MSP) or a telco that's now offering additional services.

We're really up against this notion that if you don’t architect your datacenter with that holistic software-defined mentality, and someone else does that, you're in trouble.

Herrod: It's a great point. There are rate cards now for what you can use something else for. You might pay 7 cents per hour for this, or so much per transaction. IT departments in general have not traditionally had a good way of, first, even knowing how much their services cost, and second, optimizing to be competitive. So there's an awareness now of how much I'm spending and how long things take, and those metrics are driving this change.
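
To make that rate-card comparison concrete, here is a rough back-of-the-envelope sketch in Python. The figures (7 cents per hour, an $1,800 amortized server share, $25 per month in operations) are illustrative assumptions, not numbers from the discussion.

HOURS_PER_MONTH = 730  # roughly 24 hours x 365 days / 12 months

def monthly_cost_public(rate_per_hour, hours=HOURS_PER_MONTH):
    # One instance billed by the hour, e.g. the 7-cents-per-hour rate card.
    return rate_per_hour * hours

def monthly_cost_internal(capex_per_vm, amortization_months, opex_per_vm_month):
    # Internal cost: the VM's share of hardware spread over its life, plus operations.
    return capex_per_vm / amortization_months + opex_per_vm_month

if __name__ == "__main__":
    public = monthly_cost_public(0.07)              # assumed 7 cents per hour
    internal = monthly_cost_internal(1800, 36, 25)  # assumed $1,800 share over 3 years plus $25/month ops
    print(f"Public cloud: ${public:,.2f}/month  Internal: ${internal:,.2f}/month")

At those assumed rates the hourly instance comes out near $51 per month against roughly $75 internally; the point is only that the comparison becomes a calculation rather than a guess.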

Gardner: Let’s revisit the context and the history here, looking at virtualization in particular. We've seen it extend beyond servers to data, storage, and also networking. Is this part of what you've got in your vision of software defined? Is it strictly virtualization, or does it encompass more? Help me understand how you've progressed in your thinking along these lines, particularly in regard to virtualization?

Herrod: We'll step back a little bit. VMware, over the last 13 years or so, has done a very good job of completely optimizing how servers are used in the datacenter. You can provision a new virtual machine (VM) in seconds. The cost has gone down in orders of magnitude. We've really done a good job on the compute and memory aspect of a datacenter.

But as you said, a couple of things have to happen from there. It's absolutely crucial to look at the breadth of things that are involved in the datacenter. We talk to customers now, and often they say, "Great, you've just lowered the cost and time taken to provision a new server. But when I put this in production, by the way, I care what LUN it ends up on, I have to look at what VLAN is there, and if it's in the right section of my firewall setup."

It might take seconds to provision a VM, but then it takes five days to get the rest of the solutions around it. So we see, first of all, the need to get the entire datacenter to be as flexible and fast moving as the pure server components are right now.
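
One way to picture the difference is to treat the whole request as a single automated workflow rather than a fast VM step followed by days of storage, VLAN, and firewall tickets. The sketch below is hypothetical; every function name is invented for illustration and none of it is a real vSphere or HP BSA call.

from dataclasses import dataclass

@dataclass
class Request:
    name: str
    cpu: int
    memory_gb: int
    storage_gb: int
    vlan: str
    firewall_zone: str

def allocate_compute(req):
    print(f"[compute] {req.cpu} vCPU / {req.memory_gb} GB for {req.name}")
    return f"vm-{req.name}"

def allocate_storage(req):
    print(f"[storage] {req.storage_gb} GB carved from the pooled datastore")

def attach_network(req, vm_id):
    print(f"[network] {vm_id} placed on VLAN {req.vlan}")

def apply_firewall(req, vm_id):
    print(f"[security] {vm_id} added to firewall zone {req.firewall_zone}")

def provision(req):
    # Run every step as one workflow so the whole stack is ready in minutes, not days.
    vm_id = allocate_compute(req)
    allocate_storage(req)
    attach_network(req, vm_id)
    apply_firewall(req, vm_id)
    return vm_id

if __name__ == "__main__":
    provision(Request("web01", cpu=4, memory_gb=16, storage_gb=200,
                      vlan="prod-110", firewall_zone="dmz"))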

Again, if you look at the last couple of years, I would rate the industry -- ourselves and others -- as moving forward quite well on the storage side of things. There are still some things to do for sure, but storage, for the most part, has gotten a good head start on being fully virtualized and automated.

The big buzz around the industry right now has been the recognition that the network is the huge remaining barrier to doing what you want in your datacenter. Plenty of startups and all kinds of folks are working on software-defined networking. In fact, that's where the term software-defined datacenter comes from, because once networking, the big remaining inhibitor, follows, you'll be opened up to having a truly planned datacenter solution in place.

Now, we can break that down a little bit. It's important to talk about the technology piece of this. But when I say software-defined, I really look at three phases of how software comes in and morphs this existing hardware that you have.

The first step

The first step is to abstract away what people are trying to use from how it is being implemented. That's the core of what virtual even means, separating the logical from the physical. It gives you hardware independence. It enables basic mobility and all sorts of other good things.

The second phase is when you then pool all of these abstracted resources into what we call resource pools. Anyone who uses VMware software knows that we create these great clusters of computing horsepower and we allow vMotion and mobility within it.

But you need to think about that same notion of aggregation of resources at the storage and networking levels, so they become this great pool of horsepower that you can then dole out quite effectively. So after you've abstracted and pooled, the final phase is how you now automate the handling of this. This is where the real savings and speed come from.

Once you have pools of resources, when a new request comes in, you should be able to allocate storage, security, networking, and CPU very quickly. Likewise, when it goes away, you should be able to remove it and put it back into the pool.

That's a bit of a mouthful, but that's how I see the expansion. It first goes from just compute into storage, networking, security, and the other parts of the datacenter. Then simultaneously, you're abstracting each of these resources, pooling them, and then automating them.
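
A minimal sketch of that abstract-pool-automate progression, assuming a made-up ResourcePool class rather than any real VMware object, might look like this:

class ResourcePool:
    # Pooled, abstracted capacity that requests draw from and return to.
    def __init__(self, cpu, memory_gb, storage_tb):
        self.free = {"cpu": cpu, "memory_gb": memory_gb, "storage_tb": storage_tb}

    def allocate(self, **ask):
        # Automation step: grant the request immediately if the pool can cover it.
        if any(self.free[k] < v for k, v in ask.items()):
            raise RuntimeError("pool exhausted -- grow the pool or queue the request")
        for k, v in ask.items():
            self.free[k] -= v
        return dict(ask)

    def release(self, grant):
        # When the workload goes away, its share returns to the pool.
        for k, v in grant.items():
            self.free[k] += v

if __name__ == "__main__":
    pool = ResourcePool(cpu=512, memory_gb=4096, storage_tb=200)
    lease = pool.allocate(cpu=8, memory_gb=64, storage_tb=1)
    print("granted:", lease, "remaining:", pool.free)
    pool.release(lease)
    print("after release:", pool.free)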

Gardner: What's really fascinating to me are the benefits you get by abstracting to a virtualization and software-defined level -- the ability to implement with greater ease -- but that comes with underlying benefits around operations and management.

It seems to me that you can start to dial up and down, demonstrate elasticity at a far greater level, almost at the datacenter level, looking at the service-level agreements (SLAs) and the key performance indicators (KPIs) that you need to adhere to and defining your datacenter success through a business metric, like an SLA.

Does it ring true with you that we're talking about some real management and operational efficiencies, as well as implementation efficiencies?

Herrod: It is, Dana, and we talk about it a few different ways. The transformation of datacenters, as we got started, was all about cost savings and capital expense in financial terms: "Let's buy fewer servers. Let's not build another datacenter."

But the second phase, and where most customers are today, is all about operational efficiency. Not only am I buying less hardware, but I can do things where I'm actually able to satisfy, as you said, the KPIs or the SLAs.

Doing even more


I can make sure that applications are up and running with the level of availability they expect, with less effort, with fewer people, and with easier tools. And when you go from capital expense savings to operational improvements, you impact the ability for IT to do even more.

To take that one level further, whenever I hear people talk about cloud computing -- and everyone talks about this with all sorts of different impressions in mind -- I think of cloud as simply being about more speed. You can do something more quickly. You can expand something more quickly. And that's what this third phase after capital and operational savings is about, that agility to move faster.

As businesses’ success ties so closely to how IT does, the ability to move faster becomes your strategic weapon against someone else. Very core to all this is how can we operate more efficiently, while satisfying the specific needs of applications in this new datacenter.

Gardner: Another area I hear about benefiting from this software-defined datacenter is the ability to better reduce and manage risk, particularly around security. You're no longer dealing with multiple parties, like the group overseeing UNIX, the group overseeing PCs, and the group handling the x86 architectures. Process cracks and security issues seem more likely to crop up under those circumstances.

But when you have a more organized overview of management and operations, and you're architecting at a similar level, you can instantiate best practices around security. Please address security as another fruit to be harvested from a software-defined datacenter.

Herrod: Security means a lot of different things, and it has been affected by a number of different aspects.

First of all, I agree that the more you can have a homogenous platform or a homogenous team working on something, the less variation and process you end up with, exactly as you said, Dana. That can allow you to be more efficient.

This is a replacement for the traditional world of ITIL, where they had to try to create some standard across very different back ends. That's a natural progression for getting rid of some of the human errors that come into problems.

A more foundational thing that I am excited about with the software-defined datacenter is how, rather than security being these physical concepts that are deployed across the datacenter today, you can really think of security logically as wrapping up your application. You can do some pretty interesting new things.

A quick segue on that -- the way most security works in datacenters today is through statically placed appliances, whether they're firewalls, intrusion detection, or something else. Then the onus is on you to fit your application into the right part of the datacenter to get the right level of protection, and to hope it doesn't move out of that protection zone.

Follows the application

What we're able to deliver with the software-defined datacenter is a way that security is a trait associated with the application, and it essentially wraps and follows the application around. You've virtualized your firewall and you've built it into the fabric of how you're automating deployments. I see that as a way to change the game on how tight the security can be around an application, as well as making sure it's always around there when you deploy it.
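
One way to picture security as a trait that wraps the application is as policy metadata carried with the workload. The sketch below uses an invented policy format purely for illustration; it is not vShield, NSX, or any actual VMware syntax.

from dataclasses import dataclass, field

@dataclass
class SecurityPolicy:
    allow_inbound: list = field(default_factory=list)  # e.g. ["tcp/443"]
    intrusion_detection: bool = True
    data_classification: str = "internal"

@dataclass
class App:
    name: str
    policy: SecurityPolicy
    host: str = "datacenter-a"

def migrate(app, new_host):
    # Move the app; the wrapped policy is re-applied wherever it lands.
    app.host = new_host
    print(f"{app.name} now on {new_host}; enforcing {app.policy}")

if __name__ == "__main__":
    crm = App("crm", SecurityPolicy(allow_inbound=["tcp/443"], data_classification="regulated"))
    migrate(crm, "datacenter-b")  # protection follows the app, no re-racking required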

Gardner: For end users the proof is in how they actually consume, relate to, and interact with the applications. Is there something about the applications specifically that the software-defined datacenter brings, a higher level of user productivity benefits? What's really going to be noticeable for the application level to end users?

Herrod: That's a great question. I'm an infrastructure guy, as are probably many people listening here, and it’s easy to forget that infrastructure is simply a means to an end. It's the way that you run applications that ultimately matters. So you have to look at what an application is and what its ideal state looks like. The idea of the software-defined datacenter is to optimize that application experience.

That very quickly translates into how quickly can I get my application from the time I want it until it's running. It dictates how often this application is up, what kind of scale it can handle as more people come in, and how secure it is. Ultimately, it's about the application. I believe the software-defined datacenter is the way to optimize that application experience for all the users.

Gardner: Steve, how about not just repaving cow paths in terms of how we deploy existing types of applications? Is there something inherent in the software-defined datacenter that will work to our advantage for innovative new types of applications?

They could be for high-performance computing, big data and analytics, or even mobile, where location services are folded into the way applications are served up and there's a latency-sensitive element. Are there new types of apps that will benefit from this software-defined architecture?

Herrod: This is one of the most profound parts, if we get it right. I've been talking about whether we can collapse the silos that were created. Can we get all of our existing apps onto this common platform? We're doing quite well on that. We're at a point where, depending on whom you listen to, about 60 percent of all server applications are running virtual, which is pretty amazing. But that also means 40 percent aren't, so I spend a lot of time understanding why they might not be today.

Part of it is that just as businesses get more comfortable and get there, their business critical apps will get onto the system, and that's working well. But there are applications that are emerging, as you talked about, where if we're not careful, they'll create the next generation of silos that we'll be talking about 10 years from now.

I see this all the time. I'll visit a company that has a purely virtualized pool, but they have also created their grid for doing some sort of Monte Carlo simulations or high-performance computing. Or they have virtualized everything except for their unified communication environment, which has a special team and hardware allocated to it.

We spend quite a bit of time right now looking at the impediments to having those run on top of virtualization, which might be performance related or something else. Then we go beyond the impediments to how we can make them even better when they run on top of the virtualized platform.

Great applications


Some of the really interesting things we're able to show now with our partners are things I would have never dreamed of as great candidates when we started the company. But we're able to satisfy very strict real-time requirements, which means we can run some great applications used in various sorts of stock trading, but also used in things like voice over IP (VoIP) or video conferencing.

Another big area that's liable to create the next round of silos, if we're not careful, is the big data and Hadoop world. Lots of customers are kicking the tires and creating special clusters and teams to work on that. But just recently, we've shown that the performance of Hadoop on top of vSphere, our virtualization platform, can be great.

We can even show that we can make it far easier to set up. We can make Hadoop more available, meaning it won’t crash as often. And we can even do things where we make it more elastic than it already is. It can suck up as many resources in the software-defined datacenter as it wants, when it needs them, but it can also give them all back when it's not using them.
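
That elasticity can be sketched as a cluster that borrows capacity from the shared pool while a job runs and hands it back afterward. The classes below are illustrative stand-ins, not the actual vSphere or Hadoop management APIs.

class SharedPool:
    def __init__(self, spare_nodes):
        self.spare_nodes = spare_nodes

class HadoopCluster:
    def __init__(self, base_nodes):
        self.nodes = base_nodes

    def expand(self, pool, wanted):
        # Borrow as many spare nodes as the shared pool can lend.
        borrowed = min(wanted, pool.spare_nodes)
        pool.spare_nodes -= borrowed
        self.nodes += borrowed
        print(f"expanded to {self.nodes} nodes ({borrowed} borrowed)")

    def shrink(self, pool, to_nodes):
        # Return capacity to the pool once the job is done.
        returned = self.nodes - to_nodes
        pool.spare_nodes += returned
        self.nodes = to_nodes
        print(f"shrunk to {self.nodes} nodes ({returned} returned to the pool)")

if __name__ == "__main__":
    pool = SharedPool(spare_nodes=40)
    cluster = HadoopCluster(base_nodes=10)
    cluster.expand(pool, wanted=30)    # big overnight batch job
    cluster.shrink(pool, to_nodes=10)  # job finished, capacity goes back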

It's really exciting to look across all these apps. At this point, I don't see a reason why we can't get almost any type of app that we're looking at today to fit into the software-defined datacenter model.

Gardner: That's exciting, when there are no stragglers or large portions of business functions that get cast off. It seems to me that we've reached the capability of mirroring the entire datacenter, whether it's for business continuity, disaster recovery (DR), or backup and recovery. It gives us the choice of where to locate these resources, not at the individual server, virtual machine, or application level, but really to move the whole darn datacenter, if that's important, without a penalty.

For our last blue-sky direction with this conversation, are we at the point where we have fungibility, if you will, of datacenters, or are we getting to that point in the near future, where we can decide at a moment’s notice where we're going to actually put our datacenter, almost location independent?

Herrod: It’s a ways out, before we're just casually moving datacenters around, for sure. But I have seen some use cases today that are showing what's possible, and maybe I'll just give you a couple of examples.

DR has long been one of the real pains for IT to deal with. They have to replicate things across the country and keep two datacenters completely in sync, with literally the same hardware, the same firmware layer, and everything that goes into it.

Very rapidly, this notion of DR has been a driving reason for people to virtualize their datacenter. We've seen many cases now where you're able to fail over your entire datacenter, effectively copying the whole datacenter over to another one, keeping the logical constructs in place, but hosting it in a completely different area.

To get that right, your storage needs to be moved, your network identities need to be updated, and those are things that you can script and do in an automated way, once you've virtualized the whole datacenter.
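
A scripted failover of that kind might be organized as an ordered runbook. The sketch below only logs the sequence; in practice each step would call an orchestration API and verify success before continuing, and none of the step names come from a specific product.

FAILOVER_PLAN = [
    "pause replication and promote the recovery-site storage copies",
    "re-register the virtual machines at the recovery site",
    "remap virtual networks and update IP/DNS identities",
    "power on VMs in dependency order (databases before app tiers)",
    "run application health checks before declaring the site live",
]

def fail_over(plan=FAILOVER_PLAN):
    # Walk the runbook in order; a real implementation would stop on any failed step.
    for step_number, step in enumerate(plan, start=1):
        print(f"step {step_number}: {step}")

if __name__ == "__main__":
    fail_over()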

Fun example


Another really fun example I see more and more now is with mergers and acquisitions. We've seen several cases where one company buys another, both had fully virtualized their datacenters, and they could put the datacenter of one company on a giant storage drive and begin to bring it up on the other side, once they had copied it over.

So the entire datacenter isn't moved yet, but I think there are clear indications of once you separate out where something runs and how it runs from what you are really after, it opens up the door for a lot of different optimizations.

Gardner: We're coming up on the end of our time, but we also have the big annual VMworld show in San Francisco coming up toward the end of August. I know you can't pre-announce anything, but perhaps you can give us some themes. We've talked about a lot of things here today, but are there any particular themes we've hit on that you think will be more impactful or more important in terms of what we should expect at VMworld?

Herrod: It will be exciting as always. We have more than 20,000 people expected. What I'm doing here is talking about a vision and generalities of what's happening, but you can certainly imagine that what we will be showing there will be the realities -- the products that prove this, the partnerships that are in place that can help bring it forward, and even some use cases and some success stories.

So expect us to give more detail around this vision and to make it very real with announcements and demonstrations.

Gardner: Last question. If I'm a listener here today, I'm intrigued, and I want to start thinking about the datacenter at the software-defined level to realize some of the benefits we've been discussing and some of the vision we've been painting, what's a good way to start? How do you begin this process? What are a few foundational directives or directions that you recommend?

Herrod: I think it can sound very, very disruptive to create a new software-defined datacenter, but one of the things that has excited me most about this technology, versus others, is that there's a set of steps you go through where you get some value along the way, and those steps also march you toward where you ultimately end up.

So to customers who are doing this, presumably most of you have done some basic virtualization, but really you need to get to the point where you are leveraging the full automation and mobility that exists today.

Once you start doing that, you'll find that it obviously is showing you where things can head. But it also changes some of the processes you use at the company, some of the organizational structures that you have there, and you can start to pave the way for the overall datacenter to be virtualized, as you take some of these initial steps.

It's actually very easy to get started. You can gain benefits along the way, and your existing applications and hardware still work. So that would be my real entreaty: use what exists today and get your feet wet, as we deliver the next round going forward.

Gardner: We've been talking about the intriguing concept of the software-defined datacenter, and we've been exploring how advances in datacenter technologies and architecture, driven through software innovation, can provide a number of technological and business benefits.

Please join me now in thanking our guest, Steve Herrod, Chief Technology Officer and Senior Vice President of Research & Development at VMware. Thanks so much, Steve.

Herrod: Great. I've enjoyed the time, Dana. Thanks.

Gardner: My pleasure. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to our audience for reading and listening to our discussion, and don't forget to come back next time for the next edition of BriefingsDirect.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how pervasive software enablement helps battle IT datacenter complexity.
Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in: