Friday, October 05, 2012

Internet of Mobile and Cloud Era Demands New Kind of Diverse and Dynamic Performance Response, Says Akamai GM

Transcript of a BriefingsDirect podcast on the inadequacy of the old one-size-fits-all approach to delivering web content on different devices and different networks.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Akamai Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the new realities of delivering applications and content in the cloud and mobile era. We'll examine how the many variables of modern Internet usage demand a more situational capability among and between enterprises, clouds, and the many popular end devices.

That is, major trends have conspired to render a one-size-fits-all approach inadequate for today's complex network-optimization and application-performance demands. Rather, more web experiences now need a real-time, dynamic response tailored and refined to the actual use and specifics of that user's task.

We're here with an executive from Akamai Technologies to spotlight the trends leading to this new dynamic cloud-to-mobile network reality, and to evaluate ways to keep all web experiences valuable, appropriate, and performant.

With that, please join me now in welcoming our guest, Mike Afergan, Senior Vice President and General Manager of the Web Experience Business Unit at Akamai Technologies in Cambridge, Massachusetts. Welcome back, Mike. [Disclosure: Akamai Technologies is a sponsor of BriefingsDirect podcasts.]

Michael Afergan: Hi, thanks, Dana.

Gardner: Mike, there are trends that seem to be spurring a different web -- a need for a different type of response, given the way that people are using the web now. Let's start at the top. What are the trends, and what do you mean by a "situational response" to ameliorating this new level of complexity?

Afergan: There are a number of trends, and I'll highlight a few. There’s clearly been a significant change, and you and I see it in our daily lives in how we, as consumers and employees, interact with this thing that we call the web.

Only a few years ago, most of us interacted with the web by sitting in front of a PC, typing on a keyboard and using a mouse. Today, a large chunk, if not a majority, of our interaction with the web is through handheld devices or tablets, over Wi-Fi and cellular connections. More and more, it's through different modes of interaction.

For example, Siri is a leader in having us speak to the web and ask questions of the web verbally, as opposed to using a keyboard or some sort of touch-screen device. So there are some pretty significant trends in terms of how we interact as consumers or employees, particularly with devices and cellular connectivity.

Behind the scenes, there are a lot of other significant changes. The way that websites are developed has changed significantly. They're using technologies such as JavaScript and CSS much more heavily than ever before.

Third-party content

We're also seeing websites pull in a variety of content from third parties. Even though you're going to a website, and it looks like it’s a website of a given retailer, more often than not a large chunk of what you are seeing on that page is actually coming from their business partners or other people that they are working with, which gets integrated and displayed to you.

We're seeing cellular end-devices as a big trend on the experience side. We're seeing a number of things happen behind the scenes. What that means is that the web, as we thought about it even a few years ago, is a fundamentally different place today. Each of these interactions with the web is a different experience and these interactions are very different.

A user in Tokyo on a tablet, over a cellular connection, interacting with a website is in a very different situation than I am at my desk in Cambridge, in front of my PC with fixed connectivity. And both are very different from you or me this evening, driving home with an iPhone or another handheld device, maybe talking to it via Siri.

Each of these is a very different experience, and each is what I call a different situation. If we want to think about technology around performance and technology involving the Internet, we have to think about these different situations and which technologies are going to be the most appropriate and most beneficial for each of them.

Gardner: So we have more complexity on the delivery side, perhaps an ecosystem of different services coming together, and we also have more devices, and then of course different networks. As people think about the cloud, I think the missing word in "the cloud" is "network." There are many networks involved here.

Maybe you could help us understand with these trends that delivery is a function of many different services, but also many different networks. How does that come together?

Afergan: There are some trends in which the more things change, the more they stay the same. The way the Internet works fundamentally hasn’t changed. The Internet is still, to use the terminology from over a decade ago, a network of networks. The way that data travels across the Internet behind the scenes is by moving through different networks. Each of those has different operating principles in terms of how they run, and there are always challenges moving from one network to another.

This is why, from the beginning, Akamai has always had a strategy of deploying our services and our servers as close to the users as possible. This is so that, when you and I make a request to a website, it doesn't have to traverse multiple networks, but rather is served from an Akamai location as close as possible to you.

And even when you have to go all the way across the Internet, for example, to buy something and submit a credit card, we're finding an intelligent path across the network. That's always been true at the physical network layer, but as you point out, this notion of networks is being expanded for content providers, websites, and retailers. Think about the set of companies that they work with and the other third parties that they work with almost as a network, as an ecosystem, that really comes together to develop and ultimately create the content that you and I see.
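The principle of serving from the closest location can be illustrated with a minimal sketch. Everything here is hypothetical -- the edge locations, coordinates, and pure-distance model are illustrations only; a real mapping system like Akamai's weighs latency, server load, and network topology, not just geography.

```python
# A minimal sketch of nearest-edge selection: route each user to the
# edge location closest to them, so requests need not traverse multiple
# networks. Locations and the distance-only model are hypothetical.
import math

EDGE_LOCATIONS = {
    "tokyo": (35.68, 139.69),
    "cambridge": (42.37, -71.11),
    "hong_kong": (22.32, 114.17),
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_edge(user_pos):
    """Return the name of the edge location closest to the user."""
    return min(EDGE_LOCATIONS,
               key=lambda name: haversine_km(user_pos, EDGE_LOCATIONS[name]))

# A user in New York maps to the nearest of the three sample locations.
print(nearest_edge((40.71, -74.01)))
```

In practice the "distance" a real platform minimizes is measured network latency rather than kilometers, but the selection structure is the same.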

This notion of having these third party application programming interfaces (APIs) in the cloud is a very powerful trend for enterprises that are building websites, but it also obviously creates a number of challenges, both technical and operational, in making sure that you have a reliable, scalable, high-performing web experience for your users.

Big data

Gardner: I suppose another big trend nowadays -- we've mentioned mobile and cloud -- is this notion of analytics, big data, trying to be more intelligent, a word you used a moment ago. Is there something about the way that the web has evolved that's going to allow for more gathering of information about what's actually taking place on the networks and these end-devices, and then therefore be able to better serve up or produce value as time goes on?

Is the intelligence something that we can measure? Is there a data aspect to this that comes into that situational benefit path?

Afergan: One of the big challenges in this world of different web experiences and situations is a greater demand for that type of information. Before, typically, a user was on a PC, using one of a few different types of browsers.

Now, with all these different situations, the need for that intelligence, the need to understand the situation that your user is in -- and potentially the changing situation that your user is in as they move from one location to another or one device to another -- is even more important than it was a few years ago.

That's going to be an important trend -- understanding the situations. Being able to adapt to them dynamically and efficiently is going to be an important trend for the industry in the next few years.

Gardner: What does this mean for enterprises? If I'm a company and I recognize that my employees are going to want more variety and more choice on their devices, I have to deliver apps out to those devices. I also have to recognize that they don't stop working at 5 pm. Therefore, our opportunity for delivering applications and data isn't time-based. It's more of a situational-based demand as well.

I don’t think enterprises want to start building out these network capabilities as well as data and intelligence gathering. So what does it mean for enterprises, as they move toward this different era of the web, and how should they think about responding?

Afergan: You nailed it with that question. Obviously one of the big trends in the enterprise industry right now is bring your own device (BYOD). You and I and lots of people listening to this probably see it on a daily basis as we work.

In front of me right now are two different devices that I own and brought into the office today. Lots of my colleagues do the same. We see that as a big trend across our customer base.

More and more employees are bringing their increasingly powerful devices into the office. More and more employees want to be able to access their content in the office via those devices and at home or on the go, on a business trip, over those exact same devices, the way we've become accustomed to for our personal information and our personal experiences online.

Key trends

So the exact same trends that are relevant for consumer-facing websites -- multiple devices, cellular connectivity -- are key trends being driven from the outside in, from the employees into the enterprise. It's a challenge for enterprises to keep up. It's a challenge for enterprises to adapt to those technologies, just like it is for consumer websites.

But for the enterprise, you need to make sure that you are mindful of security, authentication, and a variety of other principles, which are obviously important once you are dealing with enterprise data.

There's tremendous opportunity. It is a great trend for enterprises, in terms of empowering their employees, empowering their partners, decreasing the total cost of ownership for the devices, and giving their users access to the information. But it obviously presents some very significant challenges. Number one is keeping up with those trends; number two is doing it in a way that's both authenticated and secure at the same time.

Gardner: Based on a lot of the analyst reports that we're seeing, the adoption of cloud services and software-as-a-service (SaaS) services by enterprises is expected to grow quite rapidly in the coming years. If I'm an enterprise, whether I'm serving up data and applications to my employees, my business partners, and/or end consumers, it doesn’t seem to make sense to get cloud services, bring them into the enterprise, and then send them back out through a network to those people. It sounds like this is moving from a data center that I control type of a service into something that’s in the cloud itself as well.

So are we reading that correctly -- that even your bread and butter, Global 2000 enterprise has to start thinking about network services in this context of a situational web?

Afergan: Exactly. The good news is that most thoughtful enterprises are already doing that. It doesn’t make it easier overnight, but they're already having those conversations. You're exactly right. Once you recognize the fact that your employees, your partners are going to want to interact with these applications on their devices, wherever they may be, you pretty quickly realize that you can’t build out a dedicated network, a dedicated infrastructure, that’s going to service them in all the locations that they are going to need to be.

All of a sudden, you're now talking about putting those applications into the cloud, so that those users can access them on any device, anywhere, anytime. At that point in time, you're now building to a cloud architecture, which obviously brings a lot of promise and a lot of opportunity, but then some challenges associated with it.

Gardner: I'll just add one more point on the enterprise, because I track enterprise IT issues more specifically than the general web. IT service management, service level agreements (SLAs), governance policy and management via rules that can be repeatable are all very important to IT as well.

Is there something about a situational network optimization and web delivery that comes to play when it relates to governance policy and management vis-à-vis rules; I guess what you'd call service-delivery architecture?

Situational needs

Afergan: That’s a great question, and I've had that conversation with several enterprises. To some degree, every enterprise is different and every application is somewhat different, which even makes the situational point you are making all the more true.

For some enterprises, the requirements they have around those applications are ubiquitous, and those need to hold true independent of the situation. In other cases, you have requirements around certain applications that may differ depending on whether the employee is on premises, within your VPN, in your country, or out of the country. All of a sudden, those situations become all the more complicated.

As each of these enterprises that we have been working with think through the challenges that you just listed, it's very much a situational conversation. How do you build one architecture that allows you to adapt to those different situations?

Gardner: I think we have described the problem fairly well. It's understood. What do we start thinking about when it comes to solving this problem? How can we get a handle on these different types of traffic with complexity and variability on the delivery end, on the network end, and then on the receiving end, and somehow make it rational and something that could be a benefit to our business?

Afergan: It's obviously the challenge that we at Akamai spend a lot of time thinking about and working with our customers on. Obviously, there's no one, simple answer to all of that, but I'll offer a couple of different pieces.

We believe it requires starting with a good overall, fundamentally sound architecture. That's an architecture that is globally distributed and gives you a platform where you don't have to -- to answer some of your earlier questions -- worry about some of the different networks along the way, and worry about some of the core, fundamental Internet challenges that really haven't changed since the mid-'90s in terms of reliability and performance of the core Internet.

But then it should allow you to build on top of that for some of the cloud-based and situational-based challenges that you have today. That requires a variety of technologies that will, number one, address, and number two, adapt to situations that you're talking about.

Let's go through a couple of the examples that we've already spoken about. If you're an enterprise worrying about your user on a cellular connection in Hong Kong, versus you're the same enterprise worrying about the same application for a user on a desktop fixed-connection based in New York City, the performance challenges and the performance optimizations that you want to make are going to be fundamentally different.

There is a core set of things that you need to have in place in all those cases. You need to have an intelligent platform that's going to understand the situation and make an appropriate decision based on that situation. This will include a variety of technical variables, as well as just a general understanding of what the end user is trying to do.

Gardner: It seems like it wasn't that long ago, Mike, that people said, "I just want to make things 50 percent faster. I want to make my website speedier." But that's almost an obsolete question. It's more, "How do I make a specific circumstance perform in a specific way for a specific user and that might change in five minutes?"

So how do we rethink moving from fatter pipes and faster websites to these new requirements? Is this a cultural shift? Is it moving from a two-dimensional to a three-dimensional picture? How do we create a metaphor or analogy to better understand the difference and the type of problem we need to solve?

Complicated problem

Afergan: Again, it's a complicated problem. Start again with the good news that the reason we're having this problem is that there are these powerful situations and powerful opportunities for enterprises, but the smart enterprises we're working with are asking a couple of different questions.

First, there is a myriad of situations, but typically you can think about some of them that are the most important to you to start off with.

The second thing that enterprises are doing thoughtfully is rethinking how you even do performance measurement. You just gave a great example. Before, you could talk about how do I make this experience 50 percent faster, and that was a fine conversation.

Now, smart enterprises are saying, "Tell me about the performance of my users in Hong Kong over cellular connections. Tell me about the performance of my users in New York City over fixed connections." Then it's understanding the different dimensions and different variables that are important for you and then measuring performance based on those variables.

I work with several thoughtful enterprises that are going through that transformation of moving from a one-size-fits-all performance measurement metric to being a lot more thoughtful about what metrics they care about. Exactly as we've talked about, and exactly as you mentioned, that one-size-fits-all metric is becoming less relevant by the day.
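The move away from a one-size-fits-all metric can be sketched simply: instead of one global average, group measurements by the dimensions that matter. The sample data, field names, and segments below are illustrative only, not a real analytics API.

```python
# A sketch of situational performance measurement: average page-load
# time per (city, network) segment rather than one global number.
# The measurement records here are hypothetical sample data.
from collections import defaultdict

measurements = [
    {"city": "Hong Kong", "network": "cellular", "load_ms": 3200},
    {"city": "Hong Kong", "network": "cellular", "load_ms": 2800},
    {"city": "New York", "network": "fixed", "load_ms": 900},
    {"city": "New York", "network": "fixed", "load_ms": 1100},
]

def segmented_averages(samples):
    """Average load time (ms) per (city, network) segment."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[(s["city"], s["network"])].append(s["load_ms"])
    return {segment: sum(v) / len(v) for segment, v in buckets.items()}

for segment, avg_ms in segmented_averages(measurements).items():
    print(segment, avg_ms)
```

A single global average over this sample would hide that cellular users in one city see roughly triple the load time of fixed-connection users in another -- exactly the distinction a situational metric preserves.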

Gardner: And as we have more moving parts, we perhaps could think about it as a need for a Swiss Army Knife of some sort, where multiple tools can be brought out quickly and applied to what's needed. But that needs to be something that's coordinated, not just by the enterprise, the Internet service provider (ISP), the networks, or the cloud providers -- but all of them. Getting them to line up, or having one throat to choke, if you will, has always been a challenge.

Is there something now, or is there something about Akamai in particular, that gives you neutrality? We mentioned the Swiss Army Knife. Is there some ability for you to get in among all of these players in a positive, value-developing relationship -- which is perhaps what we're starting to get at when we think about the situational benefit?

Afergan: It's obviously something we spend a lot of time thinking about here. In general, not just speaking about Akamai for the moment, to be successful here, you need to have a few things.

You need to have an underlying architecture that allows you to operate across a variety of the parties you mentioned.

For example, we talked about a variety of networks, a variety of ISPs. You need to have one architecture that allows you to operate across all of them. You can't go and build different architecture and different solution ISP by ISP, network by network, or country by country. There's no way you're going to build a scalable solution there. So first and foremost, you need that overall ubiquitous architecture.

Significant intelligence

The second thing you need is significant intelligence to be able to make those decisions on the fly -- to determine what the situation is, and what would be the most beneficial solution and technology to apply to that situation.

The third thing you need is the right set of APIs and tools that ultimately allows the enterprise, the customer, to control what's happening, because across these situations sometimes there is no absolute right answer. In some cases, you might want to suddenly degrade the fidelity of the experience to have it be a faster experience for the user.

Across all of these, having the underlying overall architecture that gives you the ubiquity, having the intelligence that allows you to make decisions in real-time, and having the right APIs and tools are things that ultimately we at Akamai spend a lot of time worrying about.

We sit in a unique position to offer this to our customers, working closely with them and their partners. And all of these things, which have been important to us for over a decade now, are even more important as we sail into this more complicated situationally driven world.
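The real-time decision-making described above can be sketched as a simple dispatch over situation attributes. The situation fields and optimization names here are hypothetical illustrations of the idea, not an actual Akamai API; a production system would draw on far richer signals.

```python
# A sketch of situational decision logic: inspect a request's situation
# and choose optimizations accordingly. All field and optimization
# names are hypothetical.
def choose_optimizations(situation):
    """Return optimizations suited to this request's situation."""
    opts = ["edge_caching"]  # beneficial in essentially every situation
    if situation.get("network") == "cellular":
        # High-latency, lossy links: shrink payloads aggressively,
        # possibly trading some fidelity for speed.
        opts += ["image_compression", "reduced_fidelity"]
    if situation.get("device") == "mobile":
        opts.append("mobile_layout")
    if situation.get("distance_km", 0) > 5000:
        # Long paths across many networks: optimize the route taken.
        opts.append("route_optimization")
    return opts

# A tablet user on cellular in Hong Kong, far from origin:
print(choose_optimizations(
    {"network": "cellular", "device": "mobile", "distance_km": 9000}))
```

The point of the sketch is the shape of the system: a common baseline applied everywhere, plus per-situation decisions made at request time rather than one fixed configuration.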

Gardner: We're almost out of time, but I wonder about on-ramps or adoption paths for organizations like enterprises to move toward this greater ability to manage the complexity that we're now facing. Perhaps it’s the drive to mobility, perhaps it’s the consumption of more cloud services, perhaps it’s the security- and governance and risk and compliance-types issues like that, or all of the above. Any sense of how people would find the best path to get started and any recommendations on how to get started?

Afergan: Ultimately, each company has a set of challenges and opportunities that they're working through at any point in time. For us, it begins with getting on the right platform and thinking about the key challenges that are driving your business.

Mobility clearly is a key trend that is driving a lot of our customers to understand and appreciate the challenges of situational performance and then adapt to it in the right way. How do I understand what the right devices are? How do I make sure that when a user moves to a lower-performing network, I still give them a high-quality experience?

For some of our customers, it's about general performance across a variety of devices, and how to take advantage of a much more sophisticated experience, where I am not just sending HTML, but am sending JavaScript and things I can execute in the browser.

For some of our customers it's, "Wait a minute. Now, I have all these different experiences. Each one of these is a great opportunity for my business. Each one of these is a great opportunity for me to drive revenue. But each one of these is now a security vulnerability for my business, and I have to make sure that I secure it."

Each enterprise is addressing these in a slightly different way, but I think the key point is understanding that the web really has moved from basic websites to these much more sophisticated web experiences.

Varied experiences

The web experiences are varied across different situations, and overall web performance is a key on-ramp. Mobility is another key on-ramp, and security would be a third starting point. Some of our customers are taking a very complicated problem and looking at it through a much more manageable lens, so they can start moving in the right direction.

Gardner: I am afraid we will have to leave it there. We've been discussing how most cloud experiences now need a more real-time and dynamic response, perhaps tailored and refined to the actual use and specifics of a user’s task at hand.

And we've heard about how a more situational capability that takes into account many variables at an enterprise, cloud, and network level, and then of course across these end devices that are now much more diverse and distributed, all come together for a new kind of value.

I'd like to thank our guest. We've been here with Mike Afergan, the Senior Vice President and General Manager of the Web Experience Business Unit at Akamai Technologies.

Thank you so much, Mike.

Afergan: Thanks, Dana. I really appreciated the time.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. A big thank you also to our audience for listening, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Akamai Technologies.


Transcript of a BriefingsDirect podcast on the inadequacy of the old one-size-fits-all approach to delivering web content on different devices and different networks. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Thursday, October 04, 2012

Security Officer Sees Rapid Detection and Containment as New Best IT Security Postures for Enterprises


Transcript of a BriefingsDirect podcast on how companies can protect themselves, given that security breaches are an inevitable fact of life.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how IT leaders are improving performance of their services to deliver better experiences and payoffs for businesses and end-users alike.

Our discussion today unpacks the concept of intelligent containment of risk as an important approach to overall IT security. We'll examine why rapid and proactive containment of problems and breaches, in addition to just trying to keep the bad guys out of your systems, makes sense in today's environment.

Here to share his perceptions on some new ways to better manage security from the vantage of containment is our guest, Kaivan Rahbari, Senior Vice President of Risk Management at FIS Global, based in Jacksonville, Florida. Welcome Kaivan. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Kaivan Rahbari: Thank you, Dana, it's a pleasure to meet you today.

Gardner: Let's start off with trying to understand what's different about the overall security landscape today. How would you characterize it being different from five years ago or so?

Rahbari: A lot has changed in the past five years. Two key economic trends have really accelerated our security changes. First, the US recession pushed companies to consolidate and integrate technology footprints and leverage systems. New deployment models, such as software as a service (SaaS) and cloud, help address the lack of capital that we've been experiencing and allow companies to shift costs from fixed to variable.


The second major economic trend that has continued in the past five years is globalization for some of the companies. That means a network topology that's traversing multiple countries and with different laws that we have to deal with.

We always talk about how we're only as strong as our weakest link. When larger and more sophisticated companies acquire smaller ones, which is pretty commonplace now in the market, and they try to quickly integrate to cut cost and improve service, they're usually introducing weaker links in the security chain.

Strong acquirers now require an acquisition to go through an assessment, such as an ISO 27001 certification, before it's allowed to join that "trusted network." So there have been a lot of significant changes in the past five years.

Gardner: Tell us a bit about FIS Global, before we go into concepts around containment, so we've a sense of the scope and size of your organization, and a little bit about your role there.

Largest wholesaler

Rahbari: FIS is a Fortune 500 company, a global company with customers in over 100 countries and 33,000 employees. FIS has had a history in the past 10 years of acquiring 3-5 companies a year. So, it has experienced very rapid growth and expansion globally. Security is one of the key focuses in the company, because we're the world's largest wholesaler of IT solutions to banks.

Transaction and core processing is an expertise of ours, and our financial institutions obviously expect their data to be safe and secure within our environments. I'm a Senior Vice President in the Risk Management Group. My current role is oversight of the security and risk functions being deployed across North America.

Gardner: You've certainly painted a picture of how some of the requirements and pressure on organizations have changed, but what about the nature of security threats nowadays?

Rahbari: Attackers are definitely getting smarter and finding new ways to circumvent any security measure. Five years ago, a vast majority of these threats were just hackers and primarily focused on creating a nuisance, or there were criminals with limited technology skills and resources.

Cyber attacks now are a big business, at times involving organized crime. These are intruders with PhDs. There could be espionage involved, and attacks may originate in countries with no extradition agreements with the US, making it very difficult for us to prosecute people even after we identify them.

You've also read some of the headlines in the past six months -- Sony estimating data breach and cleanup costs at $171 million, or the RSA hack costing EMC $66 million. So this is truly a big business, with significantly impacted companies.

Another key trend during the past five years is that the nature of the threats is changing from a very broad, scattered approach to highly focused and targeted attacks. You're now hearing terms such as designer malware or stealth bots -- things that just didn't exist five or 10 years ago.

Another key trend is that mobility and mobile computing have really taken off, and we now have to protect people and equipment that could be in very hostile environments. When they're open, there's no security.

The third key area is cloud computing, when the data is no longer on your premises and you need to now rely on combined security of your company, as well as vendors and partners.

The last major thing that's impacting us is regulatory environment and compliance. Today, a common part of any security expert terminology are words such as payment card industry (PCI), Gramm-Leach-Bliley Act (GLBA), and Sarbanes-Oxley (SOX), which were not part of our common vocabulary many years ago.

Gardner: So how do we play better defense? Most of the security from five years ago was all about building a better wall around your organization, preventing any entry. You seem to have a concept that accepts the fact that breaches are inevitable, but that focuses on containment of issues, when a breach occurs. Perhaps you could paint a picture here about this concept of containment.

Blocking strategies

Rahbari: As you said, it's easier to secure the perimeter -- just don't let anything in or out. Of course, that's really not realistic. For a vast majority of the companies, we need to be able to allow legitimate traffic to move in and out of our environments and try to determine what should be blocked.

I'll say that companies with reasonable security still focus on a solid perimeter defense. But companies with great security not only guard their perimeter well, they assume it can be breached at any time, as you stated.

Some examples of reasonable security would include intrusion protection, proxies to monitor traffic, and firewalls on the perimeter. You would then do penetration testing. On their PCs you see antivirus, encryption, and tools for asset and patch management. You also see antivirus and patch management on the servers and the databases. These are pretty common tools, defensive tools.

But companies that are evolving and are more advanced in that area have deployed solutions such as comprehensive logging for DNS, DHCP, VPN, and Windows security events. They have very complex security and password requirements.

As you know, password-cracking software is pretty common on the Internet nowadays. They also make sure that their systems are fully patched all the time. Proactively, as you know, Microsoft publishes patches every month. So it's no longer sufficient to upgrade a system or patch it once every few years. It's a monthly, sometimes daily, event.

Gardner: How do you go about containing? I guess you also have to detect. So they go hand-in-hand, being able to know when something is going wrong. Is there a way of architecting to contain or is this something that you would do on a proactive basis, intelligently, when you've detected something amiss?

Rahbari: First, as the costs of attacks have skyrocketed, we're now seeing some pretty great solutions in the market that actively try to prevent things from happening, or mitigate them when they happen.

A few of the examples that come to mind are on the perimeter. We're seeing a lot of denial-of-service (DoS) attacks in recent years. Basically, that means a massive flood of traffic directed toward a specific IP address, and it happens to financial institutions a lot.

With some of these great solutions on the market, you would swing all of your traffic to another IP address, without bringing down the environment. The attackers still think they're attacking and shutting down an environment, but they really aren't.

Five years ago, the primary objective of a DoS attack like this was just to shut something down for malicious purposes. Now, it's a pretty common vehicle for fraud.
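The detection half of that traffic-swing response can be sketched as a simple per-destination rate threshold. This is a minimal illustration only, not any vendor's actual mitigation logic; the window and threshold values are invented for the example.

```python
from collections import deque

# Hypothetical tuning: flag a destination IP that receives more than
# 10,000 packets within a 60-second sliding window.
WINDOW_SECONDS = 60
THRESHOLD = 10_000

class DosDetector:
    def __init__(self):
        self.events = {}  # dest_ip -> deque of packet timestamps

    def record_packet(self, dest_ip, timestamp):
        q = self.events.setdefault(dest_ip, deque())
        q.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while q and q[0] <= timestamp - WINDOW_SECONDS:
            q.popleft()
        # True would mean "swing this IP's traffic to a standby address".
        return len(q) > THRESHOLD
```

In a real deployment this decision would feed a routing or DNS change rather than just return a flag, but the thresholding idea is the same.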

Long gone

Here's a scenario. A small business, Joe's Landscaping, has its internet banking compromised and someone steals the password. The hackers then authenticate and wire money out of the account to some bank in the Cayman Islands. The attacker then mounts a DoS attack against the service provider that Joe's Landscaping uses, so that the fraudulent activity isn't discovered, or is delayed for a few days. By the time it's discovered, the money is long gone.

Another example of proactive and great security is to have software white-listing on PCs and servers, so that only legitimate software is actually installed. A key method of obtaining credentials nowadays is to install keystroke logger software on a machine. That can easily be blocked by white-listing software.
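Hash-based allow-listing of the kind described can be sketched as follows. The allow-list contents and file paths are hypothetical, and commercial white-listing products do far more than this (signing, policy, enforcement hooks); this only shows the core check.

```python
import hashlib

# Hypothetical allow-list: SHA-256 digests of approved binaries.
ALLOWED_HASHES = set()

def sha256_of(path):
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Allow execution only if the binary's hash is on the allow-list."""
    return sha256_of(path) in ALLOWED_HASHES
```

An unapproved keystroke logger dropped onto the machine would simply never match the list, so it could not run.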

A third example of a strong security posture is not just to detect, but to actively destroy things. Traditionally, when we were monitoring wireless access into companies, we would just report that there was wireless access that shouldn't have been granted.

Most companies would assign a password to it, but as you know, passwords get shared, so soon enough everyone knows the password to the wireless system in the company. One of the things we've started using are solutions that actively jam wireless signals unless the access point is authorized or the device is known.

Another great example of a proactive approach we see in the market is when a visitor or employee plugs a non-corporate device into the network, either on premises or from home. That creates a significant amount of risk. There are some great solutions out there that provide network access control. If an unknown device plugs into the network, it's immediately rejected at the network level. You can't even authenticate.

Probably the less obvious offensive posture that people don't think about is around discovery and disclosure. Some of the statistics I've read indicate that more than 90 percent of compromises are actually reported externally, rather than being discovered by the company itself.

It's a PR and regulatory nightmare, when someone comes to us and says, "You've been attacked or breached," versus us discovering something and reporting it. Some of the examples I gave were technology, but some of it is just planning and making sure that we’re proactive and report and disclose, rather than seeing it in the headlines.

Gardner: Are there any particular types of technology that help contain the access that an intruder or some other breach would provide? I'm thinking about maybe some level of virtualization, where we’re walling off assets such as applications from other infrastructure or data. Is there either a technology or architecture approach that you’re aware of that can help when it comes to this issue of limiting the damage?

Rahbari: Oh, sure. Some of it is just process, and some of it is technology. When most companies discover a breach, they take people who already have a full-time job function related to security and put them on the team to investigate. But these people don't have the investigative skills and knowledge to deal with incident management, which is truly a specialized science now.

Dedicated team

Companies that have experienced breaches and now know how important this is have implemented best practices by having a dedicated team to plan and handle breaches. It's like assigning a SWAT team to a hostage situation versus a police patrol officer.

A SWAT team is trained to handle hostage situations. They're equipped for that. They have a machine gun and sniper rifle instead of a handgun, and they don't have another full-time job to worry about while they’re trying to deal with the crisis. So, from a process and team perspective, that's the first place I would start.

This is even more critical for targeted attacks, which are pretty common nowadays. You have to have the necessary infrastructure to prepare for an event or an incident ahead of time. During the attack, we usually don't disrupt the attacker, or alert them that we’re about to go remedy something. After the remediation is prepared, then we implement that and, in that process, learn about what the attacker is doing.

Some of the steps I recommend, once you know a breach has occurred, are to first pull the entire network off the internet until remediation is complete, blocking the known attacker domains and IP addresses. Other simple steps include changing compromised passwords.

We're seeing Active Directory being compromised quite a bit nowadays. Unfortunately, that means changing all passwords within the enterprise, including the service accounts. Lastly, very quickly remove any compromised systems from the network and either rebuild or replace them. These are some basic blocking-and-tackling steps that help address incidents very, very quickly.
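The block-the-known-attacker step amounts to checking connections against indicators of compromise gathered during the investigation. A minimal sketch follows; the addresses use documentation ranges (RFC 5737) and a reserved `.example` name, not real attacker infrastructure, and a real response would push these rules into firewalls and DNS resolvers.

```python
# Hypothetical indicators of compromise from the investigation.
BLOCKED_IPS = {"203.0.113.7", "203.0.113.99"}
BLOCKED_DOMAINS = {"evil-c2.example"}

def should_block(dest_ip, dest_domain=None):
    """Return True if an outbound connection matches a known attacker indicator."""
    if dest_ip in BLOCKED_IPS:
        return True
    if dest_domain and (
        dest_domain in BLOCKED_DOMAINS
        # Also match subdomains of a blocked domain.
        or any(dest_domain.endswith("." + d) for d in BLOCKED_DOMAINS)
    ):
        return True
    return False
```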

Gardner: Are there any trends now, looking to the future, that will perhaps make some of these proactive and containment types of activities even more important? I'm thinking about bring your own device (BYOD), the consumerization of IT, and cloud computing, which you mentioned a bit earlier.

Rahbari: Before we talk about those trends, I think it's important to talk about segregation and separation. The preparation ahead of time is more important than just reacting to those future trends, because the architecture is the fundamental way in which we can secure our environments.

Some of the steps that any good company should take are to first make sure that their networks are separated and isolated. You need a long-term network architecture and strategic plan. You also need to establish security zones to separate high-risk domains, and make sure you have standards to govern the level of trust between sites and your networks, based on your business requirements.

As far as domain segregation, make sure you do things such as separating the Active Directory domains with credentials for your production environments from your quality assurance, development, and other employee-access environments.

From a trends perspective, there are a number of things that have really helped. Virtualization is one of them. It's a key technique for creating segregation between applications.

Incident response

It used to take 10 different pieces of hardware to segregate 10 applications on 10 different machines. Now, you can run all 10 in their own virtual machines (VMs) on a single piece of hardware. That not only helps with segregation, but also with incident response. If one application is being attacked or compromised, I can bring down that virtual machine without impacting the others in the production environment.
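That isolation property can be shown with a toy model. This is not a real hypervisor API, just an illustration of why quarantining one VM leaves the other tenants on the same host untouched; the VM names and states are invented.

```python
# Hypothetical inventory: ten applications, each in its own VM on one host.
vms = {f"app-{i:02d}": "running" for i in range(1, 11)}

def quarantine(vm_name):
    """Power off a single compromised VM; the other tenants keep running."""
    if vm_name not in vms:
        raise KeyError(f"unknown VM: {vm_name}")
    vms[vm_name] = "powered-off"

# One compromised application is isolated without a host-wide outage.
quarantine("app-03")
```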

And running a mission-critical transaction-processing system, like gift cards, active-active across two data centers not only helps with business continuity and disaster recovery, but also helps you continue to function in the event of an attack by simply severing the connection. Now you have one data center that's completely isolated, while you deal with the other data center that's being compromised.

The other technology view and trends that you mentioned are things like BYOD. Clearly, that's created significant challenges for us. It's one of the most controversial issues with companies, and we see everything from companies forbidding them, all the way to completely accepting them.

Really, the role of IT and security is to build infrastructure to support something like an iPhone that carries your corporate applications. There are plenty of solutions in the market that allow you to separate personal from corporate data. For example, a virtual desktop infrastructure (VDI) environment can be set up on a personal device so that you can segregate your personal from your corporate data.

The other trend you mentioned is cloud computing. The cloud trend is like internationalization and outsourcing: the financial advantages are similar to outsourcing's, and they're hard for most IT and business leaders to pass up.

There are new applications every day, and the risk and security planning can be very, very complex. For example, there are now cloud services for email archiving. That means your company's email is stored in a data center that you have no control over, and it can be breached.

Can it be done safely? Sure, but you have to plan for it. In the email-archiving example, a pretty common solution on the market is to encrypt the data before it leaves your data center. It never gets decrypted while it's being stored at a provider like Amazon, Yahoo, or some of the others. So, definitely, there are ways of taking advantage of the cloud without taking on additional risk.
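The encrypt-before-it-leaves pattern looks roughly like this. The cipher below is a deliberately toy construction (a SHA-256 counter keystream plus an HMAC integrity tag) used purely to show the data flow; a real deployment would use a vetted authenticated cipher such as AES-GCM from an established library, never hand-rolled crypto like this.

```python
import hashlib
import hmac

def _keystream(key, nonce, length):
    """Toy keystream: hash successive counter blocks. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_for_archive(key, nonce, plaintext):
    """Encrypt and tag a message before it leaves the data center."""
    ks = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag  # this blob is all the provider ever sees

def decrypt_from_archive(key, blob):
    """Verify integrity, then recover the plaintext back on premises."""
    nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("archive blob failed integrity check")
    ks = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

The key stays on premises, so the provider stores only ciphertext and a tag it cannot forge or read.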

Gardner: How can organizations get started on this? It looks like an awful lot to go after at once. Is there a path -- a crawl, walk, run approach -- that you would recommend for improving your posture when it comes to security and containment?

Rahbari: This is a very complicated and difficult thing to learn, and that's where partners and other firms can really be a tremendous help. First, I'd start with the existing organization. Five years ago, it was difficult to sell security to business leaders. Now, those same business leaders see it in the paper every day -- the numbers are astronomical -- Sony, Google, and others who have had breaches.

From the inside, a good indication of a security culture change is when you have a dedicated chief information security officer (CISO), the company has a security or risk committee, security is a line item in the budget and not just buried within the IT budget, and security is treated as a business issue.

Effects of breaches

With the incredible amount of regulatory burden, scrutiny, and oversight, a breach can really tank a company overnight. Just two months ago, you could read in the paper about a company that lost half of its stock value overnight after a breach, when Visa hinted that it might stop using them.

I'd highly recommend that you hire a reputable company to get started, if your particular firm cannot afford to invest in hiring the experts. There are lots of firms that can help. You can outsource to start with and then, as you feel comfortable, bring it in-house, leveraging the expertise of this highly specialized field to protect your company and its assets.

Gardner: Very good. I'm afraid we'll have to leave it there. I'd like to thank our guest on this discussion about containment and security, Kaivan Rahbari, Senior Vice President of Risk Management at FIS Global, based in Jacksonville, Florida. Thanks so much, Kaivan.

Rahbari: My pleasure. It's good talking to you.

Gardner: And you can gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance. And you can also always access this and other episodes in our HP Discover Performance Podcast Series on iTunes under BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.


Transcript of a BriefingsDirect podcast on how companies can protect themselves, given that security breaches are an inevitable fact of life. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Wednesday, September 26, 2012

HP Discover Performance Podcast: McKesson Redirects IT to Become a Services Provider That Delivers Fuller Business Solutions

Transcript of a BriefingsDirect podcast from HP Discover 2012 on how health-care giant McKesson has revamped its IT approach and instituted a cultural shift toward services.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
  
Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it's making an impact on people's lives.

Once again, we're focusing on how IT leaders are improving performance of their services to deliver better experiences and payoffs for businesses and end users alike. This time, we’re coming to you directly from the HP Discover 2012 Conference in Las Vegas. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We’re exploring some award-winning case studies from leading enterprises to see how an IT transformation approach better supports business goals. And we'll see how IT performance improvements benefit these companies, their internal users, and their global customers.

Our next innovation case study interview highlights how pharmaceuticals distributor and healthcare information technology services provider McKesson has transformed the very notion of IT. We will see how a shift in culture and an emphasis on being a services provider has allowed McKesson to not only deliver better results, but elevate the role of IT into the strategic fabric of the company.

To learn more about how McKesson has recast the role of IT and remade its impact in a positive way, we're joined by Andy Smith, Vice President of Applications Hosting Services at McKesson. Welcome, Andy.

Andy Smith: Thank you, Dana. I really appreciate you inviting me and I am glad to be able to share my experiences with others.

Gardner: Let me start with this notion of IT transformation. We hear a lot about that. I wonder if you have any major drivers that you identified, as you were leading up to this, that allowed you to convince others that this was worth doing.

Smith: What we did, and this started several years ago, was to focus on what our competition was doing, not the competition to McKesson but the competition to IT. In other words, who was the outsourcer or who were the other data-center providers. From that, we were able to focus on our cost, quality, and availability and come up with a set of metrics that covered it all, so that we could know the areas we needed to transform and the areas where we were okay.

Gardner: So, in a sense, you had to redefine yourself as a services provider, because that's who you saw as your competition?

Smith: Exactly, and that's who our customers are talking to -- our competition. When they came to us for a service, they had already talked to third-party providers. And so we realized very quickly that our competition was the outside world, so we had to model ourselves to be more like them and less like an internal IT department.

Gardner: That, of course, cuts across not only technology, but culture and the whole idea of being accountable and to whom. So let's start at that higher level. How did you begin to define what the new culture for IT should be?

Balanced scorecard

Smith: We started out with a balanced scorecard. It really came down to whether the employees and the customers were satisfied. Did we do what we said -- were we accountable -- and were the financials right?

So when we started setting up that balanced scorecard, that on its own started to change the culture. Suddenly, customer satisfaction mattered, and suddenly, system availability mattered, because the customer cared, and we had to keep the employees trained, so that they were satisfied.

Over time, that really changed the culture, because we're looking at all four parts of the scorecard to make sure we're moving forward.

Gardner: I suppose it's essential, when you're a services provider rather than a technology products producer and deployer, that you understand what are the right metrics to measure. So is it a different set of metrics from IT to a service provider role of IT?

Smith: It really is, because when we were just an internal IT department, we spent more time saying, "The customer gave us an order, we hit the checkbox and finished that order, we're done." We were always asking, "Did we do it, and did we do it on time?"


That's not really what the customer was looking for. The customer was asking: "Did you deliver what I needed, which may be different from what I asked for? Did you deliver it at a good price? Did you deliver it at a good quality?" So we switched from measuring the ins and outs of an order taker to whether we were delivering the solution at the right price.

Gardner: As we've seen in a number of companies, when they’ve gone to more measurement using metrics, key performance indicators (KPIs), and working towards service-level agreements (SLAs), sometimes that can become daunting. Sometimes, there is too much, and you lose track of your goal. Is there a way that you work towards a triage or a management approach for those metrics, those KPIs, that allowed you to stay focused on these customer issues?

Smith: What we really focused in on were the real drivers. A lot of the measures are more trailing indicators. Even money tended to be a trailing indicator.

So we went into what's really driving our quality and what's really driving our cost. We got down to the four or five that mattered: Is the system up and running? Are changes causing outages? Are data-protection services reliable? Are events being handled quickly, almost like first-call resolution -- are they being resolved by the first person who gets the event?

The focus was to prevent the outage and shorten the mean time to restore, because in the end, all of that drops the cost. It worked, but it meant focusing on a handful of metrics rather than dozens.
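The handful of numbers Smith describes fall straight out of the incident log. The outage records below are invented for illustration; the point is only that availability and mean time to restore are simple arithmetic once outages are tracked consistently.

```python
# Hypothetical incident log: (start_hour, end_hour) of each outage
# across a 720-hour (30-day) month.
outages = [(100.0, 101.5), (340.0, 340.5), (600.0, 602.0)]
period_hours = 720.0

downtime = sum(end - start for start, end in outages)        # total hours down
availability_pct = 100.0 * (period_hours - downtime) / period_hours
mttr_hours = downtime / len(outages)                          # mean time to restore
```

With these numbers, 4 hours of downtime in the month gives roughly 99.44 percent availability and an MTTR of about 1.33 hours.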

Gardner: Is it fair to say that doing this well is, in fact, also a cost-saver? Is there a built-in mechanism for efficiency, when you start focusing on that service provider role, that brokering role?

Pulling down cost

Smith: It truly did bring down our cost within McKesson. I'll probably be off by several million, but each year we pull down our cost several million dollars. So every year my budget gets smaller, but every year my quality gets higher, my employee satisfaction gets higher, and my customer satisfaction gets higher.

You really can get both. You don't have to sacrifice quality to reduce cost. The trick was saying that I no longer needed a person to do this commodity factory work. I could use a machine to do it, which freed up the worker from being a reactive commodity person to being a proactive, value-add person. It allowed employees to be more valuable, because they weren't doing the busy work anymore. So it really did work.

Gardner: For those in our audience who might not be familiar with McKesson, tell us a little bit more about the company. Specifically, tell us about the scale of your IT organization to put those millions of dollars into some perspective in the total equation?

Smith: McKesson IT is roughly 1,000 employees. The company is roughly 45,000 employees. So, percentage-wise, we're not that big. My personal budget to run the IT infrastructure is about $100 million a year.

So pulling out a few million dollars a year may be only a few percent, but it's still a pretty significant endeavor. We've managed to pull that cost out, both through the typical things like maintenance contracts and improved equipment, but also by not having to grow the full-time employee (FTE) base. I haven't had to let any FTEs go, but what we've discovered was that, as we did these things, I needed fewer employees.

As employees resigned, I didn't have to replace them. My staff base has been shrinking, but I haven't had anybody lose a job. So that's been also very reassuring for the employees, because they kept waiting for that big shoe to drop, waiting for us to say, "We're going to outsource you," but we've never had to do it.

Gardner: I guess when you compete against the outsourcers better, then you are going to retain those jobs and keep that skill set going. There is a cliché that you're able to take people from firefighting and put them into innovation. Is there a truth to that in what you've done?

Smith: That really is true. It took time, and we're not done, but getting people to stop thinking about the technology and start thinking about the business solution is a slow transition, because it's a real mind-shift. In a lot of ways, these employees see the reactive work as the bread-and-butter work that puts the paycheck on the table. It lets them be a firefighter and a hero, and if you take that away, the motivators are different.

It takes time to get people comfortable with the fact that your brain is worth a lot more doing value-add work than it was just doing the firefighting. We're still going through that cultural shift. In some ways, it's easier for the older employees, because if you go back a few decades, IT was that. It was programmer analyst, system analyst, and business analyst. For me, "analyst" disappeared from all my job titles.

In the last couple of decades, for some reason, we erased analyst, and now you're just a programmer or an operator. In my mind, we're bringing the analyst back, which, for the older employees, is easy, because they used to do it. For the younger employees, we've got to teach them how to be consultants. We've got to teach them how to be analysts. In some cases, it's a totally different, scary place to go, because you actually have to come out of the back office and talk to somebody, and they're not used to that.

Cultural shift

Gardner: Maybe there are methodologies that work here that you could discuss -- services-oriented architecture (SOA) comes to mind, and also ITIL. Have you been using ITIL approaches and SOA to help make those transitions? Is there a technology track as well as a cultural shift?

Smith: Yes, we went down the ITIL road, because we were manual before. Everybody was doing it with tribal knowledge. The way I did it today might be different than the way I'd do it tomorrow, because it's all manual, and it's all in people's heads.

We did go into ITIL version 3 and push it very hard to give that consistency, because the consistency really mattered. Then, we could really measure the quality. We could be ensured that no matter who did it or when it was done, it was done the same way, and that reliability mattered a lot.

We also got away from custom technology, to where everything is going to be a certain type of machine. It's going to look the same. All the tools are going to be fully integrated, no longer best-of-breed point solutions. Driving that standardization made a big difference. You no longer have to remember that the machine on the left reboots this way and the machine on the right reboots a different way, because they're all the same.

We made the equipment and tools standard and more of a commodity so that the people didn’t have to be that anymore. The people could be thought leaders. All those things really did work to drive out the cost and increase the quality, but it's a lot of different pieces. You can't do it with just one golden arrow. You have to hit it from every angle.

We had to change the technology, the people, and the processes. We had to increase the transparency to say we’re doing a good job or we’re doing a bad job. It was just, "Expose everything you’re doing."

That's scary at first, but in the end, we found out we really are competing with the competitors and we can continue to do it, and do it better. We understand healthcare, we understand McKesson, and we’re an internal group, so we don’t have a profit margin. All those things combined can make us a better IT solution than a third party could be.

Gardner: And as you entered that standardization process, did that services orientation become a value point for you? Did private cloud or an even a hybrid model start to become interesting? How far have you progressed in that “cloud direction”?

Smith: The services orientation helped a lot. We're on the IT side, so we started out with our services as Unix, as data, as Windows. Getting focused on that helped us remember what the service really was. We're now stepping back one step farther and saying that that no longer matters.

What really matters is the business solution you’re trying to solve. We’re stepping even farther back, saying that the service is order to cash, or the service is payroll, or the service is whatever. We’re stepping back farther, so we can look at the service from the standpoint of the customer. What does the customer want? The customer doesn’t want Unix. The customer wants order to cash. The customer doesn’t want Windows. The customer wants payroll.

Thinking about cloud

Stepping back has now allowed us to start thinking about that cloud. All the equipment underneath is commoditized, and so I can now sit back and say that the customer wants this business solution and ask who is the best person to give me the components underneath?

Some of them, for security reasons, we’re going to do on our internal cloud. Some of them, because of no security issues, we’re going to have a broker with an external provider, because they may be better, cheaper, or faster, and they may have that ability to burst up and burst down, if we’re doing R&D kind of work.

So it's brought us back to thinking like a business person. What does the business need and who is the best provider? It might not be me, but we’ll make that decision and broker it out. This year we're probably going to pull off our internal cloud and our external cloud and really have a hybrid solution, which we’ve been talking about for a couple of years. I think it will really happen this year.
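The brokering decision Smith describes can be sketched as a simple placement rule. The data classifications and the two-way internal/external split below are hypothetical simplifications of what a real broker would weigh (cost, latency, compliance, burst capacity).

```python
def place_workload(data_classification, bursty):
    """Hypothetical brokering rule: sensitive data stays on the internal
    cloud; non-sensitive, bursty workloads (e.g. R&D) can go external."""
    if data_classification in {"phi", "pci", "confidential"}:
        return "internal-cloud"   # security requirement overrides everything
    if bursty:
        return "external-cloud"   # elastic burst-up/burst-down capacity
    return "internal-cloud"       # default: steady-state work stays in-house
```

The business asks for "order to cash", and a rule like this decides which cloud supplies the components underneath.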

Gardner: We’re here at HP Discover and HP COO Bill Veghte was on the stage a little while ago. One of the things that he said that caught my attention was that we’re producing the app services and the Web services that are the expression of business processes.

I thought that was a good way to put it, because in the past, business processes had to conform to the applications. Now, we’re able to take the applications in the hybrid delivery model and extend them to form what the business processes demand. Is that also sort of a shift that's come along with your going more towards a service brokering capability?

Smith: It is a shift that's going on, and it's interesting, because I don't think all of this has matured. If you're dealing with the big packaged products, whether it's the Oracles or the SAPs, those vendors are dictating almost a custom solution in order to keep themselves alive. But that's probably 20 percent of my business, when I think about servers and applications.

The other 80 percent is really unique business services that our customers need to improve healthcare and to reduce the cost of healthcare, and those are really unique to McKesson. What I'm finding, when I look at those types of business services, is that they are the real bread and butter that makes our world different.

Having the hybrid capability does let me put together the pieces to optimize what the business need is, but it is the 80-20. For the 80 percent I can do it. For the other 20 percent, those vendors are probably going to lock me into a custom solution, but that's okay.

Gardner: Well great. I am afraid we’re about out of time. We’ve been discussing with McKesson, how they’ve recast the role and impact of IT. I want to thank our guest, Andy Smith, Vice President of Applications Hosting Services at McKesson. Thanks so much, Andy.

Smith: Thank you very much, Dana.

Gardner: And I also want to thank our audience for joining us for this special HP Discover Performance podcast coming to you from the HP Discover 2012 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from HP Discover 2012 on how health-care giant McKesson has revamped its IT approach and instituted a cultural shift toward services. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Monday, September 10, 2012

Server and Desktop Virtualization Produce Combined Cloud and Mobility Benefits for Israeli Insurance Giant Clal Group

Transcript of a BriefingsDirect podcast on the multiplier effects gained from virtualization in the enterprise.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how a large Israeli insurance and financial services group rapidly modernized its IT infrastructure. We’ll hear the story of how Clal Insurance Enterprises Holdings, based in Tel Aviv, both satisfied current requirements and built a better long-term technology architecture.

The rapid adoption of server virtualization at Clal -- which enabled desktop virtualization and, in turn, cloud and mobile computing benefits -- clearly illustrates the multiplier effect of value and capabilities from such IT transformation efforts. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We’ll learn how Clal’s internal IT organization, Clalbit Systems, translated that IT innovation and productivity into significant and measurable business benefits for its thousands of users and customers.

Here to fill us in on Clal’s impressive IT infrastructure transformation journey, is Haim Inger, the Chief Technology Officer and Head of Infrastructure Operations and Technologies at Clalbit Systems. Welcome, Haim.

Haim Inger: Nice to meet you.

Gardner: One of the things that’s interesting to me is the speed and depth of how your organization has embraced virtualization. You went to nearly 100 percent server virtualization across mission-critical applications in just a few short years. Why did you need to break the old way of doing things and why did you move so quickly to virtualization?

Inger: The answer is quite simple. When I got the job at Clal Insurance four years ago, everything was physical. We had about 700 servers, and to deploy a new server took us about two months. The old way of doing things couldn't hold up much longer.

Regulations and the new businesses that we needed to implement required us to deploy servers as quickly as possible and simplify the entire process -- from requesting a new server to deploying it and giving it the full disaster recovery (DR) solution that the regulations require.

The physical way of doing things just couldn’t supply the answer for those requirements, so we started to look for other solutions. We tested the well-known virtualization solutions that were available, Microsoft and VMware, and after a very short proof of concept (POC), we decided to go with VMware in a very specific way.

We didn’t want to go only on the development side, the laboratory side, and so on. We saw VMware as a long-term solution for our core applications, not just for islands of simple virtual servers, so we decided from day one to start using VMware on the SQL servers, the Oracle servers, and the SAP servers.

Full speed ahead


If those held up well, then we could, of course, also virtualize the simpler servers. It took us about four months to virtualize those initial servers, and that went very well. We just pushed the project ahead full speed and virtualized our entire data center.

Gardner: It seems as if you were concerned about DR first and foremost, but that led you on a path to wider virtualization of the servers. Is that correct?

Inger: Yes, that’s correct.

Gardner: As you’ve gone about this journey, why does it seem to be paying off both on the short term and setting you up for longer term benefits?

Inger: That’s very simple to answer. Today, to provision a new server for my customer takes about 20 minutes. As I said, in the past, in the physical world, it took about two months.

DR was the main reason for going into this project. During a DR test in the old days, we had to shut down our production site, start up all the servers on the DR site, and hope that everything worked fine. Whatever didn't work, we couldn't test again until a year later, at the next DR test.

Using VMware with Site Recovery Manager (SRM), I can do an entire DR test without any disruption to the organization, and I do it every three months. Watching our current DR status, if anything needs to be fixed, it’s fixed immediately. I don’t have to wait an entire year to do another test.

So those simple things are enabling us to give our organization the servers that they need, when they need them, and to do the DR testing in a much simpler way than we did in the past.

Gardner: Tell us a bit more about Clal. I'd like to learn about the size of your organization and the types of responsibilities you have. You’re supporting several different companies within the Clal Group, isn’t that right?

Inger: Clal is a group that contains a very big insurance company and another company that does trading on the Israeli and international stock markets. We have a pension company and insurance for cars, boats, apartments, and so on. We even have two facilities running in the United States and one in the UK.

We're about 5,000 employees and 7,000 insurance brokers, so that's about 12,000 people using our data center. We have about 200 different applications serving those people, those customers of ours, running on about 1,300 servers.

Large undertaking


Gardner: That's obviously a very large undertaking. How do you manage that? Is there a certain way that you've moved from physical to virtual while avoiding what some people refer to as server sprawl?

Inger: I know exactly what you mean about overpopulating the environment with more servers than needed, because it's very easy to provision a server today -- as I said, within one hour.

The way we manage that is by using VMware Chargeback. We've implemented this module, and we have full visibility into the usage of a server. If someone who requested a server hasn't used it over a period of three months, we'll know about it. We'll contact them, and if they don't require that server, we'll just take it back, and the resources of that server will be available once again for us.

That way, we're not handing out servers as easily as we could. We're taking back servers that are not used or that can even be consolidated into one single server. For example, if someone requested five web servers based on Microsoft IIS and we're sure they can be consolidated into just one server because CPU utilization is very low, we'll take them back.

If an application guy requires that a server have eight virtual CPUs, and we judge its use at peak time is only two, we'll take six virtual CPUs back. So the process is managed very closely in order not to give away servers, or even power, to existing servers that don't really need it.
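The reclamation policy Inger describes -- take back servers idle for a quarter, shrink over-allocated vCPUs down to observed peak demand -- can be sketched as a simple decision rule. This is a minimal illustration in Python over hypothetical usage data (the `VMStats` record and thresholds are assumptions for the example, not an actual VMware Chargeback API):

```python
from dataclasses import dataclass

@dataclass
class VMStats:
    name: str
    vcpus: int            # vCPUs currently allocated to the VM
    peak_vcpus_used: int  # peak concurrent vCPU demand observed
    idle_months: int      # consecutive months with no meaningful activity

def rightsize(vm: VMStats) -> str:
    """Return a reclamation action following the policy described above."""
    if vm.idle_months >= 3:
        return "reclaim"  # unused for a quarter: take the whole server back
    if vm.peak_vcpus_used < vm.vcpus:
        # e.g. 8 vCPUs allocated but only 2 used at peak: give 6 back
        return f"shrink to {vm.peak_vcpus_used} vCPUs"
    return "keep"

# Example mirroring the interview: an 8-vCPU VM that peaks at 2
print(rightsize(VMStats("web01", 8, 2, 0)))  # shrink to 2 vCPUs
print(rightsize(VMStats("web02", 2, 2, 4)))  # reclaim
```

In practice the usage numbers would come from the chargeback or monitoring system rather than being hand-entered; the point is that the policy itself is mechanical once visibility into per-VM usage exists.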

Gardner: Tell me how you’ve been able to develop what sounds like a private cloud, but a sort of dynamic workload capability. Do you consider what you’ve done a private cloud, or is that something you’re looking to put in?

Inger: We do consider what we've done a private cloud. We're actually looking into ways of going to a hybrid cloud and pushing some of our systems to the public cloud in order to build that hybrid. But, as I said, we do consider the work we've been doing over the past three years as fully a private cloud.

Gardner: Have there been any hardware benefits when moving to a private cloud, perhaps using x86 hardware and blades? How has that impacted your costs, and have you moved entirely to standardized hardware?

Inger: Of course. When we saw that those 20 servers we initially virtualized in late 2008 all worked okay, we decided to set standards. One of the standards we decided upon was that if it doesn't work on VMware, it doesn't go into our data center. So a lot of applications that ran on Itanium microprocessors were migrated to Linux on top of VMware running on x86.

Saving money

We managed to save a lot of money, both in supporting those legacy systems and in developing on those legacy systems. They're all gone now. Everything that we have is virtual, 100 percent of the data center. Everything runs on x86 blades, running Windows 2008 or Linux.

All the systems that used to run on a mainframe are now Micro Focus COBOL running on top of the latest Red Hat Linux version, on top of VMware, on x86 blades.

Gardner: Let’s take the discussion more towards the desktop, the virtualization experience you’ve had with servers and supporting such workloads as SQL Server, Oracle, and SAP. This has given you a set of skills and confidence in virtualization that you’ve now taken out, using VMware View, to the desktop. Perhaps you could tell us how far you’ve gone in the virtual desktop infrastructure (VDI) direction?

Inger: After finishing the private cloud in our two data centers, the next step within that cloud was the desktop. What we looked at was how to minimize the amount of trouble we got from our desktops -- back then it was Windows XP -- and how to enable mobility for users, giving them the full desktop experience, whether they're connecting from their own desktop in the workplace, using an iPad device, connecting from home, or visiting an insurance broker outside of our offices.

We looked at a couple of technologies that would fit, among them VMware View. Again, after a short POC, we decided to go ahead with VMware View. We started the project in January 2012, and right now we're running 600 users. All of them are using VMware View 4.6, which is being upgraded, as we speak, to version 5.1.

It enables us to give those users an immediate upgrade to a Windows 7 experience, by just installing VMware View, instead of having to upgrade each user's station -- and without physically going to those 600 users, who are on Windows 7 right now.

And we're delivering it on every device they work on. Whether they're at work, at home, or outside the office, their devices, iPads as we said earlier, are getting the same experience. The plan is that, by the end of next year, all of our employees could be working on VMware View.

Gardner: With those 600 or more users, have you been able to measure any business benefits -- maybe a cost savings or the agility of being able to work remotely. Have you been able to find a return on investment (ROI) in business terms?

Inger: It's quite hard to calculate our ROI on VDI down to the last dollar, because the initial cost is very high. But in the past, in a building where I had 300 people working, I had to have two technicians working full time giving assistance to those end users.

After going to full VDI in that building, I don't have any technician there at all. When a user has a problem on a physical workstation, we usually remote-control the station and try to fix it. Sometimes you have to reformat the entire station. When a user has a problem on a VDI station, he can just log out, log in, and within less than a minute get a completely new workstation. A technician doesn't even have to remote-control the station to fix the problem.

Same experience

The ability to give the user the same experience on every device he works on is sometimes priceless. When I fly from Israel to the United States and have a Wi-Fi connection on the plane, I can use an iPad and work on my office applications as if I were in the office. Otherwise, on a 12-hour flight, I'd be 12 hours out of work.

If you take into account the entire ecosystem that we've built surrounding VMware View, it's actually priceless, but it's very hard to quantify exactly how many dollars it saves us on a daily basis.

Gardner: Has the experience with the initial 600 now prompted you to move to VDI across more of your thousands of workers? How aggressive do you intend to be with your use of View?

Inger: By the end of 2012, our plan and budget was for 1,000 users, so we're on the way to meeting our goal in December this year. For next year, 2013, our goal is to add 2,000 more, which will cover almost the entire organization. That leaves something like 500 power users. I'm not sure that VMware View is the best solution for them yet. That will be tested in 2014.

Gardner: It certainly sounds as if you're able to move rapidly to a mobile tier using View along with your cloud capabilities. That's something many other companies are seeing their users interested in. Do you have a sense that VDI is a stepping stone to supporting this mobile capability as well?

Inger: Of course. VDI is a stepping stone and an essential element in implementing a bring-your-own-device (BYOD) policy. That's something we're doing. We're in the initial steps of this policy, mainly with iPad devices, which a lot of employees are bringing to work and would like to use whether they're on site, offsite, or at home. Without VDI, it would be impossible to give them a solution. We have tons of iPads today that are connecting to the office via VDI and getting a full Windows experience.

Gardner: I'd like to get your thinking around virtuous adoption. As we've talked about DR, the full virtualization of your server workloads, your move to standardized operating systems and hardware, moving to VDI, and then moving to hybrid cloud and now mobile, it truly sounds as if there is a clear relationship between what you've done over the years with virtualization and this larger architectural payoff. Maybe you could help me better understand why the whole is perhaps greater than the sum of the parts.

Inger: The whole is greater than the sum of the parts because, when I chose VMware as a partner, combined with EMC on the storage side and their professional services, I actually did a lot of the work together with my own people.

Life gets easier

Life gets easier managing IT as an infrastructure, when you choose all those parts together. An application guy could come to you and say, "I didn't calculate the workload correctly on the application that's going to be launched tomorrow, and instead of 2 front-end servers I need 15."

Some other person could come to me and say, "I now have five people working offshore, outside of Israel, and I need them to help me with a development task that is urgent. I need to give them access to our development site. What can you do to help me?"

I tell him, "Let's put them in our VDI environment, and they can start working five minutes from now." When you put all of those things together, you actually build an ecosystem that is easier to manage, easier to deploy, and everything is managed from a central view.

I know how many servers I have. I know the power consumption of those servers. I know about CPU, memory, disk I/O, and so on. That even informs the decision-making process on how much more power I'll need on the server side and how many disks I'll need to buy for an upcoming project. It's a much easier decision-making process. Back in the physical days, when each server had its own memory, its own CPU, and its own disk, there was much more guessing than deciding based on facts.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been talking about how Clal Insurance Enterprises Holdings has both satisfied current requirements and built a better long-term technology architecture, all based on virtualization. And we've seen how such IT innovation and productivity have translated into significant business benefits for Clal's users and customers.

I'd like to thank our guest, Haim Inger, the Chief Technology Officer and Head of Infrastructure Operations and Technologies at Clalbit Systems. Thank you so much, Haim.

Inger: Thank you very much.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to you, our audience, for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on the multiplier effects gained from virtualization in the enterprise. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in: