Monday, May 07, 2012

Expert Chat with HP on How Better Understanding Security Makes it an Enabler, Rather than an Inhibitor, of Cloud Adoption

Transcript of a BriefingsDirect podcast on the role of security in moving to the cloud and how sound security practices can make adoption easier.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Join the next Expert Chat presentation on May 15 on support automation best practices.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on best practices for protecting cloud-computing implementations and their use.

Business leaders clearly want to exploit the cloud values that earn them results fast, but they also fear the risks perceived in moving to cloud models rashly. It now falls to CIOs not only to rapidly adapt to cloud, but also to find ways to protect their employees and customers, even as security threats grow.

This is a serious but not insurmountable challenge.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. To help find out how to best implement protected cloud models, I recently moderated an HP Expert Chat session with Tari Schreider, HP Chief Architect of HP Technology Consulting and IT Assurance Practice. Tari is a Distinguished Technologist with 30 years of IT and cyber security experience, and he has designed, built, and managed some of the world’s largest information protection programs.

In our discussion, you’ll hear the latest recommendations for how to enable and protect the many cloud models being considered by companies the world over. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

If you understand the security risk and gain a detailed understanding of your own infrastructure, security can move from an inhibitor of cloud adoption to an enabler.



As part of our chat, we're also joined by three other HP experts, Lois Boliek, World Wide Manager in the HP IT Assurance Program; Jan De Clercq, World Wide IT Solution Architect in the HP IT Assurance Program; and Luis Buezo, HP IT Assurance Program Lead for EMEA.

Our discussion begins with a brief overview from me of the cloud market and current adoption risks. We'll begin by looking at why cloud and hybrid computing are of such great interest to businesses and why security concerns may be unnecessarily holding them back.

If you understand the security risk, gain a detailed understanding of your own infrastructure, and follow proven reference architectures and methods, security can move from an inhibitor of cloud adoption to an enabler.

Cloud has sparked the imagination of business leaders, and many see it now as essential. Part of that is because the speed of business execution, especially the need for creating innovations that span corporate boundaries and extend across business ecosystems, has made this a top priority for corporations.

Every survey that I've seen and every panelist that I've talked to is saying that the cloud is elevating in terms of priority, and a lot of it has to do with the agility benefits. There is a rush to be innovative and to be a first mover. That also puts a lot of pressure on the business people inside these companies, and they have been intrigued by cloud computing as a means of getting them where they need to go fast.

This now means that the center of gravity for IT services is shifting towards the enterprise’s boundaries, moving increasingly outside of their firewalls, and therefore beyond the traditional control of IT.

Protection risks

Business leaders want to exploit the cloud values that bring them productivity results fast, but IT leaders think that the protection risk perceived in moving to cloud models could come back to bite them. They need to be aware and maybe even put the brakes on in order to do this correctly.

So it now falls on CIOs and other leaders in IT not only to rapidly adopt cloud models, but to quickly find the means to make cloud use protected for operations, data, processes, intellectual property, their employees, and their customers, even as security and cyber threats ramp up.

We'll now hear from HP experts from your region about meeting these challenges and obtaining the business payoffs by making the transition to cloud enablement securely. Now is the time to prepare for successful cloud use.

We're going to be hearing specifically about how HP suggests that you best understand the transition to cloud-protected enablement. Please join me now in welcoming our main speaker, Tari Schreider. Tari, please tell us more about how we can get into the cloud and do it with low risk.

Tari Schreider: It's always a pleasure to be able to sit with you and chat about some of the technology issues of the day, and certainly cloud computing protection is the topic that’s top of mind for many of our customers.

I want to begin talking about the four immutable laws of cloud security. For those of you who have been involved in information security over time, you understand that there is a certain level of immutability inherent in security. These are things that will always be, things that will never change; it is a state of being.

When we started working on building clouds at HP a few years ago, we were also required to apply data protection and security controls around those platforms we built. We understood that the same immutable laws that apply to security, business continuity, and disaster recovery extended into the cloud world.

First is an understanding that if your data is hosted in the cloud, you no longer directly control its privacy and protection. You're going to have to give up a bit of control, in order to achieve the agility, performance, and cost savings that a cloud ecosystem provides you.

The next immutable law is that when your data is burst into the cloud, you no longer directly control where the data resides or is processed.

One of the benefits of cloud-based computing is that you don’t have to have all of the resources at any one particular time. In order to control your costs, you want to have an infrastructure that supports you for daily business operations, but there are ebbs and flows to that. This is the whole purpose of cloud bursting. For those of you who are familiar with grid-based computing, the models are principally the same.
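To make the bursting idea concrete, here is a minimal sketch of an overflow placement policy in Python. The capacity figure and demand curve are invented for illustration; this is not how any particular cloud platform actually schedules work.

```python
# Toy cloud-bursting policy: serve baseline demand on private capacity,
# overflow the excess to a public cloud. All numbers are illustrative.
PRIVATE_CAPACITY = 100  # units of work the owned infrastructure can absorb

def place_workload(demand: int) -> dict:
    """Split demand between private infrastructure and a public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    return {"private": private, "public_burst": demand - private}

# A day's ebb and flow: quiet morning, lunchtime spike, evening peak
for hour, demand in [(9, 60), (12, 140), (19, 230)]:
    print(f"{hour:02d}:00 -> {place_workload(demand)}")
```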

Different locations

Rather than your data being in one or maybe a secondary location, it could actually be in 5, 10, or maybe 30 different locations, because of bursting, and also be under the jurisdiction of many different rules and regulations, something that we're going to talk about in just a little bit.

The next immutable law is that if your security controls are not contractually committed to, then you may not have any legal standing in terms of the control over your data or your assets. You may feel that you have the most comprehensive security policy that is rigorously reviewed by your legal department, but if that is not ensconced in the terminology of the agreement with a service provider, then you don’t have the standing that you may have thought you had.

The last immutable law is that if you don't extend your current security policies and controls to the cloud computing platform, you're more than likely going to be compromised.

You want to resist trying to create two entirely separate, disparate security programs and policy manuals. Cloud-based computing is an attribute of the Internet. Your data and your assets are the same; the big change is where they reside and how they're being accessed. We strongly recommend that you build that into your existing information security program.

Gardner: Tari, these are clearly some significant building blocks in moving towards cloud activities, but as we think about that, what are the top security threats from your perspective? What should we be most concerned about?

The reason to move to cloud is for making data and assets available anywhere, anytime.



Schreider: Dana, we have the opportunity to work with many of our customers who, from time to time, experience breaches of security. As you might imagine, HP, a very large organization, has literally hundreds of thousands of customers around the world. This provides us with a unique vantage point from which to study the morphology of cloud computing platform security, outages, and security events.

One of the things that we also do is take the pulse of our customer base. We want to know what’s keeping them up at night. What are the things that they're most concerned with? Generally, we find that there is a gap between what actually happens and what people believe could happen.

I want to share with you something that we feel is particularly poignant, because it is a direct interlock between what we're seeing actually happening in the industry and also what keeps our clients up late at night.

First and foremost, there's the ensured continuity of the cloud-computing platform. The reason to move to cloud is for making data and assets available anywhere, anytime, and also being able to have people from around the world access that data and solve business needs.

If the cloud computing platform is not continuously available, then the business justification for going there in the first place is largely moot.

Loss of GRC control

Next is the loss of span of governance, risk management, and compliance (GRC) control. In today's environment, we can build a GRC management program with dominion over our assets and our information within our own environment.

Unfortunately, when we start extending this out into a cloud ecosystem, whether private, public, or hybrid, we don’t necessarily have the same span of control that we have had before. This requires some delicate orchestration between multiple parties to ensure that you have the right governance controls in place.

The next is data privacy. Much has been written on data privacy and protection across the cloud ecosystem. Today, you may have a data privacy program that’s designed to address the security and privacy laws of your specific country or your particular state that you might reside in.

However, when you're moving into a cloud environment, that data can now be moved or burst anywhere in the world, which means that you could be violating data-privacy laws in another country unwittingly. This is something that clients want to make sure that they address, so it does not come back in terms of fines or regulatory penalties.

Mobility access is the key to the enablement of the power of the cloud. It could be a bring-your-own-device (BYOD) scenario, or it could be devices that are corporately managed. Basically you want to provide the data and put it in the hands of the people.

You have to make sure that you have an incident-response plan that recognizes the roles and responsibilities between owner and custodian.



Whether they're out on an oil platform and need access to data, or whether it's the sales force that needs access to Salesforce.com data on BlackBerrys, the fact remains that the data in the cloud has to land on those mobile devices, and security is an integral part of that.

You may be the owner of the data, but there are many custodians of the data in a cloud ecosystem. You have to make sure that you have an incident-response plan that recognizes the roles and responsibilities between owner and custodian.

Gardner: Tari, the notion of getting control over your cloud activities is important, but a lot of people get caught up in the devil in the details. We know that cloud regulations and laws change from region to region, country to country, and in many cases, even within companies themselves. What is your advice, when we start to look at these detailed issues and all of the variables in the cloud?

Schreider: Dana, that is a central preoccupation of law firms, courts, and regulatory bodies today. What tenets of law apply to data that resides in the cloud? I want to talk about a couple of areas that we think are the most crucial, when putting together a program to secure data from a privacy perspective.

Just as you have to have order in the courts, you have to have order in the clouds. First and foremost, and I alluded to this earlier, is that the terms and conditions of the cloud computing services are really what adjudicates the rights, roles, and responsibilities between a data owner and a data custodian.

Choice of law

However, within that is the concept of choice of law. This means that, wherever the breach of security occurs, the courts can apply the choice of law, that is, the law of the land where the data resides, in order to determine who is at fault in a breach of security.

This is also true for data privacy. If your data resides in your home location, is that the choice of law by which you follow the data privacy standards? Or if your data is burst, how long does this have to be in that other jurisdiction before it is covered by that choice of law? In either case, it is a particularly tricky situation to ensure that you understand what rules and regulations apply to you.

The next one is transborder data-flow triggers. This is an interesting concept, because if you do a data-flow analysis for a cloud ecosystem, you'll find that when your data moves, it can actually cross various borders, going from jurisdiction to jurisdiction.

The data may be created in one jurisdiction. It may be sent to another jurisdiction for processing and analysis, and then may be sent to another location for storage, for intermediate use, and yet a fourth location for backup, and then possibly a fifth location for a recovery site.

This is not an atypical example. You could have five triggering events across five different borders. So you have to understand the legal obligations in multiple jurisdictions.
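To make those triggers concrete, here is a minimal sketch that walks a hypothetical data flow and flags each border crossing as a potential legal triggering event. The hops and jurisdictions are invented, and counting conventions vary; the speaker counts the creation event itself as a trigger as well.

```python
# Toy transborder data-flow analysis: flag each jurisdiction change as a
# potential legal triggering event. Hops and countries are hypothetical.
flow = [
    ("created",       "Germany"),
    ("processed",     "Ireland"),
    ("stored",        "United States"),
    ("backed up",     "Singapore"),
    ("recovery site", "Australia"),
]

triggers = [
    f"{prev_juris} -> {juris} when data is {step}"
    for (_, prev_juris), (step, juris) in zip(flow, flow[1:])
    if juris != prev_juris
]

print(f"{len(triggers)} transborder triggering events:")
for trigger in triggers:
    print(" -", trigger)
```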

The onus is predominantly placed on the owner of the data for the integrity of the data. The CSP basically wants no direct responsibility for maintaining the integrity of that data.



The next one is reasonable security, which is, under the law, what would a prudent person do? What is reasonable under the choice of law for that particular country? When you're putting together your own private cloud, in which you may have a federated client base, this ostensibly makes you a cloud service provider (CSP).

Or, in an environment where you are using several CSPs, what are the data integrity disclaimers? The onus is predominantly placed on the owner of the data for the integrity of the data, and after careful crafting of terms and conditions, the CSP basically wants no direct responsibility for maintaining the integrity of that data.

When we talk about who owns the data, there is an interesting concept, and there are a few test cases working their way through various courts. It's called the Berne Convention.

In the late 1990s, there were a number of countries that got together and said, "Information is flowing all over the place. We understand copyright protection for works of art and for songs and those types of things, but let’s take it a step further."

In the context of a cloud, could not the employees of an organization be considered authors, and could not the data they produce be considered a work? Wouldn't it therefore be covered by the Berne Convention, and thus fall under standard international copyright law? This is also something that's interesting.

Modify policies

The reason that I bring this to your attention is that it is this kind of analysis that you should do with your own legal counsel to make sure that you understand the full scope of what’s required and modify your existing security policies.

The last point is around electronic evidence and eDiscovery. This is interesting. In some cases it can be a double-edged sword. If I have custody of the data, then it is open under the rules of discovery. They can actually request that I produce that information.

However, if I don’t directly have control of that data, then I don’t have the right, or I don’t have the obligation, to turn it over under eDiscovery. So you have to understand what rules and regulations apply where the data is, and that, in some cases, it could actually work to your advantage.

Gardner: So we've identified some major building blocks for safe and proper cloud, we have identified the concerns that people should have as they go into this. We understand there is lot of detail involved. What are the risks in terms of what we should prioritize? How should we create a triage effect, if you will, in identifying what’s most important from that risk perspective?

Schreider: There are certainly risks unique to a cloud computing environment. However, one has to understand where the demarcation point is between a current risk register, or threat inventory, for assets that have already been classified and those risks that are unique to a cloud-computing environment.

You have to understand what rules and regulations apply where the data is, and that, in some cases, it could actually work to your advantage.



Much has been said about uniqueness, but at the end of the day, there are only a handful of truly unique threats. In many cases, they've been reconstituted from what is classically known as the top 20 types of threats and vulnerabilities to affect an organization.

If you have an asset, an application, and data, they're vulnerable. It is the manner, or the vector, by which they become vulnerable and can be compromised that comes from the idiosyncrasies of a cloud-computing environment.

One of the things that we like to do at HP for our own cloud environment, as well as for our customers, is to avail ourselves of the body of work that has been done through the European Network and Information Security Agency (ENISA), the US National Institute of Standards and Technology (NIST), and the Cloud Security Alliance (CSA) in understanding the types of threats that have been vetted internationally and are recognized as the threats most likely to occur within our environment.

We're strong believers in qualitative risk assessments and in using a Facilitated Risk Assessment Process (FRAP), where we simply want to understand the big picture. NIST has published a great model, a nine-box chart, where you can determine where the risk is to your cloud computing environment. You plot impact, from high to low, against likelihood, also from high to low.

So in a very graphical form, we can present to the executives of an organization where we feel we have the greatest threats. You'd have to have several overlays and templates for this, because you're going to have multiple constituencies in a cloud ecosystem. So you're going to have different views of this.
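As a minimal illustration of that nine-box idea, the sketch below rates hypothetical cloud threats on a three-level impact and likelihood scale. The cell values and example threats are invented assumptions, not NIST's published figures or HP's model.

```python
# Minimal sketch of a qualitative nine-box risk chart (illustrative only).
LEVELS = {"low": 0, "medium": 1, "high": 2}

# Risk rating for each (likelihood, impact) cell of the nine-box chart.
MATRIX = [
    # impact:  low       medium    high
    ["low",    "low",    "medium"],   # likelihood: low
    ["low",    "medium", "high"],     # likelihood: medium
    ["medium", "high",   "high"],     # likelihood: high
]

def rate(likelihood: str, impact: str) -> str:
    return MATRIX[LEVELS[likelihood]][LEVELS[impact]]

# Hypothetical cloud threats for illustration
threats = [
    ("Data-privacy breach after cross-border burst", "medium", "high"),
    ("Multi-tenant isolation failure",               "low",    "high"),
    ("Cloud platform outage",                        "medium", "medium"),
]

for name, likelihood, impact in threats:
    print(f"{rate(likelihood, impact):>6}  {name}")
```

A custodian and an owner would populate the same grid with different likelihood and impact ratings, which is the overlay effect described above.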

Join the next Expert Chat presentation on May 15 on support automation best practices.

Different risk profiles

Your risk profile may be different if you are the custodian, versus the risk profile if you're the owner of the data. This is something that you can very easily put together and present to your executives. It allows you to model the safeguards and controls to protect the cloud ecosystem.

Gardner: We certainly know that there is a great deal of opportunity for cloud models, but unfortunately, there is also significant down side, when things don’t go well. You're exposed. You're branded in front of people. Social media allows people to share issues when they arise. What can we learn from the unfortunate public issues that have cropped up in the past few years that allows us to take steps to prevent that from happening to us?

Schreider: These are all public events. We've all read about these events over the last 16-18 months, and some of them have occurred within just the last 30 days or so. This is not to admonish anybody, but basically to applaud these companies that have come forward in the interest of security. They've shared their postmortem of what worked and what didn’t work.

What goes up can certainly come down. Regardless of the amount of investment that one can put into protecting their cloud computing environment, nobody is immune, whether it’s a significant and pervasive hacking attempt against an organization, where sensitive data is exfiltrated, or whether it is a service-oriented cloud platform that has an outage that prevents people from being able to board a plane.

When an outage happens in your cloud computing environment, it definitely has a reverberation effect. It’s almost a digital quake, because it can affect people from around the world.

You want to make sure that you have a secure system development lifecycle methodology to ensure that the application is secure and has been tested for all conventional threats and vulnerabilities.



One of the things that I mentioned before is that we're very fortunate that we have that opportunity to look at disaster events and breaches of security and study what worked and what didn’t.

I've put together a little model that analyzes the storm damage, if you will, looking at the types of major events that have occurred. I've looked at the control construct that exists, or should exist, in a private cloud, the control construct that should exist in a public cloud, and of course in a hybrid cloud, which is the convergence of the two, where we would be able to mix and match those.

If you have a situation where an external threat infiltrates, hacks into, and compromises an application in a private cloud environment, you want to make sure that you have a secure system development lifecycle methodology to ensure that the application is secure and has been tested for all conventional threats and vulnerabilities.

In a public cloud environment, you normally don't have that same avenue available to you. So you want to make sure that the service provider either presents to you, or performs on your behalf, a web-application security review and an external threat and vulnerability test.

In a cloud environment, where many different customers and users are grouped together, you have to have a basis for segregating data and operations, so that a problem with one tenant doesn't affect everybody.

Multi-tenancy strategies

In a private cloud environment, you would set up your security zone and segmentation, but in the public cloud environment, you would have your multi-tenancy strategies in place and you would make sure that you work with that service provider to ensure that they had the right layers of security to protect you in a multi-tenant environment.

Data encryption is critical. One of the things you're going to find is that in a private cloud, it's your responsibility to provide the data encryption.

Most public cloud providers don't provide data encryption. If they do, it's offered as a service; you end up in a dedicated model as opposed to a shared model, and it's more expensive. But the protection of that data from the encryption perspective is generally going to lie with the owner.
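As a minimal sketch of what owner-side encryption can look like, the example below encrypts data before it ever leaves your environment, so the CSP stores only ciphertext. It assumes Python's cryptography package and a hypothetical upload_to_cloud call; a real deployment would keep the key in an HSM or key-management service rather than in process memory.

```python
# Minimal sketch: encrypt client-side so the owner, not the CSP, holds the key.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: an HSM or key-management service
cipher = Fernet(key)

record = b"customer PII that must never reach the CSP in plaintext"
ciphertext = cipher.encrypt(record)

# upload_to_cloud(ciphertext)  # hypothetical call; only ciphertext leaves
assert cipher.decrypt(ciphertext) == record  # owner can still recover the data
```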

The difference with disaster recovery is that in a private cloud, physical assets need to be recovered from a DR perspective, whereas in a public cloud it becomes a business-continuity matter of making sure the CSP can keep your business covered.

As you can see, the list goes on. There's a definite correlation with some slight nuances between cloud computing incidents that affect a private cloud versus a public cloud.

You never really know where your perimeter is. Your perimeter is defined by the mobility devices, and you have many different moving parts.



Gardner: Tari, we've talked about the ills. We've talked about cloud protection. What about the remediation and the prescription? How can we get on top of this?

Schreider: As we get towards the end and open it up for questions for our experts to answer specific questions for those who have attended, I'll share with you what we do at HP, because we do believe in eating our own dog food.

First and foremost, we understand that the cloud computing environment can be a bit chaotic. It can be very gelatinous. You never really know where your perimeter is. Your perimeter is defined by the mobility devices, and you have many different moving parts.

We're great believers that you need a structure to bring order to that chaos. So we're very fortunate to have one of the authors of HP's Cloud Protection Reference Architecture, Jan De Clercq, with us today. I encourage people to please take advantage of that and ask him any architecture questions.

But as you can see here, we cleanly defined the types of security that should exist within the access device zone, the types of security that are going to be unique to the model for software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS), and how that interacts with a virtualized environment. Having access to this information is crucial.

Unique perspective

The other thing we also understand is that we have to bring in service providers who have a unique perspective on security. One of those partners that we've chosen to help build our cloud reference architecture with is Symantec.

The next thing that I want to share with you is that it's also an immutable law that the level of investment that you make in protecting your cloud environment should be commensurate with the value of the assets that are being burst or hosted in that cloud environment.

At HP, we work with HP Labs and our Information Technology Assurance practice. We've put together what is now a patent-pending model on how to analyze the security controls, their level of maturity, in contrast to the threat posture of an organization, to be able to arrive at the right layer of investment to protect your environment.

We can look at the value of the assets. We can take a look at your budget. We can also do a what-if analysis. If you're going to have a 10 percent cut in your budget, which security controls can you most likely cut that will have the least amount of impact on your threat posture?
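Purely to illustrate the kind of what-if analysis described here, and emphatically not HP's patent-pending model, a first pass might rank controls by risk reduction per dollar under a 10 percent budget cut. Every figure below is invented.

```python
# Illustrative what-if: which controls to keep under a 10% budget cut,
# favoring risk reduction per dollar. All numbers are hypothetical.
controls = [
    # (name, annual cost, relative risk reduction)
    ("Data encryption",        120_000, 30),
    ("Identity & access mgmt", 150_000, 28),
    ("Security monitoring",    100_000, 18),
    ("App security testing",    80_000, 14),
    ("Awareness training",      50_000,  5),
]

budget = sum(cost for _, cost, _ in controls) * 0.9  # 10 percent cut

# Greedy pass: keep the controls with the best risk reduction per dollar.
spent, kept = 0, []
for name, cost, reduction in sorted(controls, key=lambda c: c[2] / c[1], reverse=True):
    if spent + cost <= budget:
        kept.append(name)
        spent += cost

print("Keep:", kept)
print("Cut :", [name for name, _, _ in controls if name not in kept])
```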

The last point that I want to talk about, before we open it up to the experts, is that we talked a little bit about the architecture, but I really want to emphasize the framework. HP is a founding member of ITIL and a principal provider of ITSM-type services. We are on CSA standards bodies and have written a number of chapters. We believe that you need to have a very cohesive protection framework for your cloud computing environment.

The level of investment that you make in protecting your cloud environment should be commensurate with the value of the assets that are being burst or hosted in that cloud environment.



Whether it's cloud or just security in general, we're big believers in having an information technology architecture that's defined by layers. What is the business rationale for the cloud, and what are we trying to protect? How should it work together functionally? Technically, what types of products and services will we use, and then how will it all be implemented?

We also have a suite of products that we can bring to our cloud computing environment to ensure that we're securing and providing governance, securing applications, and then also trying to detect breaches of security. I've talked about our reference architecture.

Something that's also unique is our P5 Model, where basically we look at the cloud computing controls and we have an abstraction of five characteristics that should be true to ensure that they are deployed correctly.

As I mentioned before, we're either a principal member, contributing member, or founding member of virtually every cloud security standards organization that's out there. Once again, we can't do it by ourselves, and that's why we have strategic partners like the VMwares and Symantecs of the world.

Gardner: Okay. Now, we're going to head over to our experts who are going to take questions.

I'd like to direct the first one to Luis Buezo joining us from Spain. There's a question here about key challenges regarding data lifecycle specifically. How do you view that? What are some of the issues about secure data, even across the data lifecycle?

Key challenges

Luis Buezo: Based on CSA recommendations, we're not only talking about data security related to confidentiality, integrity, and availability; there are other key challenges in the cloud, like location of the data, to guarantee that the geographical locations used are permitted by regulations.

There's data remanence, the need to guarantee that data is effectively removed, for example, when moving from one CSP to a new one, as well as data backup and recovery schemes. Don't assume that cloud-based data is backed up by default.

There are also data discovery capabilities to ensure that all data requested by authorities can be retrieved.

Another example is data aggregation and inference issues. Controls need to be implemented to prevent revealing protected information. So there are many issues in data lifecycle management.

Gardner: Our next question should go to Jan. The inquiry is about being cloud-ready for dealing with confidential company data. How do you come down on that?

Jan De Clercq: HP's vision is that many cloud services today are not always ready for letting organizations store their confidential or important data. That's why we recommend that organizations, before they consider moving data into the cloud, always do a very good risk assessment.

They should make sure that they clearly understand the value of their data, but also understand the risks that can occur to that data in the cloud provider's environment. Then, based on those considerations, they can determine whether they should move their data into the cloud.

We also recommend that consumers get clear insights from the CSP on exactly where their organization's data is stored and processed, and where it travels inside the network environment of the cloud provider.

As a consumer, you need to get a complete view of what's done with your data and how the CSP is protecting it.

Gardner: Okay. Jan, here is another one I'd like to direct to you. What are essential data protection security controls that they should look for from their provider?

Clercq: It’s important that you have security controls in place that protect the entire data lifecycle. By data lifecycle we mean from the moment that the data is created to the moment that the data is destroyed.

Data creation

When data is created, it's important that you have a data classification solution in place and that you apply proper access controls to the data. When the data is stored, you need confidentiality, integrity, and availability protection mechanisms in place. Then, you need to look at things like encryption tools and information rights management tools.

When the data is in use, it's important that you have proper access control in place, so that you can make sure that only authorized people can access the data. When the data is shared, or when it's sent to another environment, it's important that you have things like information rights management or data loss prevention solutions in place.

When the data is archived, it’s important that it is archived in a secured way, meaning that you have proper confidentiality, integrity, and availability protection.

When the data is destroyed, it’s important, as a consumer, that you make sure that the data is really destroyed on the storage systems of your CSP. That’s why you need to look at things like crypto-shredding and other data destruction tools.
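Crypto-shredding, mentioned above, deserves a concrete illustration: if data is only ever stored in encrypted form, destroying the key renders every replica unrecoverable, wherever the CSP may have copied it. A minimal sketch, with key handling deliberately simplified:

```python
# Minimal crypto-shredding sketch: data stored only as ciphertext can be
# "destroyed" everywhere at once by destroying the key.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"record slated for end-of-life destruction")

# The CSP may hold many replicas of the ciphertext: backups, DR copies, caches.
# At destruction time, the owner purges the key instead of chasing every replica.
key = None  # in practice: delete the key from the HSM / key-management service

# Without the key, no replica of the ciphertext is recoverable.
```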

Gardner: Tari, a question for you. How does cloud computing change my risk profile? It's a general subject, but do you really reduce or lose risk control when you start doing cloud?

When the data is destroyed, it’s important, as a consumer, that you make sure that the data is really destroyed on the storage systems of your CSP.



Schreider: An interesting question to be sure, because in some cases your risk profile could be vastly improved, while in other cases it could be significantly diminished. If you find yourself no longer in a position to invest in a hardened data center, it may be more prudent for you to move your data to a CSP that already operates carrier-grade, Tier 1 infrastructure, where they have the ability to invest the tens of millions of dollars in a hardened facility that you wouldn't normally be able to invest yourself.

On the other hand, you may have a scenario where you're using smaller CSPs that don't necessarily have that same level of rigor. We always recommend, from a strategic perspective, that when you are looking at application deployment, you consider the application's risk profile, where best to place that application, and how it affects your overall threat posture.

Gardner: Lois, the next question is for you. How can HP help clients get started, as they determine how and when to implement cloud?

Lois Boliek: We offer a full lifecycle of cloud-related services and we can help clients get started on their transition to the cloud, no matter where they are in that process.

We have the Cloud Discovery Workshop. That's where we can help customers in a very interactive work session on all aspects and considerations of the cloud, and it results in a high-level strategy and a roadmap for moving forward.

Business/IT alignment

We also offer the Hybrid Delivery Strategy Services. That’s where we drill down into all the necessary components that you need to gain business and IT alignment, and it also results in a well-defined cloud service delivery model.

We also have some fast-start services. One of those is the CloudStart service, where we come in with a pre-integrated architecture to help speed up the deployment of the production-ready private cloud, and we can do that in less than 30 days.

We also offer a Cloud System Enablement service, and in this we can help fast track setting up the initial cloud service catalog development, metering, and reporting.

Gardner: Lois, I have another question here on products or the security issues. Does HP have the services to implement security in the cloud?

Boliek: Absolutely. We believe in building security into the cloud environment from the beginning through our architectures and our services. We offer something called HP Cloud Protection Program, and what we have done is extended the cloud service offerings that I've just mentioned by addressing the cloud security threats and vulnerabilities.

We always recommend that you consider its risk profile and where best to place that application and how it affects your overall threat posture.



We've also integrated a defense-in-depth approach to cloud infrastructure. We address the people, process, policies, and products involved, following the P5 Model that Tari covered, and this helps clients confidently and securely build out the hybrid cloud environment.

We have service modules that are available, such as the Cloud Protection Workshop. This is for deep-dive discussions on all the security aspects of cloud, and it results in a high-level cloud security strategy and next steps.

We offer the Cloud Protection Roadmap Service, where we can define the specific control recommendations, also based on our P5 Model, and a roadmap that is very customized and specific to our clients’ risk and compliance requirements.

We have a Foundation Service that is also like a fast start, specific to implementing the pre-integrated, hardened cloud infrastructure, and we mitigate the most common cloud security threats and vulnerabilities.

Then, for customers who require very specific custom security, we can do custom design and implementation. All these services are based on the Cloud Reference Architecture that Jan and Tari mentioned earlier, as well as extensive research that we do ahead of time in our Cloud Protection Research & Development Center, before going out to customers.

Gardner: Luis Buezo, a fairly large question, sort of a top-down one I guess. Not all levels of security would be appropriate for all applications or all data in all instances. So what are the security levels in the cloud that we should be aware of that we might be able to then align with the proper requirements for a specific activity?

Open question

Buezo: This is a very open question. Understanding the security level as the real capability to manage different threats or compliance needs, cloud computing has different possible service models, like IaaS, PaaS, or SaaS, and different deployment models -- public, private, community, or hybrid.

Regarding service models, the consumer has more potential risk and less control and flexibility in SaaS models, compared to PaaS and IaaS. But when you go to PaaS or IaaS, the consumer is responsible for implementing more security controls to achieve the security level that he requires.

Regarding deployment models, when you go to a public cloud, the consumer will be able to contract for the security level already furnished by the provider. If the consumer needs more capability to define specific security levels, he will need to go to community, private, or hybrid models.

My recommendation is that if you're looking to move to the cloud, the approach should be first to define assets for the cloud deployment and then evaluate it to know how sensitive this asset is. After this exercise, you'll be able to match the asset to potential cloud deployment models, understanding the implication of each one. At this stage, you should have an idea of the security level required to transition to the cloud.
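As a toy illustration of that matching exercise, a first-pass triage might map asset sensitivity to the deployment models worth evaluating. The categories and rules below are invented assumptions, not HP's methodology.

```python
# Toy first-pass triage: match asset sensitivity to candidate deployment
# models. Categories and rules are illustrative assumptions only.
def candidate_models(sensitivity: str) -> list:
    """Return the cloud deployment models worth evaluating for an asset."""
    rules = {
        "public":       ["public", "community", "hybrid", "private"],
        "internal":     ["community", "hybrid", "private"],
        "confidential": ["hybrid", "private"],
        "regulated":    ["private"],  # e.g. data with strict residency rules
    }
    return rules[sensitivity]

for asset, sensitivity in [("marketing site", "public"),
                           ("HR records", "regulated")]:
    print(asset, "->", candidate_models(sensitivity))
```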

Gardner: Jan De Clercq, our solution architect, next question should go to you, and it’s about CSPs. How can we as an organization and enterprise that consumes cloud services be sure that the CSP’s infrastructure remains secure?

If you're looking to move to the cloud, the approach should be first to define assets for the cloud deployment and then evaluate it to know how sensitive this asset is.



Clercq: It’s very important that, as a consumer during the contact negotiation phase with the CSP, you get complete insight into how the CSP secures its cloud infrastructure, how it protects your data, and how it shields the environments of different customers or tenants inside this cloud.

It’s also important that, as a cloud consumer, you establish a very clear service level agreements with your cloud provider, to agree on who does exactly what it comes down to security. This basically boils down to make sure that you know who takes care of things like infrastructure security controls and data protection controls.

This is not only about making sure that these controls are in place, but also about making sure that they are maintained, and maintained using proper security management and operations processes.

A third thing is that you also may want to consider monitoring tools that can cover the CSP infrastructure for checking things like availability of the service and for things like integrated security information and event management.

To check the quality of the CSP security controls, a good resource to get you started here is the questionnaire that’s provided by the CSA. You can download it from their website. It is titled the "Consensus Assessments Initiative Questionnaire."

Gardner: Tari, it's such a huge question about how to rate your CSP, and unfortunately, we don’t seem to have a rating agency or an insurance handicapper now to rate these on a scale of 1-5 stars. But I still want to get your input on what should I do to determine how good my service provider is when it comes to these security issues?

Incumbent on us

Schreider: I wish we did have a rating system, but unfortunately, it's still incumbent upon us to determine the veracity of the claims of security and continuity of the CSPs.

However, there are actually a number of accepted methods to gauge whether one's CSP is secure. Many organizations have had what's referred to as an attestation. Formerly, most people were familiar with SAS 70, which is now SSAE 16, or you can have an ISO 27000 attestation.

Basically, you have an independent attestation body, typically an auditing firm, that will come in and test the operational efficiency and design of your security program to ensure that whatever you have declared as your control schema, maybe ISO, NIST, CSA, is properly deployed.

However, there is a fairly significant caveat here. These attestations can be very narrowly scoped, and many CSPs will apply them to only a very narrow portion of their infrastructure, maybe not their entire facility, and maybe not even the application of which you're a customer.

Also, we've found that many CSPs and application-as-a-service providers don't even own their own data centers. Those are actually provided elsewhere, and there may also be other support mechanisms in place. In some cases, you may have to evaluate three attestations just to have a sense of security that you, or the CSP, have the right controls in place.

We strongly encourage organizations to add that nuance to make their policy manuals elastic, and resist creating all new security policies.



Gardner: And I suppose in our marketplace, there's also an element of self-regulation, because when things don’t go well, most people become aware of it and they will tend to share that information with the ecosystem that they are in.

Schreider: Absolutely.

Gardner: There's another question I'd like to direct to you, Tari. This is at an operational process level, and they are asking about their security policy manual. If they start to do more cloud activities -- private, public, or hybrid -- should they update or change their security policy manual and a little bit about how?

Schreider: Definitely. As I had mentioned before, one of the things you want to do is make your security policy manual extensible. Just like a cloud is elastic, you want to make sure that your policy manual is elastic as well.

Typically, one of the things missing from a conventional security policy manual is the location of the data. What you'll find is that it covers data classification, the types of assets, and maybe some standards, but it really doesn't cover the transborder triggering aspects.

We strongly encourage organizations to add that nuance to make their policy manuals elastic, and to resist creating all-new security policies that people have to learn, where you end up with two disparate programs to try to maintain.

Gardner: Well, we'll have to leave it there. I really want to thank our audience for joining us. I hope you found it as insightful and valuable as I did.

And I also thank our main expert guest, Tari Schreider, Chief Architect of HP Technology Consulting and IT Assurance Practice.

I'd furthermore like to thank our three other HP experts, Lois Boliek, World Wide Manager in the HP IT Assurance Program; Jan De Clercq, World Wide IT Solution Architect in the HP IT Assurance Program, and Luis Buezo, HP IT Assurance Program Lead for EMEA.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on best practices for protecting cloud computing implementations and their use.

Thanks again for listening, and come back next time.

Join the next Expert Chat presentation on May 15 on support automation best practices.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on the role of security in moving to the cloud and how sound security practices can make adoption easier. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Thursday, May 03, 2012

Ariba Network Helps Cox Enterprises Manage Procurement Across Six Different ERP Systems

Transcript of a sponsored BriefingsDirect podcast on how eProcurement helped Cox Enterprises get a better handle on indirect spend.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Ariba.

Dana Gardner: Hello and welcome to a special BriefingsDirect podcast series coming to you from the 2012 Ariba LIVE Conference in Las Vegas. We're here to explore the latest in cloud-based collaborative commerce and learn how innovative companies are tapping into the networked economy.

We'll see how they are improving their business productivity along with building far-reaching relationships with new partners and customers.

Our next innovator interview focuses on Cox Enterprises, a major communications, media, and automotive services company, with revenues of nearly $15 billion and more than 50,000 employees, and with major subsidiaries, including Cox Communications, Manheim, Cox Media Group, and AutoTrader.com.

We'll learn how Cox, through the Ariba Network, manages multiple ERP systems for an improved eProcurement strategy and has moved toward more efficient indirect spend efforts to improve ongoing operations and drive future growth.

To hear more about how they have done this, we're here with Brooke Krenn, the Senior Manager of Procurement Systems for Cox Enterprises, based in Atlanta. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Welcome to BriefingsDirect.

Brooke Krenn: Thanks, Dana. Great to be with you.

Gardner: I am glad you could join us. Let me ask you first about these multiple ERP systems. I think that's pretty common. A lot of organizations either have organically developed multiple systems for different groups or, for merger and acquisition reasons, have different ERP. How has that been a challenge, when it comes to procurement?

Krenn: We have six separate ERP systems. Cox is a very interesting company in that our business units are very diverse and very unique. Across four divisions and our holding company we have those six ERP systems.

So with that, obviously, there are a lot of challenges. There's not a lot of common ground, when it comes to purchasing. Across those six ERP systems we needed some way to drive consistency, as we focused on really capitalizing on our indirect spend across all the business units.

Gardner: Let’s hear a bit more about the scale of your operation as a very large company. Tell me about your position and the depth and breadth of the procurement activities that you are responsible for?

Procurement systems team

Krenn: My team is the Procurement Systems Team. We fall under supply chain at Cox Enterprises. I have a team of three, and we manage our eProcurement platform, through which we do about $50 million a year in POs, averaging about 1,500 POs a month. We also manage our P-Card program, which is about $130 million a year in spend, and our fuel card program, which is about $50 million a year.

Gardner: I briefly described what Cox is and does, but maybe you could fill that out a little bit. It’s a very large organization with a fairly diverse group of products and services.

Krenn: Our Cox Communications division provides cable, Internet, and telephone service all across the United States. We have Manheim, which is in the wholesale car industry. AutoTrader.com, which hopefully a lot of your listeners are familiar with or may have even used in the past, is an online forum for buying and selling used as well as new vehicles. And our Cox Media Group comprises TV stations, radio stations, and newspapers throughout the U.S.

Gardner: So with 50,000 employees, that’s a lot of indirect procurement to keep them productive and engaged. Back to the whole issue of procurement. What’s been your story? What have you been doing for the past few years, and why has that been important in the way in which you've used Ariba to accelerate your benefits?

Krenn: Historically, our spend, specifically the indirect spend, has been all over the place. We haven’t had a lot of visibility into that spend and haven’t had a consistent manner in which we purchased.

Ariba was one of the top contenders, simply because the user experience was most important to us, along with how quickly we could implement it.



We had an eProcurement solution for about 10 years. We were on that software for a decade, and it was just very dated. It wasn't supported very well. We knew it was time to make that change. Where we were in the economy, everyone was looking at the most logical places to save time and money and to become more efficient. Obviously, procurement was one of those areas where we could act very quickly.

We knew the first step was replacing the software that we did have. Immediately, Ariba was one of the top contenders as we looked for a new solution, simply because the user experience was most important to us, along with how quickly we could implement it.

Gardner: So you’re going from an on-premises software installed affair to now more of a software-as-a-service (SaaS) and cloud affair. Was that something that was difficult or something you were looking forward to?

Krenn: Moving to the cloud in an on-demand solution was great for us. Having the on-premises software in the past, any time there was an upgrade or an update, we had to be sure IT knew about it and we scheduled the time on a night or a weekend. We had to call on resources internally within the company. So it was very exciting for us to move to an on-demand solution and all of the technology that was available with that.

Gardner: Let’s hear more about what this has done for you, not just in terms of savings, but in terms of productivity and agility. How have the users adapted to this, and what has it brought to them in terms of a business benefit?

A great change

Krenn: For the users, it's been a great change, because now they consistently know there's one place to go. When they need to order office supplies, when they need to order something for their break room, when they need to order business cards, they know where to go. In all of our divisions and all of our locations, employees want to do the right thing. They want to purchase the right way. A lot of times they're just not sure of what to do.

So with this implementation of a new tool, we were able to really drive them in the right direction, and it was an easy solution for them. It was easy for us to implement, and it's been very easy for our end users and our employees to adopt.

Gardner: Has that, in fact, translated into other metrics of success that you could describe for us? Maybe they're hard numbers, like dollar savings, or maybe it's the ability to find better products that suit your constituents' needs when they're engaged in a new or interesting activity.

With this implementation of a new tool, we were able to really drive them in the right direction, and it was an easy solution for them.



Krenn: Probably one of the biggest wins for us has been just driving compliance against our contracts. We’re able to see very easily now when a location or a business unit within one of the divisions is purchasing off-contract or when they're not utilizing one of our preferred or negotiated suppliers. That's probably been the biggest win for us.

Gardner: How often does that happen? Have you been able to effectively reduce how often that happens? And what does that mean when you can get everyone on the same page?

Krenn: We now have the visibility, within our P2P tool and also within our spend-management tool, to see very quickly where this spend is taking place, and we're able to reach out directly to those locations or employees that are purchasing off-contract. Obviously, the more purchasing power we have and the more spend we drive to these contracts, the better our pricing is going to be going forward.

Gardner: How about folks who might be thinking about a different eProcurement strategy, recognizing that they also have multiple ERP systems? Tell us a bit about what you suggest, particularly how you bridged those multiple ERP systems with this new centralized strategy.

Unconventional

Krenn: We went about implementing our new P2P solution a bit unconventionally, you could say. About 98 percent of our transactions are actually on a supplier card -- a P-Card model, which has just been tremendously successful for us. With that, we didn't have to integrate directly into our six separate ERPs because our payment method is with that supplier card.

Ease of implementation was one of the biggest wins. Also with that is the ease of use for the end user. There's no reconciliation for them at the end of the month. We’re taking care of all of that GL coding information, all of the approvals, upfront.

The supplier card model, again, has been great on the end user side as well as on the AP reconciliation side.

Gardner: We’ve been talking about how Cox Enterprises, through the Ariba Network, has gained insight and control over its procurement and instituted a strategic approach to eProcurement with their indirect spend efforts.

Ease of implementation was one of the biggest wins. Also with that is the ease of use for the end user.



I'd like to thank our guest. We’ve been here with Brooke Krenn. She is the Senior Manager of Procurement Systems at Cox Enterprises. Thanks so much.

Krenn: Thanks so much, Dana.

Gardner: And thanks to our audience for joining this special podcast coming to you from the 2012 Ariba LIVE Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Ariba-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Ariba.

Transcript of a sponsored BriefingsDirect podcast on how eProcurement helped Cox Enterprises get a better handle on indirect spend. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Thursday, April 26, 2012

Case Study: Strategic Approach to Disaster Recovery and Data Lifecycle Management Pays Off for Australia's SAI Global

Transcript of a sponsored podcast on how compliance services provider SAI Global successfully implemented a disaster recovery project with tools from VMware.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how business standards and compliance services provider SAI Global has benefited from a strategic view of IT-enabled disaster recovery (DR).

We'll see how SAI Global has brought advanced backup and DR best practices into play for its users and customers. We will further learn how this has not only provided business continuity assurance, but it has also provided beneficial data lifecycle management and virtualization efficiency improvement.

Here to share more detail on how standardizing DR has helped improve many aspects of SAI Global’s business reliability, please join me now in welcoming Mark Iveli, IT System Engineer at SAI Global, based in Sydney, Australia. Welcome to BriefingsDirect, Mark. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Mark Iveli: Hi, Dana. Thanks for having me.

Gardner: My pleasure. Let’s start from a high level. What do you think is different about DR, the requirements for doing good DR now versus five years ago?

Iveli: At SAI Global we had a number of business units that all had different strategies for their DR and different timings and mechanisms to report on it.

Through the use of VMware Site Recovery Manager (SRM) in the DR project, we've been able to centralize all of the DR processes, provide consistent reporting, and be able to schedule these business units to do all of their testing in parallel with each other.

So we can stage a DR session, so to speak, within the business, and just run through the process for them and give them their reports at the end of it.

Gardner: It sounds like a lot of other aspects of IT. Things had been done differently within silos, and at some point, it became much more efficient, in a managed capacity, to do this with a strategic perspective, a systems-of-record perspective. Does that make sense?

Complete review

Iveli: Absolutely. The initiative for DR started about 18 months ago with our board, and it was a directive to improve the way we had been doing things. That meant a complete review of our processes and documentation.

When we started to get into DR, we handled it from an IT point of view and it was very much like an iceberg. We looked at the technology and said, "This is what we need from a technology point of view." As we started to get further into the journey, we realized that there was so much more that we were overlooking.

We were working with the businesses to go through what they had, what they didn’t have, what we needed from them to make sure that we could deliver what they needed. Then we started to realize it was a bigger project.

The first 12 months of this journey have been all about cleaning up, getting our documentation up to spec, and making sure that every business unit understood and was able to articulate its environment well. Then we brought all of that together so that we could ask what technology would encapsulate all of these processes and documentation to deliver what the business needs, namely our recovery point objective (RPO) and our recovery time objective (RTO).

Gardner: All right. Before we delve a bit deeper into what DR is doing for you, and maybe tease out a bit more of this greater-than-the-sum-of-its-parts idea, tell us about SAI Global and your responsibilities, and specifically how you got involved with this particular project.

When we started to get into DR, we handled it from an IT point of view and it was very much like an iceberg.



Iveli: I'm a systems engineer with SAI Global, and I've been with the company for three years. When the DR project started to gather some momentum, I asked to be a significant part of the project. I got the nod and was seconded to the DR project team because of my knowledge of VMware.

That's how I got into the DR project. I've spent a lot of time now working with SRM, and I've become a lot less operational. I've had a chance to be in front of the business and do a bit of IT's business-analysis work with these business units: "This is what your application is doing, and this is what we can see it doing through the use of Application Discovery Manager. Is this what you know your applications to do?"

We've worked through those rough edges to bring together their documentation. They would put it together, we would review it, we would all then sit around and agree on it, and put the information into the DR plans.

From the documentation side of things, I've worked with the project manager and our DR manager to say, "This is how we need to line up our script. This is how we need to create our protection grid. And this is how the inventory mappings are all going to work from a technical point in SRM."

Gardner: Just briefly, what is SAI Global about? Are you in the business of helping people manage their standards and provide compliance services?

Umbrella company

Iveli: SAI Global is an umbrella company. We have three to four main areas of interest. The first one, which we're probably best known for, is our Five Ticks brand, and that's the ASIS standards. The publication, the collection, and the customization to your business are all done through the publishing section of the business.

That then flows into an assurance side of the business, which goes out and does auditing, training, and certification against the standards that we sell.

We continue to buy new companies, and part of the acquisition trail that we've been on has been to buy some compliance businesses. That's where we provide governance, risk, and compliance (GRC) services through the use of Board Manager, GRC Manager, Cintellate, and, in the U.S., Integrity 360.

Finally, last year, we acquired a company that deals solely in property settlement. They're quite a significant section of the business, dealing a lot with banks and conveyancing firms in handling property settlements.

So we're a little bit diverse, and all of those business sections have their own IT requirements.

Gardner: I suppose, like many businesses, your brand is super important. The trust associated with your performance is something you take seriously. So DR, backup and recovery, and business continuity are top-line issues for you.

Because of what we do, especially around the property settlement and interactions with the banks, DR is critical for us.



Is there anything about what you've been doing as a company that you think makes DR specifically important for you, or is this just generally something you think all businesses really need to master?

Iveli: From SAI Global’s point of view, because of what we do, especially around the property settlement and interactions with the banks, DR is critical for us.

Our publishing business feels that its website needs to be available at five nines. When we showed them what DR is capable of doing, they really jumped on board and supported it. They made DR a high priority.

As far as businesses go, everyone needs to be planning for this. I read an article recently that said something like 85 percent of businesses in the Asia-Pacific region don't have a proper DR strategy in place. With the events that have happened here in Australia recently with the floods, and when you look at the New Zealand earthquakes, you wonder where businesses are putting DR and how much importance they place on it. It's probably only going to take a significant event before they change their minds.

Gardner: I was really intrigued, Mark, when you said what DR is capable of doing. Do you feel that there is a misperception, perhaps an under-appreciation of what DR is? What is this larger whole that you're alluding to that you had to inform others in your organization about?

Process in place

Iveli: The larger whole was just that these business units had a process in place, but it was an older process and a lot of the process was designed around a physical environment.

With SAI Global being almost 100 percent virtual, moving them into a virtual space opened their minds to what was possible. When we sit down with the business units and say, "We're going to do this DR test," they ask if it will impact production. No, it won't. How does it happen? "Well, we're going to do this, this, and this in the background, and you'll actually have access to your application the way it is today; it's just going to be isolated and fenced off."

They say, "This is what we've been waiting for." We can actually do this sort of stuff. They're starting to see and ask, "Can we use this to test the next version of the applications and can we test this to kind of map out our upgrade path?"

We're starting to move now into a slightly different world, but it has been the catalyst of DR that’s enabled them to start thinking in these new ways, which they weren’t able to do before.

Gardner: So being able to completely switch over and recover with very little interruption in terms of the testing, with very little downtime or loss, the opportunity then is to say, "What else can we do with this capability?"

It has been the catalyst of DR that’s enabled them to start thinking in these new ways, which they weren’t able to do before.



I have heard about people using it for migrations and for other opportunities to literally move their entire infrastructure, their virtual assets. Is that the sort of thing you're getting at -- that this is larger than DR? It’s really about being able to control, manage, and move your assets?

Iveli: Absolutely. With this new process, we've taken the approach of baby steps, and we're just looking to get some operational maturity into the environment first, before we start to push the boundaries and do things like disaster avoidance.

Having the ability to just bring these environments across in a state that’s identical to production is eye-opening for them. Where the business wants to take it is the next challenge, and that’s probably how do we take our DR plan to version 2.0.

We need to start to work with the likes of VMware and ask what our options are now. We have this in place, people are liking it, but they want to take it into a more highly available solution. What do we do next? Use vCloud Director? Do we need to get our sites in an active/active pairing?

Whatever the next technology step is for us, that's where the business is now starting to think ahead. That's nice from an alignment point of view.

Gardner: Now, you mentioned that your organization is almost 100 percent virtualized. It’s my understanding from a lot of users as well that being highly virtualized provides an advantage and benefit when heading to DR activities. Those DR maturation approaches put you in a position to further leverage virtualization. Is there sort of a virtuous adoption pattern, when you combine modern DR with widespread virtualization?

Outside the box

Iveli: Because all of a sudden your machines are just files on a datastore somewhere, and you can move them around. As the physical technologies continue to advance -- the speed of our networks, the speed of the storage environments, metro clustering, long-haul replication -- they allow businesses to think outside the box and look at ways to provide faster recovery, higher availability, and more elastic environments.

You're not pinned down to just one data center in Sydney. You could have a data center in Sydney and a data center in New Zealand, for instance, and keep both of those sites online and in sync. That's a couple of years down the track for our business, but it's a real possibility through the use of more virtualization technology.

Gardner: Perhaps another way to look at it is that your investment in getting to a high level of server virtualization pays dividends when you move to advanced DR. Is that fair?

Iveli: Yes, that’s a fair comment, a fair way to sum it up.

Gardner: Tell us a little bit about your use of VMware vCenter SRM. What version are you using now and have you been progressing along rapidly with that?

Iveli: We've installed SRM 4.1, and the installation was handled by an outside company, VCPro. They were engaged to do the installation and to help us get the design right from a technical point of view.

Trying to make it a daily operational activity is where the biggest challenge is, because the implementation was done in a project methodology.



Trying to make it a daily operational activity is where the biggest challenge is, because the implementation was done with a project methodology. Handing it across to the operational teams to make it a daily task is where we're seeing some challenges. A new contract admin comes on board and doesn't quite understand the environment. So they put a machine in the wrong spot, or storage gets provisioned that isn't being replicated even though it was designed for a P1 recovery ranking.

That’s what my role is now -- keeping the SRM environment tuned and in line with what the business needs. That’s where we're at with SRM.

Gardner: Certainly, the constant reliability and availability of all your assets, regardless of external circumstances, is the number one metric. But are there any other metrics from your journey, as you called it, that indicate whether you've done this right and what it pays back? Reliability, certainly, but what else measures success?

Iveli: That's an interesting question. When I put this to the DR team yesterday, the only real measurements that we have had have been the RPO and the RTO. As long as all the data that we needed was being replicated inside the 15-minute timeframe, that was one of our measurements.

Timely manner

Through the use of HP Enterprise Virtual Array (EVA) monitoring, we've been able to see and ensure that our DR tunnels are being replicated correctly and in a timely manner.

The other one was the RTO, which we've been able to measure from the SRM report showing us the time it took to fail over these machines. So we're very confident that we can meet both our RPO and RTO through the use of these metrics.
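As a rough illustration of that first measurement (and not SAI Global's actual tooling), the sketch below checks replication lag against a 15-minute RPO: feed it the last-sync timestamp each replicated volume reports -- however a given array or monitoring layer exposes that -- and it flags anything outside the window. The volume names and timestamps are invented.

```python
# Hypothetical RPO check: flag any replicated volume whose last successful
# sync is older than the 15-minute recovery point objective.
from datetime import datetime, timedelta

RPO = timedelta(minutes=15)

def rpo_breaches(last_sync_by_volume, now=None, rpo=RPO):
    """Return (volume, lag) pairs for replicas outside the RPO window."""
    now = now or datetime.now()
    return [(vol, now - ts) for vol, ts in last_sync_by_volume.items()
            if now - ts > rpo]

# Invented example: two volumes inside the window, one 22 minutes behind.
now = datetime(2012, 4, 26, 10, 0)
last_sync = {
    "p1-sql":   datetime(2012, 4, 26, 9, 52),
    "p1-web":   datetime(2012, 4, 26, 9, 58),
    "p2-files": datetime(2012, 4, 26, 9, 38),
}
for vol, lag in rpo_breaches(last_sync, now=now):
    print(f"RPO breach on {vol}: replica is {lag} behind")  # p2-files only
```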

Gardner: Any advice for those listening in who are beginning their journey? For those folks that are recognizing the risks and seeing these larger benefits, these more strategic benefits, how would you encourage them to begin their journey, what advice might you offer?

Iveli: The advice would be to get hired guns in. With DR, you're not going to be able to do everything yourself. So spend a little bit more money and make sure you get consultants in, like VCPro. Without those guys, we probably would have struggled to make sure that our design was right. They ensured that we had best practices in our designs.

Before you get into DR, do your homework. Make sure that your production environment is pristine. Clean it up. Make sure that you don’t have anything in there that’s wasting your resources.

Come in with a strong business case for DR. Make sure that you've got everybody on board and that you have the support of the business.

Make sure that your production environment is pristine. Clean it up. Make sure that you don’t have anything in there that’s wasting your resources.



When you get into DR, make sure that you secure dedicated resources for it. Don't just rely on people coming in and out of the project. Make sure you have lead people assigned as the resource and that they are fully engaged in the design and implementation aspects.

And as you progress with DR, incorporate it as early as you can into your everyday IT operations. Because we held it back from our operations teams and just handed it over at the end, having them manage the hardware, the ESX layer, and the logical layers of the environment, they struggled to get their heads around it: what was what, where should this go, where should that go.

And once it’s in place, celebrate. It can be a long haul. It can be quite a trying time. So when you finally get it done, make sure that you celebrate it.

Gardner: And perhaps a higher degree of peace of mind that goes with that.

Iveli: Well, you'll find out when you get through it how much easier this makes your life and how much better you sleep at night.

Gardner: Well, great. We've been talking about business standards and compliance provider, SAI Global, and how they have benefited from a strategic view of IT-enabled DR processes and methods.

I'd like to thank our guest, Mark Iveli. He is IT System Engineer at SAI Global. I appreciate your time, and it was very interesting. Thank you, Mark.

Iveli: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks also to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast on how compliance services provider SAI Global successfully implemented a disaster recovery project with tools from VMware.
Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Monday, April 16, 2012

Virtualization Simplifies Disaster Recovery for Insurance Broker Myron Steves While Delivering Efficiency and Agility Gains Too

Transcript of a sponsored BriefingsDirect podcast on how small-and-medium businesses can improve disaster recovery through virtualization, while reaping additional benefits.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

We now present a sponsored podcast discussion on how insurance wholesaler Myron Steves & Co. developed and implemented an impressive IT disaster recovery (DR) strategy.

We'll see how small business Myron Steves made a bold choice to go essentially 100 percent server virtualized in 90 days. That then set the stage for a faster, cheaper, and more robust DR capability. It also helped them improve their desktop-virtualization delivery, another important aspect of maintaining constant business continuity.

Based in Houston, Texas, and supporting some 3,000 independent insurance agencies in that region, with many protected properties in the active hurricane zone along the Gulf of Mexico, Myron Steves needs to have all resources up and available if and when severe storms strike. To help those affected, employees need to be operational from home, if necessary, when a natural disaster occurs. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We'll learn how the IT executives at Myron Steves adopted an advanced DR and virtualization approach to ensure that it can help its customers -- regardless of the circumstances. At the same time, they also set themselves up for improved IT efficiency and agility for years to come.

Here to share more detail on how a small- to medium-sized business (SMB) can completely modernize DR for far better responsiveness is Tim Moudry, Associate Director of IT at Myron Steves & Co. Welcome, Tim.

Tim Moudry: Hello. How are you doing, Dana?

Gardner: I am doing great. Thanks for being with us. We're also here with William Chambers, IT Operations Manager at Myron Steves. And welcome to you also, William.

William Chambers: Thanks. Hello. How are you?

Gardner: We're doing well. Tim, let me throw a first question out at you. Hurricane Ike, back in 2008, was the second-costliest hurricane ever to make landfall in the U.S. Fortunately, it was a near miss for you and your data center, but as I understand it, this was a wake-up call for you on what your DR approach lacked.

What was the biggest lesson you learned from that particular incident, and what spurred you on then to make some changes?

Moudry: Before Hurricane Ike hit, William and I saw an issue and developed a project that we presented to our executive committee. Then, when Hurricane Ike came about, which was during the time that we were presenting this, it was an easy sell.

When Hurricane Ike came, we were on another DR system. We were testing it, and it was really cumbersome. We tried to get servers up and running. We stayed there for one whole day trying to recover and never even got the data center recovered.

Easy sell


When we came to VMware, we made a proposal to our executive committee, and it was an easy sell. We did the whole project for the price of one year of our old DR system.

Gardner: What was your older system? Were you doing it on an outsourced basis? How did you do it?

Moudry: We were with another company, and they gave us facilities to recover our data. They were also doing our backups.

We went to that site to recover systems and we had a hard time recovering anything. So William and I were chatting and thinking that there's got to be a better way. That’s when we started testing a lot of the other virtualization software. We came to VMware, and it was just so easy to deploy.

William was the one who did all that, and he can say more about it later, but we came to VMware and it was just that much easier.

Gardner: Tell me about the requirements. What was it that you wanted to do differently or better, after recognizing that you got away with Ike, but things may not go so well the next time? William, what were your top concerns about change?

Chambers: Our top concerns were just avoiding what happened during Ike. In the building we're in in Houston, we were without power for about a week. So that was the number one driver for virtualization.

Number two was just the amount of hardware. Somebody actually called us and said, "Can you take these servers somewhere else and plug them in and make them run?" Our response was no.

Moudry: We were running 70 servers at the time.

Chambers: They were the physical servers.

Moudry: Yeah, so that was about four racks of servers.

Chambers: That was the lead-in to virtualization. If we wanted everything to be mobile like that, we had to go a different route.

Gardner: So you had sort of a two-pronged strategy. One was to improve your DR capabilities, but embracing virtualization as a means to do that also set you up for some other benefits. How did that work? Was there a nice synergy between these that played off one another?

Chambers: Once you get into it, you think, "Well, okay, this is going to make us mobile, and we'll be able to recover somewhere else quicker." But then you start seeing other features you can use that benefit what you're doing at a smaller physical size. There's the mobility of the data itself, if you've got storage in place that will do it for you. Recovery times were cut down to nothing.

Simpler to manage


There was ease of backups, everything that you have to do on a daily maintenance schedule. It just made everything simpler to manage, faster to manage, and so on.

Gardner: I talk to large enterprises a lot and I hear about issues when they are dealing with 10,000 seats, but you are a smaller enterprise, about 200 employees, is that right?

Moudry: Yeah, about 200.

Gardner: And so for you as an SMB, what requirements were involved? You obviously don't have unlimited resources and you don't have a huge IT staff. What was an important aspect from that vantage point?

Chambers: It’s probably what any other IT shop wants. They want stability, up-time, manageability, and flexibility. That’s what any IT shop would want, but we're a small shop. So we had to do that with fewer resources than some of the bigger Exxons and stuff like that.

Moudry: And they don’t want it to cost an arm and a leg either.

Gardner: For the benefit of our listeners, let’s talk a little bit about Myron Steves. Tell us about the company, what you do, and why having availability of your phones, your email, and all of your systems is so important to what you do for your customers.

Moudry: We're an insurance broker. We're not a carrier. We are between carriers and agents. With our people being on the phone, up-time is essential, because they're on the phone quoting all the time. That means if we can’t answer our phones, the insurance agent down the street is going to go pick up the phone, and they're going to get the business somewhere else.

Now, we're trying to get more green in the industry, and we are trying to print less paper



Also, we do have claims. We don't process all claims, but we do some claims, mainly for our stuff that's on the coast. After a hurricane, that’s when people are going to want that.

Now, we're trying to get more green in the industry, and we're trying to print less paper. That means we're putting the policies up on the website as a PDF or something like that. Most likely, when they write the policy, they're not going to download that policy and keep it. It's just human nature. They're going to say, "They've got it up there on the Web."

We have to be up all the time. When a disaster strikes, they're going to say, "I need to get my policy," and they're going to want to go to our website to download that policy, and we have to be up. It's the worst possible time, I guess.

Chambers: And not many people are going to pack their paper policy when they evacuate or something like that.

Gardner: So the phones are essential. I also talk with a lot of companies and I ask them, which applications they choose to virtualize first. They have lots of different rationales for that, but you guys just went kit and caboodle. Tell me about the apps that are important to you and why you went 100 percent virtualized in such a short time?

SAN storage

Chambers: We did that because we've got applications running on our servers -- things like rating applications, email, our core applications. A while back, we separated the data volumes from the physical server itself. So the data volumes are stored on a storage area network (SAN) that we access through iSCSI.

That made it so easy for us to do a physical-to-virtual (P2V) conversion on the physical server. Then in the evenings, during our maintenance period, we shut that physical server down and brought up the virtual one connected to the SAN, and we were good. That's how we got through it so quickly.
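To give a flavor of that evening cutover's final step, here is a sketch using pyVmomi, VMware's Python SDK, offered purely as an illustration under stated assumptions -- the physical box is already shut down, the P2V'd VM already exists, and the vCenter name, credentials, and VM name are placeholders -- not as the tooling Myron Steves actually used.

```python
# Sketch: after the physical server is shut down, power on its P2V'd twin.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def power_on_replacement(vcenter, user, pwd, vm_name):
    ctx = ssl._create_unverified_context()  # lab-only: skips cert checks
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == vm_name)
        if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(vm.PowerOnVM_Task())  # bring the virtual copy up
    finally:
        Disconnect(si)

# power_on_replacement("vcenter.example.local", "admin", "secret", "app01-vm")
```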

Gardner: So having taken that step of managing your data first, I also understand you had some virtual desktop activity go on there earlier. That must have given you some experience and insights into virtualization as well.

Chambers: Yeah, it did.

Moudry: William moved us to VMware first, and after we saw how well VMware worked, we tried out VMware View, and it was just a no-brainer, because of the issues we had before with Citrix and the way Citrix works: one session affects all the others. That's where VMware shines, because everybody is on their own independent session.

Gardner: I notice that you're also a Microsoft shop. Did you look at their virtualization or DR? You mentioned that Citrix didn’t work out for you. How come you didn’t go with Microsoft?

Then he downloaded the free version of VMware and tried the same thing on that. We got it up in two or three days.



Chambers: We looked at one of their products first. We've used the Virtual PC and Virtual Server products. Once you start looking at and evaluating theirs, it's a little more difficult to set up. It runs well, but at that time, I believe it was 2008, they didn't have anything like vCenter Site Recovery Manager (SRM) that I could find. It was a bit slower. All around, the product just wasn't as good as the VMware product.

Moudry: I remember when William was loading it. I think he spent about 30 days loading Microsoft, and he got a couple of machines running on it -- probably about two or three machines on each host. I thought, "Man, this is pretty cool." But then he downloaded the free version of VMware and tried the same thing on that. We got it up in two or three days?

Chambers: I think it was three days to get the hosts loaded and vCenter and all the products in, and then it was great.

Moudry: Then he said that it was a little bit more expensive, but we weighed that against the cost of all the hardware we would have had to buy with Microsoft. He loaded the VMware and put about 10 VMs on one host.

Chambers: At that time, yeah.

Increased performance


Moudry: Yeah, it was running great. It was awesome. I couldn't believe that we could get that much performance from one machine. You'd think that running 10 servers on 10 machines would give you the most performance. I couldn't believe that those 10 servers ran just as fast on one server as they did on 10.

Chambers: That was another key benefit. The footprint of ESXi was somewhat smaller than Microsoft's.

Moudry: It used the memory so much more efficiently.

Gardner: So these are the things that are super important to SMBs: a free version to try, ease of installation, a higher degree of automation, particularly across multiple products, and then that all-important footprint -- the cost of hardware and the maintenance and skills that go along with it. That sounds like a pretty compelling case for SMB choice.

Before we move on, you mentioned vSphere, vCenter Site Recovery Manager, and View. Is that it? Are you up to the latest versions of those? What do you actually have in place and running?

Chambers: We've got both in production right now, vCenter 4.1 and vCenter 5.0. We're migrating from 4.1 to 5.0. Instead of doing the traditional in-place upgrade, we've set it up to take a couple of hosts out of the production environment, build them new from scratch, and then just migrate VMs to them in the server environment.

It went by so fast that it just happened that way. We were ahead of schedule on our time-frames and ahead on all of our budget numbers.



It's the same thing with the View environment. We’ve got enough hosts so we can take a couple out, build the new environment, and then just start migrating users to it.
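A hedged sketch of that evacuate-and-rebuild pattern: in a DRS cluster set to fully automated, asking a host to enter maintenance mode makes vCenter vMotion its running VMs onto the remaining hosts, after which the empty host can be rebuilt fresh on the new version. Again pyVmomi with placeholder names, as an illustration rather than the team's actual scripts.

```python
# Sketch: evacuate one host so it can be rebuilt on the new version.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def evacuate_host(vcenter, user, pwd, host_name):
    ctx = ssl._create_unverified_context()  # lab-only
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == host_name)
        # In a fully automated DRS cluster this blocks until every running
        # VM has been vMotioned off the host.
        WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    finally:
        Disconnect(si)
```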

Gardner: As I understand, you went to 99.999 percent virtualization in three months, is that correct?

Chambers: Yes.

Gardner: Was that your time-table, or did that happen faster than you expected?

Chambers: It happened much quicker than we thought. Once we did a few of the conversions of the physical servers that we had, it went by so fast that it just happened that way. We were ahead of schedule on our timeframes and ahead on all of our budget numbers. Once we got everything in our physical production environment virtualized, we could start building new virtual servers to replace the ones that we had converted, just for better performance.

Gardner: So that's where you can bring in more of those green elements, blades and so forth, which you mentioned is an important angle here. Of course you're doing this for DR, but the process of moving from physical to virtual can be challenging for some folks. There are disruptions along the way. Did any of your workers seem put out, or were you able to do this without too much disruption in terms of the migration process?

Without disruption

Chambers: We were able to do it without disruption, and that was one of the better things that happened. We could convert a physical server during the day, while people were still using it, or create that VM for it. Then, at night, we took the physical down and brought the virtual up, and they never knew it.

Gardner: So this is an instance where being an SMB works in your favor, because a large organization has to flip the switch on massive data centers. It's a little bit more involved. Sometimes weekends or even weeks are involved. So that’s good.

How about some help? Did you have any assistance in terms of a systems integrator, professional services, or anything along those lines?

Chambers: For the things we've built here, we like to have other people come in, look at them, and make sure we did them properly. So we'll have an evaluation after we build it and get everything in place.

Gardner: It sounds like you’re pretty complete though. That’s impressive. Another thing that I hear in the market is that when people make this move to virtualization and then they bring in the full DR capabilities, they see sort of a light bulb go on. "Wow. I can move my organization around, not just physically but I have more choices."

We’re going from a DR model to a high-availability business continuity, just to make sure everything is up all the time.



Some people are calling this cloud, as they’re able to move things around and think about a hybrid model, where they have some on their premises or in their own control, and then they outsource in some fashion to others. Now that you've done this, has this opened your eyes to some other possibilities, and what does that mean for you as an IT organization?

Chambers: It did exactly that. We're going from a DR model to a high-availability business-continuity model, just to make sure everything is up all the time.

Moudry: That's our next project. We're taking what we did in the past and going to the next level, because right now we have it set up so that we have to fail over. We're doing SAN replication, and we have to do a failover to another site.

William is trying to get that to more of a high-availability model, where we just bring it down here and bring it up there, with a lot less downtime. So we're working on phase two of the process now.

Gardner: All right. When you say here and there, I think you're talking about Houston and then Austin. Are those your two sites?

Moving to colos


Moudry: Right now it's Houston and San Antonio, but we're in the process of moving all of our equipment to colos, and we're going to be in Phoenix and Houston. So all the infrastructure will be in colos in Houston and Phoenix.

Gardner: So that's yet another layer of protection: wider geographic spread and reduced risk in general. Let's take a moment to look at what you've done and see in a bit more detail what it's gotten you. On return on investment (ROI), do you have any sense, having gone through this, of whether what you're doing now has covered the cost of doing it in the first place?

Moudry: We spent about $350,000 a year on our past DR solution. We didn't renew that, and the VMware DR paid for itself within the year.

Gardner: So you were able to recover your cost pretty quickly, and then you’ve got ongoing lower costs?

Moudry: Well, we are not buying equipment like we used to. We had 70 servers and four racks. It compressed down to one rack. How many blades are we running, William?

We're working with automation. We're getting less of a footprint for our employees. You just don’t hire as many.



Chambers: We're running 12 blades, and the per-year maintenance cost of what we have now, compared to every server that we had before, is 10 percent of what it was.

Gardner: I suppose this all opens up more capacity, so that you can add on more data and more employees. You can grow, but without necessarily running out of capacity. So that's another benefit.

Moudry: We can probably do that, if we needed employees, but we're working with automation. We're getting less of a footprint for our employees. You just don’t hire as many.

Gardner: As you pursue colos, then you’ve got somebody else. They can worry about the air-conditioning, protection, security, and so forth. So that’s a little less burden for you.

Moudry: That’s the whole idea, for sure.

Gardner: How about some other metrics of success? Has this given you some agility now? Maybe your business folks come down and say, "We'd like you to run a different application," or, "We're looking to do something additional to what we have in the past." You can probably adapt to that pretty quickly.

Copying the template

Moudry: Making new servers is nothing. William has a template. He just copies it and renames it.

Chambers: The deployment of new ones is 20 minutes. Then, we’ve got our development people who come down and say, "I need a server just like the production server to do some testing on before we move that into production." That takes 10 minutes. All I have to do is clone that production server and set it up for them to use for development. It’s so fast and easy that they can get their work done much quicker.

Moudry: Rather than loading the Windows disk and having to load a server and get it all patched up.

Chambers: It gives you a like environment. In the past, when they tested on a test server you built, it wasn't exactly the same as the production server. They could have bugs that they didn't even know about yet. This just cuts down the development time a lot.
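For a sense of how small that 10-to-20-minute deployment task is in code, here is a pyVmomi sketch that clones an existing production VM (or template) to a new name for the developers. The empty RelocateSpec keeps the clone alongside the source with default placement; all names are placeholders, and this is an illustration, not Myron Steves' actual tooling.

```python
# Sketch: clone a production VM so developers get a like-for-like copy.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def clone_vm(vcenter, user, pwd, source_name, new_name):
    ctx = ssl._create_unverified_context()  # lab-only
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        src = next(v for v in view.view if v.name == source_name)
        spec = vim.vm.CloneSpec(
            location=vim.vm.RelocateSpec(),  # same host/datastore defaults
            powerOn=False)                   # dev boots it when ready
        WaitForTask(src.CloneVM_Task(folder=src.parent, name=new_name,
                                     spec=spec))
    finally:
        Disconnect(si)

# clone_vm("vcenter.example.local", "admin", "secret", "prod-app", "dev-app")
```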

Gardner: And so you're able to say yes, instead of, "Get in line behind everybody else." That’s a nice thing to do.

Chambers: Yes.

Gardner: Any advice for folks who are looking at the same type of direction, higher virtualization, gaining the benefits of DR’s result and then perhaps having more of that agility and flexibility. What might you have learned in hindsight that you could share with some other folks?

We’ve got a lot of people working at home now, just because of the View environment and things like that.



Chambers: We've attended several conferences and forums. I think people are approaching it with more caution. They want to get into virtualization, but they're just not sure how it runs.

If you're going to use it, get in and start using it on a small basis. Do a proof of concept, check performance, do all the due diligence that you need, and get into it. It will really pay off in the end.

Moudry: Have a change-control system that monitors what you change. When we first went over, William was testing out the VMs, and I couldn't believe, as I was saying earlier, how fast it was. We have people on the phones quoting insurance. They have to have the speed. If it hesitates, the customer on the phone takes longer to give our people the information, our people have a hard time quoting it, and we're going to lose the business.

When William put some of these packages over to the VM software, it was not only running as fast, it was running faster on the VM than it was on a physical box. I couldn't believe how fast it was.

Chambers: And there was another thing that we saw. We’ve got a lot of people working at home now, just because of the View environment and things like that. I think we’ve kind of neglected our inside people, because they'd rather work in a View environment, because it's so much faster than sitting on a local desktop.

Backbone speed

Moudry: Well, with View, everything is on the chassis itself, so it's all backbone speed. When a person is working in View, he's working right next to the servers, rather than going through Cat 5 cable and switches. He's on the backbone.

When somebody works at home, they're at lightning speed. Upstairs is a ghost town now, because everybody wants to work from home. That's part of our DR also. The model is, "We have a disaster here. You go work from home." That means we don't have to put people into offices anywhere, and with Voice over IP, it's like their own call center. They just call from home.

Gardner: I hope it never comes to this, but if there is a natural disaster type of issue, they could just pick up and drive 100 miles to where it's good. They’re up and running and they’ve got a mobile office.

Moudry: The way we did it, if they want to go 100 miles and check into a hotel, they can work from the hotel. That's no problem.

Gardner: Let's look at the unintended future consequences that sometimes kick in with this. I've heard from other folks that, with these View desktops, you're likely to see some compliance and security benefits and better control over data. Any metrics or payback along those lines?

There is no need for anybody to take our data out of this data center, because they can work from View anywhere they want to.



Moudry: We were just going over some insurance policies and the like for digital data protection. One of the biggest problems they mentioned is employees putting data on laptops, and then the laptop goes away, gets stolen or whatever. There's no need for anybody to take our data out of this data center, because they can work from View anywhere they want. Anywhere in the world, they can work from View. There's no reason to take the data anywhere. So that's a security benefit.

Chambers: They can work from different devices now, too. We've got laptops out there, iPads, different types of mobile devices, and it's all secure.

Gardner: Any other future directions that you could share with us? You've told us quite a bit about what your plans are, colos and further data center locations, perhaps moving more towards mobile device support. Did we miss anything? What's the next step?

vMotion between sites


Moudry: As we said before, even with the colos we're not able to vMotion between sites yet, but we're kind of waiting for VMware to improve that a little bit. That will probably come down the road. The next thing I'd want is vMotion between sites.

Gardner: And why is that important to you?

Moudry: Well, because that's true high availability. You just vMotion all your stuff to the other side and nobody even knows.

We’ve vMotioned servers between the hosts, and nobody even knows they moved. It's up all the time. Nobody even knows that we changed hardware on them. So that’s a great thing.
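Within a site, the move Moudry describes is a single API call; cross-site live migration was the piece still missing at the time. Here is a pyVmomi sketch with placeholder names, offered only as an illustration of the mechanism, not as the company's scripts.

```python
# Sketch: live-migrate a running VM to another host; users never notice.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def vmotion(vcenter, user, pwd, vm_name, target_host):
    ctx = ssl._create_unverified_context()  # lab-only
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vms = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        vm = next(v for v in vms.view if v.name == vm_name)
        host = next(h for h in hosts.view if h.name == target_host)
        WaitForTask(vm.MigrateVM_Task(
            host=host,
            priority=vim.VirtualMachine.MovePriority.defaultPriority))
    finally:
        Disconnect(si)

# vmotion("vcenter.example.local", "admin", "secret", "quote-app", "esx02")
```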

Gardner: It's just coming out of the cloud.

Moudry: Yeah.

Chambers: Sometimes, there may be a need to shut down an entire rack of equipment in one of our colos. Then we’d have to migrate everything.

Gardner: So an insurance policy for an insurance provider?

Chambers: Yes.

Moudry: Yeah.

Gardner: I'm afraid we'll have to leave it there, gentlemen. We've been talking about how insurance wholesaler Myron Steves & Co. has developed and implemented an impressive IT DR strategy. We've seen how even a small-to-medium-sized business can create business continuity for its operations and make IT more efficient and agile for its business users. I'd like to thank our guest, Tim Moudry, Associate Director of IT at Myron Steves & Co. Thanks so much, Tim.

Moudry: Thank you.

Gardner: And also, William Chambers, IT Operations Manager there at Myron Steves. Thank you, William.

Chambers: You're very welcome, thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to our audience for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored BriefingsDirect podcast on how small-and-medium businesses can improve disaster recovery through virtualization, while reaping additional benefits. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
