Wednesday, May 09, 2012

Ariba Network Plus Dynamic Discounting Give Startup Mediafly Cash Flow Benefits, Help in Managing Capital

Transcript of a BriefingsDirect podcast on how cloud networking helps a small company work well with Fortune 500 enterprises.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Ariba.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2012 Ariba LIVE Conference in Las Vegas.

We’re here to explore the latest in cloud-based collaborative commerce and learn how innovative companies are tapping into the networked economy. We’ll see how they're improving their business productivity along with building far-reaching relationships with new partners and customers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of Ariba-sponsored BriefingsDirect case study discussions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Our next innovator interview focuses on Mediafly, a startup company that delivers cloud-based applications for content management and distribution on mobile devices for Fortune 500 companies.

We’ll learn how Mediafly, through the Ariba Network, gained insight and control over its cash flow and found new means of managing capital, aiding its ability to support ongoing operations as well as to drive future growth.

To hear how they did it, please join me now in welcoming two executives from Mediafly, Carson Conant, CEO, and John Evarts, Chief Financial Officer and Chief Operating Officer. Welcome to you both.

Carson Conant: Thank you.

John Evarts: Thank you very much. Good to be here.

Gardner: Let me start with you, Carson. Tell me about the type of business you are. I think there's an interesting opportunity here to explore why buying and selling things works in an advantageous way for you. Tell me about the size of your company and why managing cash flow is so important.

Conant: Mediafly is the leader in the presentation platform market. What that means is that we’re the company that helps bridge the gap between large Fortune 1000 companies’ internal systems and, primarily, mobile applications, but also things like Internet-connected televisions, and so forth.

Lots of video

Large companies create lots of video. It could be live broadcast, sales presentations, training videos, and TV and movie industry content. When they're trying to distribute that content to make it available on all of these emerging devices, particularly at that large scale, they need a provider like Mediafly. We’re a leader in the space right now.

Gardner: As a small company, what are you facing, when it comes to the financial pressures? Let’s go to you, John.

Evarts: As a small company, we often don't have a balance sheet that’s attractive to banks, among other things. As we seek things like angel investment or equity investment, we need to do things that are extremely capital efficient with those funds.

When we have an opportunity for revenue, especially revenue at large corporations, Fortune 100 companies, these are large contracts. As a small organization, contracting with larger organizations, it’s absolutely critical for us to manage that cash flow well and have visibility into the cash flow.

As we said, we’ve been growing very quickly. So our recurring revenue has grown by 3x over the last two years. As we grow quickly, we need to have that visibility into cash management, because it’s absolutely critical that we staff at the right time relative to taking advantage of opportunities that are out there in the market.

Gardner: So looking at this from an elasticity point of view, larger companies have a bit more wiggle room. As a smaller company you don't, but you need to grow fast. Help me understand what led you to do things differently in order to make this elasticity work in your favor.

Conant: We’re very fortunate. One of our largest customers is in the media entertainment space and we did a large seven-figure deal with them over a series of years. But the way that they do invoicing and transactions is through the Ariba Network. They said, "For you to get paid, join the Ariba Network."

So that was the first thing that got us onto the network. What was amazing is that once we got on there, as John said, it was unlike a lot of our other transactions with similarly large companies. In those companies it’s just like a black box. You've got a several hundred thousand-dollar invoice that goes out, and you may not know if that’s going to come in in two weeks or six weeks.

What was amazing to us with Ariba was the ability to know exactly where we were in that payment process. Ultimately we took advantage of this program they call "dynamic discounting," which allowed us to accelerate cash for a couple of basis points.

Huge ramifications


So for a fairly inconsequential amount of money to us, we were able to get paid in about 14 days instead of 60 days. It had huge ramifications for our business. What that did for us was allow us to interact with them in a way that they preferred, but still have the nimbleness that we need as a small company.

Gardner: So visibility and predictability are really important. In the past, people would generally go to a bank to get a line of credit and pay a high interest rate in order to have that accordion to manage their cash flows. You’ve found a way to do this, not through a bank, but through working directly with your customers and perhaps even incentivizing them to help you with your cash flow and visibility and your saving on the interest. It sounds like a win-win all around.

Evarts: It's an excellent opportunity for us to work with a partner and deepen that partnership with our vendors. We’ve found that, as Carson said, for a few basis points of a concession on the contract, we’re able to factor 100 percent of the contract value of the invoice.

When that occurs, the advantage to us is that we're able to act on it immediately, as soon as the invoice hits the system, and take 100 percent of the value out of those otherwise unknown collection periods. When we can reduce the collection periods from 60 days all the way down to 14 days, we're in a much stronger financial position, because we can take advantage of those dollars.
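
To make the arithmetic concrete, here is a minimal sketch of the trade-off the guests describe: give up a small discount on the invoice in exchange for being paid in roughly 14 days instead of 60. The invoice amount and the discount rate below are hypothetical placeholders; the speakers say only that the concession was a few basis points.

```python
def early_payment_economics(invoice, discount_rate, standard_days, accelerated_days):
    """Return the discount given up, days gained, and implied annualized cost."""
    discount_cost = invoice * discount_rate
    days_accelerated = standard_days - accelerated_days
    # Simple annualization: the discount rate scaled to a full year
    # of accelerations of this length.
    annualized_rate = discount_rate * (365 / days_accelerated)
    return discount_cost, days_accelerated, annualized_rate

if __name__ == "__main__":
    invoice = 250_000        # hypothetical invoice value in dollars
    discount_rate = 0.0005   # hypothetical concession of 5 basis points
    cost, days, apr = early_payment_economics(invoice, discount_rate, 60, 14)
    print(f"Discount given up: ${cost:,.0f}")
    print(f"Cash arrives {days} days sooner")
    print(f"Implied annualized cost: {apr:.1%}")
```

On these assumptions, accelerating a $250,000 invoice costs about $125 and works out to well under one percent on an annualized basis, which is the kind of comparison Gardner draws below against a bank line of credit.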

Gardner: Carson, what has this enabled you to do in terms of growing your company?

Conant: The first time we took advantage of dynamic discounting, it was relatively early in a development cycle for a security package that we were in the process of building. What that did was allow us to get access to cash to bring in additional resources to accelerate those feature enhancements.

Literally, two weeks after signing this deal with one of the largest entertainment companies in the world, we were in the board room with one of the largest global banks in the world touting these new security features we had, which we otherwise wouldn’t have had for maybe 60 days.

It sparked additional Fortune 100 contracts. It was fundamentally game changing for us. We joke that it would be interesting if all of our customers leveraged something like dynamic discounting. It would be transformative for our business. It would drastically accelerate how we can deploy cash. Then you think about it in terms of what could it do for the economy.

If all these companies were taking advantage of this, it would boost the stability and the growth of their partners and their vendors. It would be something. That’s why we’re so vocal about it.

Evarts: For a small organization that is very nimble and trying to innovate, it accelerates the pace of innovation that we’re able to generate. The new features that we offered to this first client, we were immediately able to turn around and sell to one of the leading investment banks as the same security capability.

So when we’re able to quickly accelerate and bring new innovations to market, obviously everybody benefits. Mediafly benefits, and ultimately, our customers are going to benefit as well.

Level playing field

Gardner: And what strikes me is that this seems to be a level playing field between you, a small company, and as you point out, some of the largest media companies in the world. You’re playing with the same rules with Ariba being the arbiter, if you will. You can partake in those services just as easily as the big company. Is this a leveling of the playing field?

Conant: Absolutely. There are probably two or three technologies that we've taken advantage of that have just come into play in the last three to five years. One of them is cloud-based infrastructure. We don't have to buy servers anymore. That’s allowed a company of our size to outpace and out-compete companies that have been around for a long time and provide enterprise services to Fortune 100 global companies.

Then, you look at Ariba, and it's very similar. It allows us to interact with them the same way that they would interact with another large company. Doing business with us doesn’t feel different than doing business with another large company.

They get what they want, we get some additional visibility and some things that are valuable to us. But, these technologies have just come into play in the last three to five years, and it's really allowed a company like Mediafly to exist.

Gardner: A lot of times, analysts like myself focus on the technology behind the cloud, but it's really a game changer, when it comes to business processes and allows for the compression of what used to be latency in terms of business functions, monetization, and cash flow. Now, when everybody has visibility, when the level field is there for all participants, it's much more efficient and direct, and we’re just starting to pick some of the fruit of that.

Evarts: And you touched on it. Creating scalable solutions is absolutely critical, and it allows a small organization with relatively limited initial capital to scale, to participate in the Ariba Network, and basically to have the same credentials as some of the largest companies in the world.

Folks who are transacting with Mediafly are doing it in the exact same way that they do with other Fortune 100 peers. To some degree, to us, it's a competitive advantage, and we feel that way. We feel that if we're on the system, we’ve been vetted, and other folks are using us on the system. It's an excellent credential for us to have and a nice reference for us.

Gardner: So it's also a go-to market strategy.

Evarts: Absolutely.

Gardner: Tell me a little bit more about Mediafly. What do you do? Content management, the mobile thing, is huge. You're using cloud to your advantage in a number of different ways. Maybe you can give us the elevator pitch about what you do, and why people should be interested.

Conant: One of the best ways is to think of an example. Think of all the TV and movie productions that are going on at the studios. Those companies have thousands of video files that they're housing inside of their four walls. They're trying to expose that content to all of their executives and staff, everybody from the makeup artist who needs to watch the last three dailies to the CEO and the president.

Perfect platform

Now, they want to be able to do that on iPads, iPhones, Android, and televisions connected to the web. We're the perfect platform, because there is so much that has to go on, so many gears turning, to make all that happen.

That’s a perfect solution for the cloud, and those companies now integrate with us so that that material is available to all the different stakeholders on all of these different devices. So we’ve dropped ourselves in and filled the gap between their in-house systems and all of these mobile devices.

Gardner: If I understand correctly, lots of content needs to be shared, and you're able to deal with the multiple panes of glass, the formats, the streaming, codecs, all these other technical issues.

Conant: Yes. Security is a huge thing, too. Think about the value of this content. These are their important assets. How do you move this content around so that, if an iPad gets stolen or people are let go from the organization, they're not walking away with sensitive information?

We also provide that same service for documents for one of the larger global banks. So when they're training their sales force, or their sales force is out doing one-on-one presentations to large money managers, they're doing that with iPads. If a device is lost or gets hacked into, that content is protected. This comes back to that security suite. There is a whole lot of functionality we’ve added to really make this enterprise-grade.

Gardner: Now that we understand a bit more about how this is important as a function, let's revisit this notion about the cloud as an enabler. Ariba calls it the networked economy. That really gets to what we've been talking about -- that there are multiple levers that incentivize all the players to contribute. But then they all get something out of it, including that great visibility and control, when it comes to money, as well as business processes that can make all the difference.

Let's go one more time into this notion of the networked economy. We’ve touched briefly on how this could be a go-to-market for you. Let's expand on that. How does providing a discount incentive on cash flow, and using the Ariba Network, end up getting you more customers?

Evarts: One of the tools that we’re just trying to tap into is this concept called Ariba Discovery. Discovery allows you to self select a series of industries, what they call commodities. That allows you to say, "These are the services that we offer." Then, large companies are able to go on that system and say, "These are the services that we're looking for." So it's really kind of a matchmaking function.

While we’ve only scratched the surface -- we feel we're relatively new to this system -- we feel that this Ariba Discovery concept is extremely valuable to us as a small organization, as we look to scale as a lead generation opportunity and ultimately, as we’re transacting business.

We feel that as a small vendor, if there are a number of individual companies that are looking to leverage this system, we're happy to make a light concession, obviously, for the right number of basis points and the right timing. We're then able to take advantage of that and accelerate cash in. When non-financial companies, at the end of the third quarter last year, had $3 trillion sitting on their balance sheets, you know that there's a ton of liquidity out there that will be invested, and is going to be invested in different ways.

One way that folks can take advantage of it is using a system like Ariba in order to support the supply chain, investing in their current partners.

Of, for, by the cloud

Gardner: So you're sort of of, for, and by the cloud. When it came to moving toward Ariba and using some of their services, did that work as a pure cloud service, where there wasn't anything on premises and you didn’t have to have your IT people involved? How friendly a cloud player did Ariba turn out to be?

Conant: Extremely friendly, relative to some of the more manual processes that some of our other customers leverage. The best example of that is our discovery of the dynamic discounting program. Our controller noticed a checkbox in our interface. It's a web-based interface, and he asked John, "This looks interesting. Should we take advantage of it?" We said, "Yeah, let's try it on our first invoice."

This was not some training that had to happen before we understood how to use this system. It was a couple of checkboxes, and now we are getting paid earlier.

To me, that's really what the cloud is. A company like Ariba, in my opinion, has done a really good job of abstracting, so you're left with just an elegant functionality and it's in the cloud. It's all web-based. There's nothing we had to deploy on premises.

We're a cloud company. So it feels natural. I can't even imagine how simple it must seem to somebody who's used to using things on premises.

Evarts: One of the key elements for us was the ease of getting onto the system. When a customer who's that large asks you to join, and you're as small as we are, you say, "Absolutely. How quickly, and when?" Ariba was absolutely fantastic in helping us get onto the system and then ultimately helping us navigate it; within the course of a couple of hours max, we were fully integrated into the system. Not only can we now take full advantage of their entire cloud-based infrastructure, but it was very easy for us as a small vendor to get onto this system.

Gardner: On the other side, the flip side of the coin, these global Fortune 500 companies were familiar with Ariba. You didn’t have to drag them along and convince them. There was already the established trust and credibility.

Conant: We’re still scratching the surface, as more and more companies are moving this way. It seems like a lot of the people that we’re talking to are moving into cloud-based procurement solutions, things like Ariba. As more time goes on, more and more of our customers will be on Ariba and leveraging dynamic discounting and so forth.

What's great is that each one that is using Ariba is already set up. It's just a matter of them attaching our profile, or however it happens behind the scenes. But there's not a whole lot of additional process. That’s what's neat about the network effect. Once multiple parties are on a network, it's just a matter of connecting the two lines together.

Gardner: I am afraid we’ll have to leave it there. We’ve been talking about how Mediafly, through the Ariba Network and a dynamic-discounting program, gained insight and control over its cash flow and found new ways of managing capital to support ongoing operations and drive future growth.

Join me in thanking our guests. We’ve been here with Carson Conant, CEO of Mediafly based in Chicago. Thank you, Carson.

Conant: Thank you, very much.

Gardner: We’ve also been here with John Evarts, the Chief Financial Officer and Chief Operating Officer at Mediafly. Thank you.

Evarts: Thanks for having me.

Gardner: And thank you to our audience for joining us for this special podcast coming to you from the 2012 Ariba LIVE Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Ariba-sponsored BriefingsDirect discussions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Ariba.

Transcript of a BriefingsDirect podcast on how cloud networking helps a small company work well with Fortune 500 enterprises. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, May 08, 2012

For Acorda Therapeutics, Disaster Recovery Protects Vital Enterprise Assets and Smooths Way to Data-Center Flexibility and Migration

Transcript of a sponsored podcast discussion on how a fine-tuned disaster recovery program can produce benefits across the IT landscape.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how biotechnology services provider Acorda Therapeutics has implemented a strategic disaster recovery (DR) capability to protect its highly virtualized IT operations and data.

We will see how Acorda Therapeutics’ use of advanced backup and DR best practices and products has helped it to manage rapid growth, cut energy costs, and gain the means to recover and manage applications and data faster. We will also see how these advanced DR benefits have led to other data-center flexibility and even migration benefits. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here to share more detail on how modernizing DR has helped improve many aspects of Acorda Therapeutics’ responsiveness is Josh Bauer, Senior Manager of Network Operations at Acorda Therapeutics in Hawthorne, New York. Welcome to BriefingsDirect, Josh.

Josh Bauer: Thank you.

Gardner: From a high level, looking at the landscape of how things are changing so rapidly, what do you perceive as being different today about DR than just a few years ago? Is this really a fast moving area?

Bauer: One of the most prominent changes is recovery time, especially with technologies such as virtualization using VMware. You no longer need to restore from physical tape and see recovery times upwards of 24 hours, which is something we couldn't avoid until recently. We implemented Site Recovery Manager (SRM) from VMware, and we can now do that same recovery in about four hours.

Gardner: So one of the chief benefits is just moving from tape into a more virtualized environment, where you can get fast turnaround. How about completeness? Is there an element of completeness that has improved as well?

Bauer: Absolutely. We're constantly replicating using RecoverPoint and we can get data up to the minute, versus tape, where you are at the whim of whether the backup completed on time -- did everything go to tape, and when was it done? It could have been two days ago, versus now, when it's data that’s 100 percent synced up to a minute ago.
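
In DR terms, Bauer is describing improvements in recovery time objective (RTO, how long a recovery takes) and recovery point objective (RPO, how much recent data can be lost). A hedged, illustrative-only sketch of that comparison, using the approximate figures he cites:

```python
def improvement(old, new):
    """How many times better the new number is than the old one."""
    return old / new

# RTO: ~24-hour tape restores vs. ~4 hours with SRM.
rto_gain = improvement(24.0, 4.0)
# RPO: tape data up to two days old vs. roughly a minute with continuous replication.
rpo_gain = improvement(2 * 24 * 60, 1.0)

print(f"Recovery time (RTO) improves ~{rto_gain:.0f}x")
print(f"Worst-case data-loss window (RPO) shrinks ~{rpo_gain:.0f}x")
```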

Gardner: I am also wondering, because you are in the healthcare and biotechnology field, are there aspects of this that appeal to you from a compliance or regulatory perspective as well?

Bauer: Definitely. Four times per year we have to prove that we can recover all of our software and data by doing a DR test. Until we had SRM, we had to do it all from tape, from a cold facility, and it would take us a day, sometimes a day-and-a-half. That’s just not the best way to do things. But now, with SRM, we can always do these tests on the fly, even from our office, from home, or from wherever.

Gardner: Tell me a little bit more about Acorda Therapeutics. You were founded in 1995. Tell us what you do, so our audience can understand the type of company you are and type of products and services you provide.

Recent growth

Bauer: We create treatments for people with multiple sclerosis, spinal cord injuries, or other neurological disorders. We have two drugs on the market right now, the most recent of which, Ampyra, helps people with multiple sclerosis walk better, and it has been a huge success. And that's the main reason we've been growing so much lately.

Gardner: Tell me about this issue of growth. When you started to look at your facilities, your data center, and your infrastructure, you obviously made the move to virtualization in a big way. How did it make sense in your mind to go to a DR improvement and how did that come to bear on this issue of being able to ramp up and deal with a fast growing organization?

Bauer: That was just the next logical step. Prior to virtualization, we were spending a lot of time managing our infrastructure, with all those physical servers. Once we virtualized everything, we spent way less time managing the infrastructure and could spend more time helping the business.

In fact, the IT department itself has become less like a computer repair shop and more like a strategy center. I'm constantly being brought into projects to help the business make the right decisions when it comes to any type of technology.

The next logical step would be to have my team spend less time doing these four-times-a-year DR drills the way I described before. With SRM it’s a few clicks. We're saving so much time and we are able to do other things.

Gardner: Just so we have a sense of the growth, you went from 80 employees a few years ago to how many now?

Bauer: Now, it's about 350.

Gardner: That’s pretty impressive. Obviously, too, in this type of field you're dealing with large amounts of data, data that is structured and unstructured. Give us a sense of the storage and/or data requirements that you're facing?

Bauer: When we had about 80 employees, we probably barely had a terabyte, and now we easily have over 14 terabytes.

Gardner: At a high level, tell me about how you approach this, and if you use partners, how you sought some help in terms of figuring out your journey. What was it that you went to in terms of beginning the journey and how it unfolded and got you to the point today, where you can deal with something like 14 terabytes and moment-by-moment backup capability?

Bauer: Specific to DR or the data recovery?

Gardner: The whole journey. How you approached this problem, got some help, and then got to the level you are now.


Strategic partner

Bauer: It all really started at VMworld. That’s been a fantastic way for me to learn what's out there, what's coming up, and just staying in the know. That’s actually where I met International Computerware, Inc. (ICI), who is one of our strategic partners for storage and virtualization.

I had approached them with the growth issue. We had already started doing virtualization on our own. I had used it at a previous company, but I wasn’t familiar with SRM, and it looked like it might be a nice fit for improving our DR. So ICI came in and they sort of held our hands and helped us with that project.

Specific to storage, they've also helped us better manage growth, anticipate our growth, and show that we have more than we're going to need before the growth happens, and they've done some analysis of what we have. We brought them in before things got too bad.

Gardner: So how about beyond the technology and the products? People and process also play a big role in this. Did this require a big shift in culture or skills when you went from cold tape to this more modern and software-based approach?

Bauer: Not much of a cultural shift, luckily, because of projects like virtualization and how successful we've been. The company trusts us to take on new technologies and they kind of leave it to us.

Within IT, the shift was a good one. It was a reduced workload on them, and it's a much better process.



But within IT, the shift was a good one. It was a reduced workload on them, and it's a much better process. As a result, it got more people in my IT department involved in virtualization as well.

Gardner: I'm intrigued about this relationship between server virtualization and a track record of strong skills and process in moving into DR. Tell me a little bit about your IT environment and your level of virtualization, and why that led to a sort of no-brainer when it came to moving to SRM, and a higher degree of efficiency when it comes to DR?

Bauer: Since using VMware, we've noticed uptime upwards of three nines monthly. Before that, when we were mostly a physical environment, it was nowhere near that much. We had physical servers going down all the time.

VMware immediately gained our trust, seeing that they came out with this product for DR. It was a name that we trusted. Then, we played with it for a while, and it worked out fantastically.

It's all about trusting VMware and then, again, ICI, working with them. They just know their stuff. We have a lot of different partners we work with, but we prefer to use ICI, because they really focus on doing things properly. It's more about working with someone that really knows what they are doing. They understand that we have some skills, as well. They're not trying to sell us something we don’t need.

Gardner: I believe that ICI was named VMware Business Continuity Partner of the Year in 2011. So clearly there is a strong relationship between them and VMware. But getting back to the products, do you recall what degree of virtualization you have among your servers?

95 percent virtualized

Bauer: We are 95 percent virtualized here. The only thing that’s not virtual is our fax server, which requires a physical fax board and that’s about it. Everything else is virtual.

Gardner: So this is across all tiered apps, tier one, three, four?

Bauer: That’s correct, our SQL apps, our Exchange, everything you can think of is virtualized.

Gardner: I understand you're using vSphere 5. You're on vCenter SRM 5. That only came out towards the end of last year. So you just jumped right on that.

Bauer: Oh, I didn’t waste any time. We were very excited about it, especially this new option of using a failback, which wasn’t really part of SRM Version 4.

Gardner: Tell us a little bit more about why that’s important to you.

Bauer: If you ever have the very unlikely event of a disaster, when you do a recovery, you're now operating off of the disaster or recovery equipment. While that’s happening, people are still saving files and generating new data. If you were to just simply turn on the original equipment again, all that data would be lost. So you need to fail back to re-sync everything.

With SRM Version 4, you had to configure two one-way recovery systems. So it would take a lot more time. But now with failback, it's a lot more smooth, kind of built-in.

Gardner: How about doing tests? If you wanted to try things out and see how they were working, perhaps preparing for some of those compliance and regulatory requirements, does that happen a bit more easily as well now with the newer version?

Bauer: We've seen a higher success rate on the new version versus the old one. They've certainly fixed some of the bugs, and the interface is much better. The whole testing process seems to be a lot more smooth.

Gardner: Let's move on to how you know you're doing this correctly. Do you have any metrics? Do you track this? Is there anecdotal evidence from your business users, even those who are involved with the compliance issues? Of course, the number one metric is that you don’t suffer downtime and you don’t lose data, but are there other ways that you look at this and say, "Wow, we're saving money, reducing workload, and reducing labor?" Anything along those lines?

Bauer: When we do these four-times-a-year tests, we create this lab bubble, and we also have a few Windows XP and Windows 7 virtual workstations on there. We invite a few people from the business to log in and test their applications.

They would be protected

So right there, we're getting people outside of IT involved to let them see how cool this is. It also gives them the comfort in knowing that, if there ever were a disaster, they would be protected. They can see it for themselves by actually dialing into the computer and testing things themselves. So there's a huge benefit to that. It deepens the trust between IT and the business.

Gardner: Do you actually have separate data centers that you are backing up to? What's the topology or architecture that you're using?

Bauer: We have two separate data centers, recovery and production.

Gardner: And do you have them far apart in different geographies, or do you have them hosted?

Bauer: At the moment they're only a few towns apart, but we are shopping around for a data center much further away. We hope to do that in the next six months or so.

Gardner: And this is all in Hawthorne, New York. Is that correct?

Bauer: Right.

Gardner: Looking to the future, one other area I wanted to hit on, which is important to a lot of folks, especially in some overseas markets, is this issue about energy. Did you have any impact on energy and/or storage costs associated with the total life cycle of the data?

Bauer: We reduced the footprint by easily 75 percent by not needing so many physical servers. That’s a pretty huge shout-out to VMware there. Also, we're not using that much power. We don’t need as big a data center. Not as much cooling is needed. There's a whole assortment of things, when you take out all the physical servers.

Gardner: Now, looking to the future, another area that people have described as a segue from going to high virtualization and exploiting the latest technologies in DR is to start thinking about virtual desktop infrastructure (VDI) and desktop-as-a-service. They're even looking at cloud and hybrid-cloud models for hosting apps, then backing them up and recovering them in different data centers, which you've alluded to. Do you have any thoughts about where this could possibly lead?

Bauer: In fact, if you were going to ask me what my next initiative was going to be, and you didn’t mention desktops, that’s the first thing that would have come to mind. We're starting to explore replacing our laptops with virtual desktops. I'm hoping this is something that we could implement next year.

Right way to go


This seems like the right way to go, because our helpdesk team spends too much time swapping out laptops or replacing laptops that are dropped on the ground. You're looking at a small thin client, which is a fraction of the cost of a laptop. Plus, the data is no longer kept on a laptop. There are no security or compliance issues. You can just give them a thin client, and they are back in business.

Gardner: So you rest easily of course with good DR, but you rest easy, as well, when your intellectual property is all well protected across the entire spectrum of its deployment and use in local storage.

Bauer: Exactly. It makes everybody in this company, especially at the top-level, nervous to know that some sensitive data still does make it out to the laptops. We tell people to save everything to their network drives, but without using thin clients and virtual desktops, there's no other way to force that.

Gardner: How about advice for those folks that might be moving towards a more modern DR journey, as you described it? What would you advise to them as they begin, and what lessons might you have learned that you could share?

Bauer: First off, do it. You're going to be glad that you did. The good thing about this is that you can do it in parallel with your current DR plans. You don’t have to change your existing recovery plans. You can take as much time as you want to set it up right. And the key is to set up a demonstration for the key business owners and players that are going to make the decision on the change.

Set it up right with a handful of important apps, important VMs, and then just show it to people. Once they see how great it works, you're definitely going to want to change.

Gardner: And that disruption, or the lack of disruption I suppose I should say, when you're implementing this seems to be important too. Any thoughts on what you might be able to tell people about the level, or lack, of disruption when you're putting this together?

Bauer: As I said, you can do this in parallel. As you're setting up this new environment, it doesn’t affect your existing environment whatsoever.

Gardner: A matter of flipping the switch.

Bauer: Exactly.

Gardner: Anything else you would like to offer in terms of your thoughts on strategic and tactical benefits around DR and your journey?

Bauer: It's always helpful to have some outside help. No matter how skilled you are, it's always good to have a second pair of eyes look at the work that you did, if for nothing more than to confirm that you've done everything you could and your plans are solid. It's helpful to have a partner like ICI.

Gardner: Great. We've been talking about how biotechnology services provider Acorda Therapeutics has implemented a strategic DR capability to augment its highly virtualized IT operations. And we have seen quite a few tactical and strategic benefits from that for their IT group, as well as for the larger organization and its requirements as a healthcare provider for compliance, regulation, and protection of its assets.

Thanks so much to our guest. We've been here with Josh Bauer. He is the Senior Manager of Network Operations for Acorda Therapeutics. Thanks so much, Josh.

Bauer: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast discussion on how a fine-tuned disaster recovery program can produce benefits across the IT landscape. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Monday, May 07, 2012

Expert Chat with HP on How Better Understanding Security Makes it an Enabler, Rather than an Inhibitor, of Cloud Adoption

Transcript of a BriefingsDirect podcast on the role of security in moving to the cloud and how sound security practices can make adoption easier.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Join the next Expert Chat presentation on May 15 on support automation best practices.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on best practices for protecting cloud-computing implementations and their use.

Business leaders clearly want to exploit the cloud values that earn them results fast, but they also fear the risks perceived in moving to cloud models rashly. It now falls to CIOs to not only rapidly adapt to cloud, but find the ways to protect their employees and customers – even as security threats grow.

This is a serious but not insurmountable challenge.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. To help find out how to best implement protected cloud models, I recently moderated an HP Expert Chat session with Tari Schreider, Chief Architect of the HP Technology Consulting and IT Assurance Practice. Tari is a Distinguished Technologist with 30 years of IT and cyber security experience, and he has designed, built, and managed some of the world’s largest information protection programs.

In our discussion, you’ll hear the latest recommendations for how to enable and protect the many cloud models being considered by companies the world over. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

As part of our chat, we're also joined by three other HP experts, Lois Boliek, World Wide Manager in the HP IT Assurance Program; Jan De Clercq, World Wide IT Solution Architect in the HP IT Assurance Program; and Luis Buezo, HP IT Assurance Program Lead for EMEA.

Our discussion begins with a brief overview from me of the cloud market and current adoption risks. We'll begin by looking at why cloud and hybrid computing are of such great interest to businesses and why security concerns may be unnecessarily holding them back.

If you understand the security risk, gain a detailed understanding of your own infrastructure, and follow proven reference architectures and methods, security can move from an inhibitor of cloud adoption to an enabler.

Cloud has sparked the imagination of business leaders, and many see it now as essential. Part of that is because the speed of business execution, especially the need for creating innovations that span corporate boundaries and extend across business ecosystems, has made this a top priority for corporations.

Every survey that I've seen and every panelist that I've talked to is saying that the cloud is elevating in terms of priority, and a lot of it has to do with the agility benefits. There is a rush to be innovative and to be a first mover. That also puts a lot of pressure on the business people inside these companies, and they have been intrigued by cloud computing as a means of getting them where they need to go fast.

This now means that the center of gravity for IT services is shifting towards the enterprise’s boundaries, moving increasingly outside of their firewalls, and therefore beyond the traditional control of IT.

Protection risks

Business leaders want to exploit the cloud values that bring them productivity results fast, but IT leaders think that the protection risk perceived in moving to cloud models could come back to bite them. They need to be aware and maybe even put the brakes on in order to do this correctly.

So it now falls on CIOs and other leaders in IT not only to rapidly adopt cloud models, but to quickly find the means to make cloud use protected for operations, data, processes, intellectual property, their employees, and their customers, even as security and cyber threats ramp up.

We'll now hear from HP experts from your region about meeting these challenges and obtaining the business payoffs by making the transition to cloud enablement securely. Now is the time for making preparation for successful cloud use.

We're going to be hearing specifically about how HP suggests that you best understand the transition to cloud-protected enablement. Please join me now in welcoming our main speaker, Tari Schreider. Tari, please tell us more about how we can get into the cloud and do it with low risk.

Tari Schreider: It's always a pleasure to be able to sit with you and chat about some of the technology issues of the day, and certainly cloud computing protection is the topic that’s top of mind for many of our customers.

I want to begin talking about the four immutable laws of cloud security. For those of you who have been involved in information security over time, you understand that there is a certain level of immutability that is incumbent within security. These are things that will always be, things that will never change, and it is a state of being.

When we started working on building clouds at HP a few years ago, we were also required to apply data protection and security controls around those platforms we built. We understood that the same immutable laws that apply to security, business continuity, and disaster recovery extended into the cloud world.

First is an understanding that if your data is hosted in the cloud, you no longer directly control its privacy and protection. You're going to have to give up a bit of control, in order to achieve the agility, performance, and cost savings that a cloud ecosystem provides you.

The next immutable law is that when your data is burst into the cloud, you no longer directly control where the data resides or is processed.

One of the benefits of cloud-based computing is that you don’t have to have all of the resources at any one particular time. In order to control your costs, you want to have an infrastructure that supports you for daily business operations, but there are ebbs and flows to that. This is the whole purpose of cloud bursting. For those of you who are familiar with grid-based computing, the models are principally the same.

Different locations

Rather than your data being in one or maybe a secondary location, it could actually be in 5, 10, or maybe 30 different locations, because of bursting, and also be under the jurisdiction of many different rules and regulations, something that we're going to talk about in just a little bit.

The next immutable law is that if your security controls are not contractually committed to, then you may not have any legal standing in terms of the control over your data or your assets. You may feel that you have the most comprehensive security policy that is rigorously reviewed by your legal department, but if that is not ensconced in the terminology of the agreement with a service provider, then you don’t have the standing that you may have thought you had.

The last immutable law is that if you don’t extend your current security policies and controls in the cloud computing platform, you're more than likely going to be compromised.

You want to resist trying to create two entirely separate, disparate security programs and policy manuals. Cloud-based computing is an attribute on the Internet. Your data and your assets are the same. It’s where they reside and how they're being accessed where there is a big change. We strongly recommend that you build that into your existing information security program.

Gardner: Tari, these are clearly some significant building blocks in moving towards cloud activities, but as we think about that, what are the top security threats from your perspective? What should we be most concerned about?

Schreider: Dana, we have the opportunity to work with many of our customers who, from time to time, experience breaches of security. As you might imagine, HP, a very large organization, has literally hundreds of thousands of customers around the world. This provides us with a unique vantage point to be able to study the morphology of cloud-computing platform security, outages, and security events.

One of the things that we also do is take the pulse of our customer base. We want to know what’s keeping them up at night. What are the things that they're most concerned with? Generally, we find that there is a gap between what actually happens and what people believe could happen.

I want to share with you something that we feel is particularly poignant, because it is a direct interlock between what we're seeing actually happening in the industry and also what keeps our clients up late at night.

First and foremost, there's the ensured continuity of the cloud-computing platform. The reason to move to cloud is for making data and assets available anywhere, anytime, and also being able to have people from around the world accept that data and be able to solve business needs.

If the cloud computing platform is not continuously available, then the business justification as to why you went there in the first place is significantly mooted.

Loss of GRC control

Next is the loss of span of governance, risk management, and compliance (GRC) control. In today’s environment, we can build an imperfect program and we can have a GRC management program with dominion over our assets and our information within our own environment.

Unfortunately, when we start extending this out into a cloud ecosystem, whether private, public, or hybrid, we don’t necessarily have the same span of control that we have had before. This requires some delicate orchestration between multiple parties to ensure that you have the right governance controls in place.

The next is data privacy. Much has been written on data privacy and protection across the cloud ecosystem. Today, you may have a data privacy program that’s designed to address the security and privacy laws of your specific country or your particular state that you might reside in.

However, when you're moving into a cloud environment, that data can now be moved or burst anywhere in the world, which means that you could be violating data-privacy laws in another country unwittingly. This is something that clients want to make sure that they address, so it does not come back in terms of fines or regulatory penalties.

Mobility access is the key to the enablement of the power of the cloud. It could be a bring-your-own-device (BYOD) scenario, or it could be devices that are corporately managed. Basically you want to provide the data and put it in the hands of the people.

Whether they're out on an oil platform and they need access to data, or whether it’s the sales force that need access to Salesforce.com data on BlackBerrys, the fact remains that the data in the cloud has to land on those mobile devices, and security is an integral part.

You may be the owner of the data, but there are many custodians of the data in a cloud ecosystem. You have to make sure that you have an incident-response plan that recognizes the roles and responsibilities between owner and custodian.

Gardner: Tari, the notion of getting control over your cloud activities is important, but a lot of people get caught up in the devil in the details. We know that cloud regulations and laws change from region to region, country to country, and in many cases, even within companies themselves. What is your advice, when we start to look at these detailed issues and all of the variables in the cloud?

Schreider: Dana, that is a central preoccupation of law firms, courts, and regulatory bodies today. What tenets of law apply to data that resides in the cloud? I want to talk about a couple of areas that we think are the most crucial, when putting together a program to secure data from a privacy perspective.

Just as you have to have order in the courts, you have to have order in the clouds. First and foremost, and I alluded to this earlier, is that the terms and conditions of the cloud computing services are really what adjudicates the rights, roles, and responsibilities between a data owner and a data custodian.

Choice of law

However, within that is the concept of choice of law. This means that, wherever the breach of security occurs, the courts can actually go to the choice of the law, which means whatever is the law of the land where the data resides, in order to determine who is at fault and at breach of security.

This is also true for data privacy. If your data resides in your home location, is that the choice of law by which you follow the data privacy standards? Or if your data is burst, how long does this have to be in that other jurisdiction before it is covered by that choice of law? In either case, it is a particularly tricky situation to ensure that you understand what rules and regulations apply to you.

The next one is transborder data-flow triggers. This is an interesting concept, because when your data moves, if you do a data-flow analysis for a cloud ecosystem, you'll find that the data can actually go across various borders, going from jurisdiction to jurisdiction.

The data may be created in one jurisdiction. It may be sent to another jurisdiction for processing and analysis, and then may be sent to another location for storage, for intermediate use, and yet a fourth location for backup, and then possibly a fifth location for a recovery site.

This is not an atypical example. You could have five triggering events across five different borders. So you have to understand the legal obligations in multiple jurisdictions.
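
Here is a minimal sketch of the kind of data-flow analysis described here, walking one piece of data through the five stages Schreider lists and flagging every border crossing. The stages come from the text; the country codes are hypothetical placeholders.

```python
# Walk a data lifecycle and flag each cross-border hop, since each crossing
# can trigger a different jurisdiction's privacy rules.
data_flow = [
    ("created",   "DE"),
    ("processed", "US"),
    ("stored",    "IE"),
    ("backed up", "SG"),
    ("recovered", "BR"),
]

def transborder_triggers(flow):
    """Return (stage, from_country, to_country) for every cross-border hop."""
    triggers = []
    for (_, prev_loc), (stage, loc) in zip(flow, flow[1:]):
        if loc != prev_loc:
            triggers.append((stage, prev_loc, loc))
    return triggers

for stage, src, dst in transborder_triggers(data_flow):
    print(f"Data {stage} in {dst} after leaving {src}: review {dst} privacy law")
```

Each flagged hop is a point where a different jurisdiction's choice of law and privacy rules may apply.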

The next one is reasonable security, which is, under the law, what would a prudent person do? What is reasonable under the choice of law for that particular country? When you're putting together your own private cloud, in which you may have a federated client base, this ostensibly makes you a cloud service provider (CSP).

Or, in an environment where you are using several CSPs, what are the data integrity disclaimers? The onus is predominantly placed on the owner of the data for the integrity of the data, and after careful crafting of terms and conditions, the CSP basically wants no direct responsibility for maintaining the integrity of that data.

When we talk about who owns the data, there is an interesting concept, and there are a few test cases that are coursing their way through various courts. It’s called the Berne Convention.

In the late 1990s, there were a number of countries that got together and said, "Information is flowing all over the place. We understand copyright protection for works of art and for songs and those types of things, but let’s take it a step further."

In the context of a cloud, could not the employees of an organization be considered authors, and could not the data they produce be considered a work? Therefore, wouldn’t it be covered by the Berne Convention, and therefore covered under standard international copyright laws? This is also something that’s interesting.

Modify policies

The reason that I bring this to your attention is that it is this kind of analysis that you should do with your own legal counsel to make sure that you understand the full scope of what’s required and modify your existing security policies.

The last point is around electronic evidence and eDiscovery. This is interesting. In some cases it can be a dual-edged sword. If I have custody of the data, then it is open under the rules of discovery. They can actually request that I produce that information.

However, if I don’t directly have control of that data, then I don’t have the right, or I don’t have the obligation, to turn it over under eDiscovery. So you have to understand what rules and regulations apply where the data is, and that, in some cases, it could actually work to your advantage.

Gardner: So we've identified some major building blocks for safe and proper cloud, we have identified the concerns that people should have as they go into this. We understand there is lot of detail involved. What are the risks in terms of what we should prioritize? How should we create a triage effect, if you will, in identifying what’s most important from that risk perspective?

Schreider: There are certainly unique risks that are extant to a cloud computing environment. However, one has to understand where that demarcation point is between a current risk register, or threat inventory, for assets that have already been classified and those that are unique to a cloud-computing environment.

Much has been said about uniqueness, but at the end of the day, there are only a handful of truly unique threats. In many cases, they've been reconstituted from what is classically known as the top 20 types of threats and vulnerabilities to affect an organization.

If you have an asset, an application, and data, they're vulnerable. It is the manner or the vector by which they become vulnerable and can be compromised that comes from some idiosyncrasies in a cloud-computing environment.

One of the things that we like to do at HP for our own cloud environment, as well as for our customers, is to avail ourselves of the body of work that has been done through European Network and Information Security Agency (ENISA), the US National Institute of Standards and Technology (NIST), and the Cloud Security Alliance (CSA) in understanding the types of threats that have been vetted internationally and are recognized as the threats that are most likely to occur within our environment.

We're strong believers in qualitative risk assessments and using a Facilitated Risk Assessment Process (FRAP), where we simply want to understand the big picture. NIST has published a great model, a nine-box chart, where you can determine where the risk is to your cloud computing environment. You can use it to rate impact from high to low, and likelihood from high to low as well.

So in a very graphical form, we can present to executives of an organization where we feel we have the greatest threats. You'd have to have several overlays and templates for this, because you're going to have multiple constituencies in an ecosystem for a cloud. So you're going to have different views of this.
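
As an illustration of how the nine-box chart works, here is a minimal sketch that places a few of the threat categories discussed earlier on a 3 x 3 qualitative grid. The likelihood and impact ratings and the banding of the nine cells are assumptions for illustration, not an official NIST or HP mapping.

```python
# Qualitative nine-box risk placement: each threat gets a likelihood and an
# impact rating, and the pair lands it in one of nine cells.
LEVELS = ["low", "moderate", "high"]

def risk_cell(likelihood, impact):
    """Map qualitative ratings to an overall qualitative risk level."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)  # 0..4
    return ["low", "low", "moderate", "high", "high"][score]

# Example threats drawn from the concerns discussed earlier in the chat;
# the ratings themselves are illustrative.
threats = {
    "cloud platform outage":        ("moderate", "high"),
    "data-privacy violation":       ("low",      "high"),
    "lost or stolen mobile device": ("high",     "moderate"),
}

for threat, (likelihood, impact) in threats.items():
    print(f"{threat}: likelihood={likelihood}, impact={impact} "
          f"-> risk={risk_cell(likelihood, impact)}")
```

The output is the kind of high-level view Schreider says can be presented to executives, with separate overlays possible for the data owner's and the custodian's risk profiles.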

Join the next Expert Chat presentation on May 15 on support automation best practices.

Different risk profiles

Your risk profile may be different if you are the custodian, versus the risk profile if you're the owner of the data. This is something that you can very easily put together and present to your executives. It allows you to model the safeguards and controls to protect the cloud ecosystem.

Gardner: We certainly know that there is a great deal of opportunity for cloud models, but unfortunately, there is also significant down side, when things don’t go well. You're exposed. You're branded in front of people. Social media allows people to share issues when they arise. What can we learn from the unfortunate public issues that have cropped up in the past few years that allows us to take steps to prevent that from happening to us?

Schreider: These are all public events. We've all read about these events over the last 16-18 months, and some of them have occurred within just the last 30 days or so. This is not to admonish anybody, but basically to applaud these companies that have come forward in the interest of security. They've shared their postmortem of what worked and what didn’t work.

What goes up can certainly come down. Regardless of the amount of investment that one can put into protecting their cloud computing environment, nobody is immune, whether it’s a significant and pervasive hacking attempt against an organization, where sensitive data is exfiltrated, or whether it is a service-oriented cloud platform that has an outage that prevents people from being able to board a plane.

When an outage happens in your cloud computing environment, it definitely has a reverberation effect. It’s almost a digital quake, because it can affect people from around the world.

One of the things that I mentioned before is that we're very fortunate that we have that opportunity to look at disaster events and breaches of security and study what worked and what didn’t.

I've put together a little model that reanalyzes the storm damage, looking at the types of major events that have occurred. I've looked at the control construct that would exist, or should exist, in a private cloud and the control construct that should exist in a public cloud, and of course in a hybrid cloud, which is the convergence of the two, where we would be able to mix and match those.

If you have a situation where an external threat infiltrates, hacks into, and compromises an application in a private cloud environment, you want to make sure that you have a secure system development lifecycle methodology to ensure that the application is secure and has been tested for all conventional threats and vulnerabilities.

In a public cloud environment, you normally don’t have that same avenue available to you. So you want to make sure that the service provider either presents to you, or performs on your behalf, a web-application security review and an external threat and vulnerability test.

In a cloud environment, where you're grouping many different customers and users together, you have to have a basis to segregate data and operations, so that a problem with one tenant doesn’t affect everybody.

Multi-tenancy strategies

In a private cloud environment, you would set up your security zone and segmentation, but in the public cloud environment, you would have your multi-tenancy strategies in place and you would make sure that you work with that service provider to ensure that they had the right layers of security to protect you in a multi-tenant environment.

Data encryption is critical. One of the things you're going to find is that in a private cloud, it's your responsibility to provide the data encryption.

Most public cloud providers don’t provide data encryption. If they do, it's offered as an add-on service. You end up in a dedicated model as opposed to a shared model, and it's more expensive. But the protection of that data, from the encryption perspective, is generally going to lie with the owner.
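Because that responsibility generally stays with the data owner, a common pattern is to encrypt client-side before anything reaches the public cloud. Below is a minimal sketch using the third-party cryptography package; the upload and download calls are hypothetical placeholders, not a real provider API.

```python
# Minimal client-side encryption sketch; assumes `pip install cryptography`.
# upload_to_csp()/download_from_csp() are hypothetical placeholders for a
# provider SDK -- the point is that only ciphertext ever leaves your side.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this in your own key store, not at the CSP
fernet = Fernet(key)

def encrypt_for_upload(plaintext: bytes) -> bytes:
    return fernet.encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes) -> bytes:
    return fernet.decrypt(ciphertext)

if __name__ == "__main__":
    blob = encrypt_for_upload(b"customer record 42")
    # upload_to_csp("bucket/records/42", blob)   # hypothetical provider call
    print(decrypt_after_download(blob))          # b'customer record 42'
```

Keeping the keys under the owner's control is also what makes crypto-shredding possible later in the data lifecycle.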

The difference with disaster recovery is that in a private cloud, physical assets need to be recovered from a DR perspective, whereas in a public cloud you rely on business continuity arrangements to make sure your business is covered by the CSP.

As you can see, the list goes on. There's a definite correlation with some slight nuances between cloud computing incidents that affect a private cloud versus a public cloud.

Gardner: Tari, we've talked about the ills. We've talked about cloud protection. What about the remediation and the prescription? How can we get on top of this?

Schreider: As we get towards the end and open it up for questions for our experts to answer specific questions for those who have attended, I'll share with you what we do at HP, because we do believe in eating our own dog food.

First and foremost, we understand that the cloud computing environment can be a bit chaotic. It can be very gelatinous. You never really know where your perimeter is. Your perimeter is defined by the mobility devices, and you have many different moving parts.

We're a great believer that you need a structure to bring order to that chaos. So we're very fortunate to have one of the authors of HP’s Cloud Protection Reference Architecture, Jan De Clercq, on with us today. I encourage people to please take advantage of that and ask any architecture questions of him.

But as you can see here, we've clearly defined the types of security that should exist within the access device zone, the types of security that are going to be unique to each service model -- software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) -- and how that interacts with a virtualized environment. Having access to this information is crucial.

Unique perspective

The other thing we also understand is that we have to bring in service providers who have a unique perspective on security. One of those partners that we've chosen to help build our cloud reference architecture with is Symantec.

The next thing that I want to share with you is that it's also an immutable law that the level of investment that you make in protecting your cloud environment should be commensurate with the value of the assets that are being burst or hosted in that cloud environment.

At HP, we work with HP Labs and our Information Technology Assurance practice. We've put together what is now a patent-pending model on how to analyze the security controls, their level of maturity, in contrast to the threat posture of an organization, to be able to arrive at the right layer of investment to protect your environment.

We can look at the value of the assets. We can take a look at your budget. We can also do a what-if analysis. If you're going to have a 10 percent cut in your budget, which security controls can you most likely cut that will have the least amount of impact on your threat posture?
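The kind of what-if analysis described here can be approximated with a simple greedy sketch: given each control's annual cost and an estimated contribution to risk reduction, cut the controls with the least risk reduction per dollar until the savings target is met. The real HP model is patent-pending and far more sophisticated; all names and figures below are invented to show the shape of the question.

```python
# Illustrative what-if sketch: which controls could absorb a 10 percent budget
# cut with the least impact on threat posture? Names and numbers are invented.
controls = [
    # (name, annual_cost, estimated_risk_reduction_score)
    ("24x7 SOC monitoring",        400_000, 90),
    ("Web app pen testing",        120_000, 60),
    ("DLP tooling",                150_000, 45),
    ("Security awareness program",  80_000, 10),
]

def plan_budget_cut(controls, cut_fraction=0.10):
    total = sum(cost for _, cost, _ in controls)
    target = total * cut_fraction
    # Consider cutting controls with the lowest risk reduction per dollar first.
    candidates = sorted(controls, key=lambda c: c[2] / c[1])
    cuts, saved = [], 0
    for name, cost, _score in candidates:
        if saved >= target:
            break
        cuts.append(name)
        saved += cost
    return cuts, saved, target

if __name__ == "__main__":
    cuts, saved, target = plan_budget_cut(controls)
    print(f"Need to save ~${target:,.0f}; proposed cuts: {cuts} (saves ${saved:,.0f})")
```

In practice the scoring would come from the maturity and threat-posture analysis Schreider describes, not from hand-assigned numbers.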

The last point that I want to talk about, before we open it up to the experts, is that we talked a little bit about the architecture, but I really wanted to emphasize the framework. HP is a founding contributor to ITIL and a principal provider of ITSM-type services. We sit on CSA standards bodies and have written a number of chapters. We believe that you need to have a very cohesive protection framework for your cloud computing environment.

We're a big believer in, whether it's cloud or just in security, having an information technology architecture that's defined by layers. What is the business rationale for the cloud and what are we trying to protect? How should it work together functionally? Technically, what types of products and services will we use, and then how will it all be implemented?

We also have a suite of products that we can bring to our cloud computing environment to ensure that we're securing and providing governance, securing applications, and then also trying to detect breaches of security. I've talked about our reference architecture.

Something that's also unique is our P5 Model, where basically we look at the cloud computing controls and we have an abstraction of five characteristics that should be true to ensure that they are deployed correctly.

As I mentioned before, we're either a principal member, contributing member, or founding member of virtually every cloud security standards organization out there. Once again, we can't do it by ourselves, and that's why we have strategic partnerships with the VMwares and Symantecs of the world.

Gardner: Okay. Now, we're going to head over to our experts who are going to take questions.

I'd like to direct the first one to Luis Buezo joining us from Spain. There's a question here about key challenges regarding data lifecycle specifically. How do you view that? What are some of the issues about secure data, even across the data lifecycle?

Key challenges

Luis Buezo: Based on CSA recommendations, we're not only talking about data security related to confidentiality, integrity, and availability; there are other key challenges in the cloud, like the location of the data, to guarantee that the geographical locations used are permitted by regulations.

There's data remanence, to guarantee that data is effectively removed, for example, when moving from one CSP to a new one, as well as data backup and recovery schemes. Don't assume that cloud-based data is backed up by default.

There are also data discovery capabilities to ensure that all data requested by authorities can be retrieved.

Another example is data aggregation and inference issues. Controls need to be implemented to prevent revealing protected information. So there are many issues involved in data lifecycle management.

Gardner: Our next question should go to Jan. The inquiry is about being cloud-ready for dealing with confidential company data. How do you come down on that?

Jan De Clercq: HP's vision on that is that many cloud services today are not always ready for letting organizations store their confidential or important data. That's why we recommend that organizations, before they consider moving data into the cloud, always do a very good risk assessment.

They should make sure that they clearly understand the value of their data, but also understand the risks that can occur to that data in the cloud provider’s environment. Then, based on those three things, they can determine whether they should move their data into the cloud.

We also recommend that consumers get clear insights from the CSP on exactly where their organization's data is stored and processed, and where it travels inside the network environment of the cloud provider.

As a consumer, you need to get a complete view of what's done with your data and how the CSP is protecting it.

Gardner: Okay. Jan, here is another one I'd like to direct to you. What are essential data protection security controls that they should look for from their provider?

Clercq: It’s important that you have security controls in place that protect the entire data lifecycle. By data lifecycle we mean from the moment that the data is created to the moment that the data is destroyed.

Data creation

When data is created, it's important that you have a data classification solution in place and that you apply proper access controls to the data. When the data is stored, you need confidentiality, integrity, and availability protection mechanisms in place. Then, you need to look at things like encryption tools and information rights management tools.

When the data is in use, it's important that you have proper access control in place, so that you can make sure that only authorized people can access the data. When the data is shared, or when it's sent to another environment, it's important that you have things like information rights management or data loss prevention solutions in place.

When the data is archived, it’s important that it is archived in a secured way, meaning that you have proper confidentiality, integrity, and availability protection.

When the data is destroyed, it’s important, as a consumer, that you make sure that the data is really destroyed on the storage systems of your CSP. That’s why you need to look at things like crypto-shredding and other data destruction tools.
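This lifecycle can be expressed as a simple stage-to-controls map that a policy review can check against. The stages follow Jan De Clercq's description above; the specific control names are illustrative, not a prescribed product list.

```python
# Sketch of a data-lifecycle control checklist, following the stages described
# above. Control names are illustrative assumptions.
LIFECYCLE_CONTROLS = {
    "create":  ["data classification", "access control"],
    "store":   ["encryption at rest", "integrity checks", "availability/replication"],
    "use":     ["access control", "activity logging"],
    "share":   ["information rights management", "data loss prevention"],
    "archive": ["encrypted archive", "integrity verification"],
    "destroy": ["crypto-shredding (destroy the keys)", "verified deletion at the CSP"],
}

def missing_controls(stage: str, deployed: set) -> list:
    """Return the controls required for a stage that are not yet deployed."""
    return [c for c in LIFECYCLE_CONTROLS[stage] if c not in deployed]

if __name__ == "__main__":
    deployed = {"data classification", "encryption at rest", "access control"}
    for stage in LIFECYCLE_CONTROLS:
        gaps = missing_controls(stage, deployed)
        print(f"{stage:>8}: {'OK' if not gaps else 'missing ' + ', '.join(gaps)}")
```

A gap report like this is one way to show, per lifecycle stage, where the consumer still depends on the CSP's controls.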

Gardner: Tari, a question for you. How does cloud computing change my risk profile? It's a general subject, but do you really reduce or lose risk control when you start doing cloud?

Schreider: An interesting question, to be sure, because in some cases your risk profile could be vastly improved, and in other cases it could be significantly worsened. If you find yourself no longer in a position to invest in a hardened data center, it may be more prudent for you to move your data to a CSP that already operates carrier-grade, Tier 1 infrastructure, where they have the ability to invest the tens of millions of dollars in a hardened facility that you wouldn’t normally be able to invest yourself.

On the other hand, you may have a scenario where you're using smaller CSPs that don’t necessarily have that same level of rigor. We always recommend, from a strategic perspective, that when you're looking at application deployment, you consider its risk profile, where best to place that application, and how it affects your overall threat posture.

Gardner: Lois, the next question is for you. How can HP help clients get started, as they determine how and when to implement cloud?

Lois Boliek: We offer a full lifecycle of cloud-related services and we can help clients get started on their transition to the cloud, no matter where they are in that process.

We have the Cloud Discovery Workshop. That’s where we help customers in a very interactive work session covering all aspects of cloud considerations, and it results in a high-level strategy and a roadmap for moving forward.

Business/IT alignment

We also offer the Hybrid Delivery Strategy Services. That’s where we drill down into all the necessary components that you need to gain business and IT alignment, and it also results in a well-defined cloud service delivery model.

We also have some fast-start services. One of those is the CloudStart service, where we come in with a pre-integrated architecture to help speed up the deployment of the production-ready private cloud, and we can do that in less than 30 days.

We also offer a Cloud System Enablement service, and in this we can help fast track setting up the initial cloud service catalog development, metering, and reporting.

Gardner: Lois, I have another question here on products and security issues. Does HP have the services to implement security in the cloud?

Boliek: Absolutely. We believe in building security into the cloud environment from the beginning through our architectures and our services. We offer something called HP Cloud Protection Program, and what we have done is extended the cloud service offerings that I've just mentioned by addressing the cloud security threats and vulnerabilities.

We've also integrated a defense-in-depth approach to cloud infrastructure. We address the people, process, policies, and products covered in the P5 Model that Tari mentioned, and this helps clients confidently and securely build out the hybrid cloud environment.

We have service modules that are available, such as the Cloud Protection Workshop. This is for deep-dive discussions on all the security aspects of cloud, and it results in a high-level cloud security strategy and next steps.

We offer the Cloud Protection Roadmap Service, where we can define the specific control recommendations, also based on our P5 Model, and a roadmap that is very customized and specific to our clients’ risk and compliance requirements.

We have a Foundation Service that is also like a fast start, specific to implementing the pre-integrated, hardened cloud infrastructure, and we mitigate the most common cloud security threats and vulnerabilities.

Then, for customers who require very specific custom security, we can do custom design and implementation. All of these services are based on the Cloud Reference Architecture that Jan and Tari mentioned earlier, as well as extensive research that we do ahead of time, before engaging with customers, through our Cloud Protection Research & Development Center.

Gardner: Luis Buezo, a fairly large question, sort of a top-down one I guess. Not all levels of security would be appropriate for all applications or all data in all instances. So what are the security levels in the cloud that we should be aware of that we might be able to then align with the proper requirements for a specific activity?

Open question

Buezo: This is a very open question. If we understand the security level as the real capability to manage different threats or compliance needs, cloud computing has different possible service models, like IaaS, PaaS, or SaaS, and different deployment models -- public, private, community, or hybrid.

Regarding service models, the consumer has more potential risk and less control and flexibility in SaaS models, compared to PaaS and IaaS. But when you go to a PaaS or IaaS, the consumer is responsible for implementing more security controls to achieve the security level that he requires.

Regarding deployment models, when you go to a public cloud, the consumer will be able to contract for the security level already furnished by the provider. If the consumer needs more capability to define specific security levels, he will need to go to community, private, or hybrid models.

My recommendation is that if you're looking to move to the cloud, the approach should be first to define the assets for the cloud deployment and then evaluate them to know how sensitive each asset is. After this exercise, you'll be able to match each asset to potential cloud deployment models, understanding the implications of each one. At this stage, you should have an idea of the security level required to transition to the cloud.
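That classify-then-match step can be sketched as a small decision helper. The sensitivity tiers and mappings below are assumptions made for illustration, not a formal HP or CSA matrix.

```python
# Illustrative helper: match asset sensitivity to candidate deployment models.
# The tiers and mappings are assumptions for the sketch, not formal guidance.
CANDIDATE_MODELS = {
    "public":    {"low"},
    "community": {"low", "moderate"},
    "hybrid":    {"low", "moderate", "high"},
    "private":   {"low", "moderate", "high"},
}

def candidate_deployments(sensitivity: str) -> list:
    """Return deployment models whose assumed tolerance covers the asset."""
    return [m for m, tiers in CANDIDATE_MODELS.items() if sensitivity in tiers]

if __name__ == "__main__":
    for asset, sensitivity in [("marketing site", "low"),
                               ("customer PII store", "high")]:
        print(f"{asset}: {candidate_deployments(sensitivity)}")
```

The point is simply to force the sensitivity question before the deployment-model question, in line with Buezo's recommendation.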

Gardner: Jan De Clercq, our solution architect, next question should go to you, and it’s about CSPs. How can we as an organization and enterprise that consumes cloud services be sure that the CSP’s infrastructure remains secure?

Clercq: It’s very important that, as a consumer during the contract negotiation phase with the CSP, you get complete insight into how the CSP secures its cloud infrastructure, how it protects your data, and how it shields the environments of different customers or tenants inside its cloud.

It’s also important that, as a cloud consumer, you establish very clear service level agreements with your cloud provider, to agree on who does exactly what when it comes to security. This basically boils down to making sure that you know who takes care of things like infrastructure security controls and data protection controls.

This is not only about making sure that these controls are in place, but also about making sure that they are maintained, and maintained using proper security management and operations processes.

A third thing is that you also may want to consider monitoring tools that can cover the CSP infrastructure for checking things like availability of the service and for things like integrated security information and event management.

To check the quality of the CSP security controls, a good resource to get you started here is the questionnaire that’s provided by the CSA. You can download it from their website. It is titled the "Consensus Assessments Initiative Questionnaire."
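One lightweight way to capture the "who does exactly what" agreement described above is a shared-responsibility matrix kept alongside the SLA. The split shown below is a generic illustration and will differ by provider and service model.

```python
# Illustrative shared-responsibility matrix for an SLA appendix.
# The split shown is generic; agree on the real one with your CSP in writing.
RESPONSIBILITIES = {
    # control area: (who operates it, how it is verified)
    "physical data center security":   ("csp",      "CSP attestation, e.g. SSAE 16"),
    "hypervisor / tenant isolation":   ("csp",      "consumer reviews CSP evidence"),
    "guest OS patching":               ("consumer", "consumer vulnerability scans"),
    "application access control":      ("consumer", "consumer audit"),
    "data encryption and key custody": ("consumer", "consumer key-management audit"),
    "security event monitoring":       ("shared",   "integrated SIEM feeds"),
}

def consumer_owned(matrix):
    """List the areas the consumer must staff or co-manage."""
    return [area for area, (owner, _) in matrix.items()
            if owner in ("consumer", "shared")]

if __name__ == "__main__":
    print("Areas the consumer must staff or co-manage:")
    for area in consumer_owned(RESPONSIBILITIES):
        print(" -", area)
```

A matrix like this pairs naturally with the CSA questionnaire, which probes the provider's side of each row.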

Gardner: Tari, it's such a huge question about how to rate your CSP, and unfortunately, we don’t seem to have a rating agency or an insurance handicapper now to rate these on a scale of 1-5 stars. But I still want to get your input on what should I do to determine how good my service provider is when it comes to these security issues?

Incumbent on us

Schreider: I wish we did have a rating system, but unfortunately, it's still incumbent upon us to determine the veracity of the claims of security and continuity of the CSPs.

However, there are actually a number of accepted methods to gauge whether one's CSP is secure. Many organizations have had what's referred to as an attestation. Formerly, most people were familiar with SAS 70, which is now SSAE 16, or you can have an ISO 27000 attestation.

Basically, you have an independent attestation body, typically an auditing firm, that will come in and test the operational efficiency and design of your security program to ensure that whatever you have declared as your control schema, maybe ISO, NIST, CSA, is properly deployed.

However, there is a fairly significant caveat here. These attestations can also be very narrowly scoped, and many of the CSPs will only apply them to a very narrow portion of their infrastructure, maybe not their entire facility, and maybe not even the application that you're a customer of.

Also, we've found that many CSPs and application-as-a-service providers don’t even own their own data centers. Those are actually provided elsewhere, and there may also be other support mechanisms in place. In some cases, you may have to evaluate three attestations just to have a sense of security that you, or the CSP, have the right controls in place.

Gardner: And I suppose in our marketplace, there's also an element of self-regulation, because when things don’t go well, most people become aware of it and they will tend to share that information with the ecosystem that they are in.

Schreider: Absolutely.

Gardner: There's another question I'd like to direct to you, Tari. This is at an operational process level, and they are asking about their security policy manual. If they start to do more cloud activities -- private, public, or hybrid -- should they update or change their security policy manual and a little bit about how?

Schreider: Definitely. As I had mentioned before, one of the things you want to do is make your security policy manual extensible. Just like a cloud is elastic, you want to make sure that your policy manual is elastic as well.

Typically, one of the missing things in a conventional security policy manual is the location of the data. What you'll find is that it covers data classification, the types of assets, and maybe some standards, but it really doesn’t cover the triggering aspects, particularly the transborder data-transfer triggers.

We strongly encourage organizations to add that nuance to make their policy manuals elastic, and to resist creating all new security policies that people have to learn; otherwise you end up with two disparate programs to try to maintain.
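The data-location clause Schreider says is usually missing can be made concrete as a simple transborder check run before a workload or dataset is placed. The classifications and permitted regions below are hypothetical; an elastic policy manual would reference whatever check your organization actually adopts.

```python
# Sketch of a transborder data-placement check that an elastic policy manual
# might reference. Classifications and permitted regions are hypothetical.
PERMITTED_REGIONS = {
    "public":       {"any"},
    "internal":     {"eu-west", "us-east", "us-west"},
    "regulated-eu": {"eu-west"},   # e.g. data that must stay inside the EU
}

def placement_allowed(classification: str, target_region: str) -> bool:
    """True if the policy permits placing this class of data in the region."""
    allowed = PERMITTED_REGIONS.get(classification, set())
    return "any" in allowed or target_region in allowed

if __name__ == "__main__":
    print(placement_allowed("regulated-eu", "us-east"))  # False -> block or escalate
    print(placement_allowed("internal", "eu-west"))      # True
```

Keeping the rule in one place, rather than writing a parallel cloud-only policy, matches the advice to extend the existing manual instead of maintaining two disparate programs.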

Gardner: Well, we'll have to leave it there. I really want to thank our audience for joining us. I hope you found it as insightful and valuable as I did.

And I also thank our main expert guest, Tari Schreider, Chief Architect of HP Technology Consulting and IT Assurance Practice.

I'd furthermore like to thank our three other HP experts, Lois Boliek, World Wide Manager in the HP IT Assurance Program; Jan De Clercq, World Wide IT Solution Architect in the HP IT Assurance Program, and Luis Buezo, HP IT Assurance Program Lead for EMEA.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on best practices for protecting cloud computing implementations and their use.

Thanks again for listening, and come back next time.

Join the next Expert Chat presentation on May 15 on support automation best practices.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on the role of security in moving to the cloud and how sound security practices can make adoption easier. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.
