Wednesday, August 03, 2011

Case Study: MSP InTechnology Improves Network Services Via Automation and Consolidation of Management Systems

Transcript of a BriefingsDirect podcast discussion on how InTechnology uses network management automation to improve delivery and service performance for network and communications services.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on a UK-based managed service provider’s journey to provide better information and services for its network, voice, VoIP, data, and storage customers. Their benefits have come from an alignment of many service management products into an automated lifecycle approach to overall network operations.

We'll hear how InTechnology has implemented a coordinated, end-to-end solution using HP software that actually determines the health of its networks by aligning its tools to ITIL methods. And, by using a system-of-record approach with a configuration management database, InTechnology is better serving its customers with lean resources by leveraging systems over manual processes. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We're here with an operations manager from InTechnology to learn about their choices and outcomes when it comes to better operations and better service for their hundreds of enterprise customers.

Please join me now in welcoming Ed Jackson, Operational System Support Manager at InTechnology. Welcome, Ed.

Ed Jackson: Thanks. Hi.

Gardner: Your organization is a managed service provider (MSP) for both large enterprises and small to medium-sized companies, and you've been facing an awful lot of growth over the past several years. But you have also been dealing with heterogeneity in terms of many different products in place for network operations. It sounds like you've tried to tackle two major things at once: growth and complexity. How has that worked out?

Jackson: In terms of our network growth, we've basically been growing exponentially year over year. In the past four years, we've grown our network about 75 percent. In terms of our product set, we've basically tripled that in size, which obviously leads to major complexity on both our network and how we manage the product lifecycle.

Previously, we didn’t have anything that could scale as well as the systems that we have in place now. We couldn’t hope to manage 8,000 or 9,000 network devices, let alone deliver a product lifecycle from provisioning to decommissioning, which is what we have now.

Gardner: So our audience better understands the hurdles and challenges you've faced, you're providing voice, both VoIP and traditional telephone, and telephony services. You have data, managed Microsoft Exchange, managed servers, and virtual hosting. You're providing storage, backup and restore, and of course a variety of network services. So this is a really full set of different services and a whole lot of infrastructure to support that.

Jackson: Yeah. It's pretty massive in terms of the technologies involved. A lot of them are cutting-edge. We have many partners. And you are right, our suite of cloud services is very diverse and comprises what we believe is the UK’s most complete and "joined-up" set of pay-monthly voice and data services.

Their own pace

In practice what we aim to do is help our customers engage with the cloud at a pace that works for them. First, we provide connectivity to our nationwide network ring – our cloud. Once their estate is connected, they can then cherry-pick services from our broad pay-as-you-go (PAYG) menu.

For example, they might be considering replacing their traditional "tin" PBXs with hosted IP telephony. We can do that and demonstrate massive savings. Next we might overlay our hosted unified communications (UC) suite providing benefits such as "screen sharing," "video calling," and "click-to-dial." Again, we can demonstrate huge savings on planes, trains and automobiles.

Next we might overlay our exciting new hosted call recording package -- Unity Call Recording -- which is perfect if they are in a regulated industry and have a legal requirement to record calls. It’s got some really neat features, including the ability to tag and bookmark calls for easy searching and playback.

While we're doing this, we might also explore the data path. For example our new FlexiStor service provides what we think is the UK’s most straightforward PAYG service designed to manage data by its business "value" and not just as one big homogenous lump of data.

It treats data as critical, important or legacy and applies an appropriate storage process to each ... saving up to 40 percent against traditional data management methods. There’s much more of course, but that gives you a flavor, I hope.

Imagine trying to manage this disparate set of systems. It would be pretty impossible. But due to the HP product set that we have, we've been able to utilize all the integrations and have a fully managed, end-to-end lifecycle of the service, the devices, and the product sets that we have as a company.

Gardner: I have to imagine too that customer service and support is a huge part of what you do, day in and day out. You also have had to manage the help desk and provide automated alerts, fixes, and notifications, so that the manual help desk, which is of course quite costly, doesn’t overwhelm you. Can you address what you've attempted to do and what you have managed to do when it comes to automated support?

Jackson: In terms of our service and support, we've basically grown the network massively, but we haven’t increased any headcount for managing the network. Our 24/7 guys are the same as they were four or five years ago in terms of headcount.

We get on average around 5,000 incidents a month automatically generated from our systems and network devices. Of these incidents, only about 560 are linked to customer-facing interactions using the Service Desk module in the Service Manager application.

Approximately 80 percent of our total incidents are generated automatically. They are either proactively raised, based on things like CPU and memory use on network devices, virtual devices, or even physical servers in our data centers, or reactively raised based on, for example, a device or interface going down.
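As a rough illustration of the two trigger types Jackson describes -- proactive threshold checks and reactive down events feeding the service desk -- here is a minimal sketch. The thresholds, priorities, and function names are illustrative assumptions, not InTechnology's configuration or any HP product's API.

```python
# Hypothetical sketch of automated incident generation; all names and
# thresholds are assumptions for illustration only.

CPU_THRESHOLD = 90.0   # percent; assumed value
MEM_THRESHOLD = 85.0   # percent; assumed value

def raise_incident(device, summary, priority):
    """Stand-in for whatever call files an incident in the service desk."""
    print(f"[P{priority}] {device}: {summary}")

def check_device(device, metrics):
    # Proactive: resource usage crossing a threshold raises an incident.
    if metrics["cpu_pct"] > CPU_THRESHOLD:
        raise_incident(device, f"CPU at {metrics['cpu_pct']}%", priority=3)
    if metrics["mem_pct"] > MEM_THRESHOLD:
        raise_incident(device, f"Memory at {metrics['mem_pct']}%", priority=3)
    # Reactive: a device or interface going down raises a higher-priority incident.
    if not metrics["reachable"]:
        raise_incident(device, "Device unreachable", priority=1)

check_device("edge-router-01", {"cpu_pct": 94.2, "mem_pct": 61.0, "reachable": True})
```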

Massive burden

When you've got like 80 percent of all incidents raised automatically, it takes a massive burden off the 24/7 teams and the customer support guys, who are not spending the majority of their time creating incidents but actually working to resolve them.

Gardner: Let's back it up. Five years ago, when you didn't have any integrated systems and you were dealing with lots of data, perhaps spurious data, what did you think? I know that you're an ITIL shop, and so you had to bring in that service management mindset, but what did you do to bring these products together, or even add more products, without them also becoming unwieldy in terms of management?

Jackson: It was spurred by really bad data that we had in the systems. We couldn't effectively go forward. We couldn't scale anymore. So, we got the guys at HP to come in and design us a solution based on products that we already had, but with full integration, and add in additional products such as HP Asset Manager and Discovery and Dependency Mapping Inventory (DDMI).

With the systems that we already had in place, we utilized mainly HP Service Desk. So we decided to take the bold leap to go to Service Manager, which then gave us the ability to integrate it fully into the Operations Manager product and our Network Node Manager product.

Since we had the initial integrations, we've added extra integrations like the Universal Configuration Management Database (UCMDB), which gives us a massive overview of how the network is progressing and how it's developing. Coupled with this, we've got Release Control, and we've just upgraded to the latest version of Service Manager, 9.2.

So it has given us a huge benefit in terms of process control and how it relates to ITIL. More importantly, one of the main things that we are going for at the moment is payment card industry (PCI) and ISO 27001 compliance.

For any auditor that comes in, we have a documented set of reports that we can give them. That will hopefully help us gain this compliance and maintain it. One of the things about being an MSP is that we can be compliant on behalf of the customer. The customer can have the infrastructure outsourced to us with the compliance policy built in. We can take the headache of compliance away from our customers.

Gardner: Having that full view, and the ability to manage discretely as well, is not only good business, but it sounds like it's an essential ingredient for the way in which you go to market?

Jackson: More and more these days, we have a lot of solicitors and law firms on our books, and we're getting "are you compliant?" as a question before they place business with us. We're finding all across the industry that compliance is a must before any contract is won. So, to keep one step ahead of the game, this is something that we're going to have to achieve and maintain, and the HP product set that we have is key to that.

Gardner: I suppose too that a data flow application like Connect-It 4.1 provides an opportunity to not only pull together disparate products and give that holistic view, but also provides that validation for any audits or compliance issues?

Recently upgraded

Jackson: We recently upgraded Connect-It from 4.1 to 9.3, and with that, we upgraded our Asset Manager system to 9.3. Connect-It is the glue that holds everything together. It's a fantastic application that you can throw pretty much any data at, from a CSV file, to another database, to web services, to emails, and it will formulate it for you. You can do some complex integrations with it. It will give you the data that you want on the other side, and it cleanses and parses, so that you can pass the data on to other systems.
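Jackson's description of Connect-It -- take arbitrary input, cleanse and parse it, pass it on -- is essentially an extract-transform-load pattern. The following is a generic Python sketch of that pattern, not Connect-It's actual scripting interface; the file name and field names are hypothetical.

```python
import csv

def cleanse(row):
    """Normalize one source record: trim whitespace, lowercase the keys."""
    return {k.strip().lower(): v.strip() for k, v in row.items() if v}

def transform(record):
    """Map source fields onto the schema the target system expects."""
    return {
        "asset_tag": record.get("serial", "UNKNOWN"),
        "hostname": record.get("device name", "").lower(),
        "site": record.get("location", ""),
    }

def push_to_target(asset):
    """Stand-in for handing the record to the next system, e.g. via a web service."""
    print("forwarding:", asset)

# Hypothetical input file representing one of the many sources described above.
with open("discovered_devices.csv", newline="") as f:
    for row in csv.DictReader(f):
        push_to_target(transform(cleanse(row)))
```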

From our DDMI system, right through to our Service Manager, then into our Network Node Manager, we now have a full set of solutions that are held together by Connect-It.

We can discover the device on the network. We can then propagate it into Service Manager. We can add lots of financial details to it from other financial systems outside of the HP product set, which are easy to integrate. We can therefore provision the circuit, provision the device, and add it to monitoring automatically, without any human intervention, just by the fact that the device gets shipped to the site.

It gets loaded up with the configuration, and then it's good to go. It's automatically managed right through to the decommissioning stage, or the upgrade stage, where it's replaced by another device. HP systems give us that capability.

Gardner: So these capabilities really do allow you to take on a whole new level of business and service. It sounds like the maintenance of the network, the integrity, and then the automation really helps you go to market in a whole new way than you could have just several years ago.

Jackson: Definitely. One of the key benefits is it gives us a unique calling card for our potential customers. I don’t know of many other MSPs that have such an automated set of technology tools to help them manage the service that they provide to their customers.

Five years ago, this wasn't possible. We had disparate systems and duplicate data held in multiple areas. So it wasn’t possible to have the integration and the level of support that we give our customers now for the new systems and services that we provide.

Gardner: Of course, HP has been engineering more integration into its products, and you have been aggressive in adopting some of the newer versions, which is an important element of that, but I have to imagine that there is also a systems integration function here, or professional services. Have you employed any professional services or relied on HP for that?

Jackson: When we originally decided to take the step to upgrade from Service Desk to Service Manager and to get the network discovery product set in, we used HP’s Professional Services to effectively design the solution and help us implement it.

Within six months, we had Service Desk upgraded to Service Manager. We had an Asset Manager system that was fully integrated with our financials and our stock control. And we also had a network discovery toolset that was inventorying our estate. So we had a full end-to-end solution.

Automatic incidents

Into that, we have helped to develop the Network Operations Management solution into being able to generate automatic incidents. HP Professional Services played a pivotal role in providing us with the kind of solutions that we have now.

Since then, we've taken that further, because we have very knowledgeable in-house guys who really understand the HP systems and services. So we've taken it a bit of a step further, and most of the stuff that we do now, in terms of upgrades and the like, is done in-house.

Gardner: It's a very compelling story. I wonder if we have more than just the show-and-tell here. Do we have any metrics of success? Have you been able to point to faster time to resolution, maintaining service-level agreements (SLAs), or something along those lines, that we could help people appreciate what this does, not only functionally in terms of bringing new services to your customers, but also in terms of how you operate and some important metrics that affect your bottom line?

Jackson: Mean time to restore has come down significantly, by well over 15 percent. As I said, there has been zero increase in headcount across our systems and services. We started off with a few thousand network devices and only three or four different products, in data, storage, networks, and voice. Now we've got 16 different kinds of product sets, with about 8,000 to 9,000 network devices.

In terms of cost savings and increased productivity, this has been huge. Our 24/7 teams and customer support teams are more proactive in using knowledge bases and Level 1 triage. Resolution of incidents by customer support teams and Level 1 engineers has gone up by 25 percent; this enables the Level 3 engineers to concentrate on more complex issues.


If you take Priority 3 and Priority 4 incidents, 70 percent of those are now fixed by Level 1 engineers, which was unheard of five or six years ago. Also, we now have a very good knowledge base in the Service Manager tool that we can use for our Level 1 engineers.

In terms of SLAs, we manage the availability of network devices. It gives us a lot more flexibility in how we give these availability metrics to the customers. Because our business depends on other third-party suppliers, we can manage them and get service credits from them. We've also got a fully documented incident lifecycle. We can tell when there has been downtime on these services and give our suppliers a bit of an ear bashing about it, because we have this information to hand them. We didn’t have that five or six years ago.

Gardner: So, by having event correlation and data to back up your assertions, there's much less finger-pointing. You know exactly who dropped the ball.

Jackson: Exactly. With event correlation, we've reduced our operations browsers down to just meaningful incidents. We filtered our events from over 100,000 a month to fewer than 20,000; many of these are duplicates and are correlated together. Most events are associated with knowledge base articles in Service Manager and contain instructions on escalating or resolving the event, increasingly by a Level 1 engineer.
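A toy sketch of the deduplicate-and-correlate step Jackson outlines, collapsing repeated events from the same device and type into a single incident candidate. This is not the actual correlation engine in NNMi or Operations Manager; the fields and keying are illustrative assumptions.

```python
from collections import defaultdict

def correlate(events):
    """Group raw events by (device, type); a real engine would also window by time."""
    buckets = defaultdict(list)
    for e in events:
        buckets[(e["device"], e["type"])].append(e)
    incidents = []
    for (device, etype), group in buckets.items():
        incidents.append({
            "device": device,
            "type": etype,
            "count": len(group),                       # duplicates rolled up
            "first_seen": min(e["ts"] for e in group),
        })
    return incidents

raw = [
    {"device": "sw-07", "type": "link-down", "ts": 100},
    {"device": "sw-07", "type": "link-down", "ts": 101},  # duplicate
    {"device": "rt-02", "type": "cpu-high", "ts": 102},
]
print(correlate(raw))  # three raw events collapse to two incident candidates
```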

We can also run automatic actions from these events, and we can send the information to the relevant parties, and also raise an incident and send it directly to the correct assignment groups or teams that are involved in looking after that.

Internal SLA

For Priority 1 incidents, where by an internal SLA we have 15 minutes to communicate to the customer, we can now do that within two minutes, because the group that’s been assigned the incident is on the ball straight away and can contact the customer and let them know of the potential or actual problem.

Being able to contact customers within agreed SLAs, and to drive our suppliers to provide better service, is fantastic because of the information that is available in the systems now. It gives us a lot more of a heads-up on what’s happening around the network.

Gardner: And now that you have this integrated, lifecycle, end-to-end approach in place, and you've got your UCMDB, is there now, in hindsight, an opportunity to do some analytics, perhaps even refine what your requirements are, and therefore cut your total cost at some level?

Jackson: We're building up a lot of information, taking it from our financial systems and placing it into our UCMDB and CMDB databases, to give us the breakdown of cost per device and cost per month, because now this information is available.

We have a couple of data centers. One of our biggest costs is power usage. Now, by collecting the power information using NNMi, we can break down how much our power is costing per rack, in terms of how many amps have been used over a set period of time, say a week or a month. Previously, we had no way of determining where our power usage was going or how much it was actually costing us per rack or per unit.
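The per-rack calculation reduces to simple arithmetic once the amp readings are collected. A worked example, with every number assumed for illustration (UK supply is roughly 230 V):

```python
amps_drawn = 16        # average amps measured for the rack over the period (assumed)
volts = 230            # assumed UK supply voltage
hours = 24 * 7         # one week
price_per_kwh = 0.12   # assumed tariff, in pounds

kw = amps_drawn * volts / 1000     # 3.68 kW continuous draw
energy_kwh = kw * hours            # ~618 kWh for the week
cost = energy_kwh * price_per_kwh  # ~£74 for that rack that week
print(f"{energy_kwh:.0f} kWh, costing about £{cost:.2f}")
```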

It's given us a massive information boost, and we can really utilize the information, especially in UCMDB, and because it’s so flexible, we can tailor it to do pretty much whatever we want. From this performance information, we can also give our customers extra value reports and statistics that we can charge as a value added managed solution for them.

Gardner: For the benefit of our listeners, now that you've gone through this process, are there any lessons learned, anything you could relay in terms of, "If I had to do this again, I might do blank?" What would you offer to those who would now be testing the waters and embarking on such a journey?

Jackson: One of the main things is to have a clear goal in mind before you start. Plan everything, get it all written down, and have the processes examined before you start implementing, because it’s fairly hard to re-engineer if you decide that one of the solutions or processes you've implemented isn’t going to work. Because of the integration of all the systems, you'll tend to find that reverse engineering them is a difficult task.

As a company, we decided to go for a clean start and basically said we'd filter all the data, take the data that we actually required, and start from scratch. We found that by doing it that way, we didn’t get any bad data in there. All the data that we have now has pretty much been cleansed and enriched by the information that we can get from our automated systems, but also by utilizing the extra data that people have put in.

Gardner: Thanks so much. You've been listening to a sponsored podcast discussion on a UK-based managed service provider, InTechnology, and its journey to provide better information and services for its voice, data, and storage customers. They've employed an automated lifecycle approach, and it has benefited them on a number of levels.

Thanks to Ed Jackson, the Operational System Support Manager at InTechnology. Ed, we really appreciated your input.

Jackson: Okay. No problem.

Gardner: And this is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast discussion on how InTechnology uses network management automation to improve delivery and service performance for network and communications services. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Friday, July 29, 2011

Discover Case Study: How IHG Has Deployed and Benefited from Increased App Testing

Transcript of a BriefingsDirect podcast from the HP Discover conference on how InterContinental Hotels Group has reduced time and cost in app development.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast from the recent HP Discover 2011 conference in Las Vegas. We're here to explore some major enterprise IT solution trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions.

Our latest user case study focuses on InterContinental Hotels Group (IHG). We're going to be looking at what they are doing around automation and unification of applications, development and deployment, specifically looking at software as a service (SaaS) as a benefit, and how unification helps bring together performance, but also reduce complexity costs over time.

To help guide us through this use-case, we're here with Brooks Solomon, Manager of Test Automation at InterContinental. Welcome to BriefingsDirect.

Brooks Solomon: Thank you, Dana. Pleasure to be here.

Gardner: Give me a sense of the scope and size of your operations?

Solomon: InterContinental Hotels Group is the largest hotel company by number of rooms. We have 645,000 rooms, 4,400 hotels, with seven different brands, and the largest and first hotel loyalty program with 58 million members.

The majority of the hotels, 3,500 or so, are in the US and the others are distributed around the world. We're going to be expanding to China more and more over the next few years.

Gardner: How about in terms of numbers of applications and scope and size of your IT operations?

Solomon: I couldn’t list the number of applications we have. The majority of the revenue comes from four major applications that are consumer-facing.

Gardner: What were the high-level takeaways from your presentation at Discover?

Solomon: We use HP’s testing software all the way from Quality Center (QC), through Quick Test Professional (QTP), through LoadRunner, up into the Business Availability Center (BAC) tool. I've talked about how we get to the process of BAC and then how BAC benefits us from a global perspective.

Gardner: Let’s get into that a little bit. Obviously, reservations, rewards, customer-facing web, and self-service type of functionality are super-important to you. Give us a sense of what you're doing with those sorts of apps and how critical they really are for you?

Solomon: The apps that we generate support the majority of IHG’s revenue, and if they're not customer-facing, they're call-center applications. If you call 1-800 Holiday Inn, that kind of thing, you'll get a reservation agent somewhere around the world, wherever you are. Then, that agent will tap into another application that we developed to generate the reservation from there.

Gardner: A lot of test and development organizations have been early adopters of SaaS and cloud functionality. What’s the breakdown with your use of products? Do you have an on-premise portion or percentage in SaaS? How does that break down for you?

SaaS monitors

Solomon: We use SaaS, and we have a private use of SaaS. Going back to our call-center applications, there are local centers around the world, and we've installed SaaS monitors at those facilities. Not only do we get a sense of what the agents' response time and availability are from their centers, we also get a full global view from customers and of what their experience is, wherever they may be.

Gardner: In terms of your developers, your testing, and your application lifecycle, to what degree are the tools that you're using SaaS-based?

Solomon: Right now the only SaaS-based tool we have is the BAC. The other HP tools that we use are in-house.

Gardner: When you move toward lifecycle benefits, do you have any sense of what that’s done for you, either at a cost and efficiency level within IT, or most importantly, at the customer level in terms of satisfaction and trust?

Solomon: Without the automated suite of tools that we have, we couldn’t deliver our products in a timely fashion and with quality. We have an aggressive release schedule, distributing new products, new applications, or bug fixes every two weeks. Without the automated regression suite that we have, we couldn’t get those out in time. Having those tools in place gives us approximately a 75 percent reduction in cost.

Gardner: Having gone through this process, to move into that level of efficiency, do you have any 20/20 hindsight things that you may have done differently with that knowledge or that you might pass along as advice to our listeners?

Solomon: I would say just to define the core functionality of your applications and automate those first. Then, as new enhancements come along and there are business-critical transactions, I would include those in your automated suite of tools and tests.

Gardner: How about your thoughts for the future? Do you have any purchases or acquisitions or tools you're looking to adopt in the future? Do you have a roadmap?

Solomon: We're coming off of a mainframe reservation system, and we're converting that into a service-oriented architecture (SOA). So, we’ve recently purchased HP Service Test. We hope that acquisition will help us automate all of our services coming off the mainframe. We're going to do that on a gradual basis, so we'll be automating those as they come online.

Gardner: Very good. We've been talking about application lifecycle management and productivity. Our guest has been Brooks Solomon, Manager of Test Automation at InterContinental Hotels Group, based in Atlanta.

Thanks to our audience for joining this special BriefingsDirect podcast, coming to you from the HP Discover 2011 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of user experience discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from the HP Discover conference on how InterContinental Hotels Group has reduced time and cost in app development. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Thursday, July 28, 2011

Standards Effort Points to Automation Via Common Markup Language O-ACEML for Improved IT Compliance, Security

Transcript of a BriefingsDirect podcast from The Open Group Conference on the new Open Automated Compliance Expert Markup Language and how it can save companies time and money.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference in Austin, Texas, the week of July 18, 2011.

We’re going to examine the Open Automated Compliance Expert Markup Language (O-ACEML), a newly created standard that helps enterprises automate security compliance across their systems in a consistent and cost-saving manner.

O-ACEML helps to achieve compliance with applicable regulations while also achieving major cost savings. From the compliance audit viewpoint, auditors can carry out similarly consistent and more capable audits in less time.

Here to help us understand O-ACEML and managing automated security compliance issues and how the standard is evolving are our guests. We’re here with Jim Hietala, Vice President of Security at The Open Group. Welcome back, Jim.

Jim Hietala: Thanks, Dana. Glad to be with you.

Gardner: We’re also here with Shawn Mullen, a Power Software Security Architect at IBM. Welcome to the show, Shawn.

Shawn Mullen: Thank you.

Gardner: Let’s start by looking at why this is an issue. Why do O-ACEML at all? I assume that with security being such a hot topic, and with the ways in which organizations grapple with regulations and compliance also being very hot, this has now become an issue that needs some standardization.

Let me throw this out to both of you. Why are we doing this at all and what are the problems that we need to solve with O-ACEML?

Hietala: One of the things you've seen in the last 10 or 12 years, since the compliance regulations have really come to the fore, is that the more regulation there is, the more specific requirements are put down, and the more challenging it is for organizations to manage. Their IT infrastructure needs to be in compliance with whatever regulations impact them, and the cost of doing so becomes significant.

So, anything that can be done to help automate, to drive out cost, and maybe make organizations more effective in complying with the regulations that affect them -- whether it’s PCI, HIPAA, or whatever -- there's a lot of benefit to large IT organizations in doing that. That’s really what drove us to look at adopting a standard in this area.

Gardner: Jim, just for those folks who are coming in as fresh, are we talking about IT security equipment and the compliance around that, or is it about the process of how you do security, or both? What are the boundaries around this effort and what it focuses on?

Manual process

Hietala: It’s both. It’s enabling the compliance of IT devices, specifically around security constraints and security configuration settings, and to some extent, the process. If you look at how people did compliance or managed compliance without a standard like this, without automation, it tended to be a manual process of setting configuration settings and auditors manually checking on settings. O-ACEML goes to the heart of trying to automate that process and drive some cost out of the equation.

Gardner: Shawn Mullen, how do you see this in terms of the need? What are the trends or environment that necessitate in this?

Mullen: I agree with Jim. This has been going on a while, and we’re seeing it in both classes of customers. On the high end, we would go from customer to customer, and they would have their own hardening scripts, their own view of what should be hardened. It might conflict with what the compliance organization wanted as far as the settings. This is a standard way of capturing what the compliance organization wants, and it is also easy to author and to change.

If your own corporate security requirements are more stringent, you can easily change the ACEML configuration so that it satisfies your more stringent corporate compliance or security policy, as well as satisfying the regulatory compliance organization, in a way that is easy to monitor, report on, and see.

In addition, on the low end, small businesses don’t have the expertise to know how to configure their systems. Quite frankly, they don’t want to be security experts. Here is an easy way to apply an XML file to harden their systems as they need to be hardened to meet compliance, or just to follow good security practice.

Gardner: One of the things that's jumped out at me as I’ve looked into this is the rapid improvement in terms of cost or return on investment (ROI), almost to the point of a no-brainer. Help me understand why it is so expensive and inefficient now, when it comes to security equipment audits and regulatory compliance, and what improvement this might therefore bring.

Mullen: One of the things that we're seeing in the industry is server consolidation. If you have these hundreds, or in large organizations thousands, of systems and you have to manually configure them, it becomes a very daunting task. Because of that, you get one shot at doing it, and then the monitoring is even more difficult. ACEML is a way of authoring your security policy, whether to meet compliance requirements or your own security policy, and pushing that out.

This allows you to have a single XML and push it onto heterogeneous platforms. Everything is configured securely and consistently and it gives you a very easy way to get the tooling to monitor those systems, so they are configured correctly today. You're checking them weekly or daily to ensure that they remain in that desired state.

Gardner: So it's important not only to automate, but to be inclusive and comprehensive in the way you do that, or you're back to manual processes for at least a significant portion, and that might then undermine your compliance. Is that how it works?

Mullen: We had a very interesting presentation here at The Open Group Conference yesterday, and I’ll let Jim provide some of the details on that, but customers are finding that the best way they can lower the cost of meeting compliance is through automation. If you can automate any part of that compliance process, that’s going to save you time and money. If you can replace manual effort with automation, it greatly reduces your cost.

Gardner: Shawn, do we have any sense in the market what the current costs are, even for something that was as well-known as Sarbanes-Oxley? How impressive, or unfortunately intimidating, are some of these costs?

Cost of compliance

Mullen: There was a very good study presented yesterday. The average annual cost for an organization to be compliant is $3 million. What was also interesting was that the cost of being non-compliant, as they called it, was $9 million.

Hietala: The figures that Shawn was referencing come out of a study by the Ponemon Institute. Larry Ponemon does lots of studies around security, risk, and compliance costs. He authors an annual data breach study, widely quoted in the security industry, that gets at the average cost of data breaches for companies.

In the numbers that were presented yesterday, he recently studied 46 very large companies, looking at their cost to be in compliance with the relevant regulations. It's about $3.5 million a year, and over $9 million for companies that weren't compliant, which suggests that companies that are actively managing toward compliance are probably a little more efficient than those that aren't.

What O-ACEML has the opportunity to do for those companies that are in compliance is help drive that $3.5 million down to something much less than that by automating and taking manual labor out of process.

Gardner: So it's a seemingly very worthwhile effort. How do we get to where we are now, Jim, with the standard and where do we need to go? What's the level of maturity with this?

Hietala: It's relatively new. It was just published 60 days ago by The Open Group. The actual specification is on The Open Group website. It's downloadable, and we would encourage both system vendors and platform vendors, as well as folks in the security management space or the IT-GRC space, to check it out, take a look at it, and think about adopting it as a way to exchange compliance configuration information with platforms.

We want to encourage adoption by as broad a set of vendors as we can, and we think that having more adoption by the industry, will help make this more available so that end-users can take advantage of it.

Gardner: Back to you, Shawn. Now that we've determined that we're in the process of creating this, perhaps you could set the stage for how it works. What takes place with ACEML? People are familiar with markup languages, but how does this now come to bear on this problem around compliance, automation, and security?

Mullen: Let's take a single rule, and we'll use a simple case like minimum password length. In PCI, the minimum password length, for example, is seven. Under Sarbanes-Oxley, which relies on COBiT, the password length would be eight.

But with O-ACEML XML, it's very easy to author a rule, and there are three segments to it. The first segment is very human-understandable: you would put something like "password length equals seven." You can add descriptive text with it, and that's all you have to author.
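To make that first segment concrete, here is a hypothetical rendering of an authored rule. The element names are illustrative assumptions only; the actual schema is defined in The Open Group's published O-ACEML specification.

```xml
<!-- Hypothetical illustration; not the literal O-ACEML schema. -->
<rule>
  <directive>minimum password length equals seven</directive>
  <description>PCI DSS requires passwords of at least seven characters.</description>
</rule>
```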

Actionable command

When that is pushed down onto a platform or system that's O-ACEML aware, it's able to take that simple ACEML word or directive and map it into an actionable command relevant to that system. When it finds the mapping to the actionable command, it writes it back into the XML. That completes the second phase of the rule. It then executes that command, either to implement the setting or to check the setting.

The result of the command is then written back into the XML. So now the XML for a particular rule has the first part, the authored high-level directive from the compliance organization; how that particular system mapped it into a command; and the result of executing that command, in either a setting or a checking format.

Now we have all of the artifacts we need to ensure that the system is configured correctly and to generate audit reports. So when the auditor comes in, we can say, "This is exactly how any particular system is configured, and we know it to be consistent, because we can point to any particular system, get the O-ACEML XML, see all the artifacts, and generate reports from that."
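Continuing the hypothetical example above, the same rule after the endpoint has processed it might carry all three segments. The mapped command shown is one common way of setting minimum password length on AIX, but both it and the element names are assumptions for illustration, not the published schema.

```xml
<!-- Hypothetical illustration of the completed rule; element names
     and the mapped command are illustrative assumptions. -->
<rule>
  <directive>minimum password length equals seven</directive>
  <mapped-command>chsec -f /etc/security/user -s default -a minlen=7</mapped-command>
  <result status="compliant">minlen=7</result>
</rule>
```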

Gardner: Maybe to give a sense of how this works, we can look at a before-and-after scenario. Perhaps you could describe how things are done now, the current standard operating procedure, and then what would be the case after someone implements a mature O-ACEML deployment.

Mullen: There are similar tools to this, but they don't all operate exactly the same way. I'll use the example of BigFix. For a particular system, they offer a way for you to write your own scripts. You would basically be doing what you would do at the endpoint, but doing it at the BigFix central console. You would write scripts to do the checking. You would be doing all of this work for each of your different platforms, because every one is a little bit different.

Then you could use BigFix to push the scripts down. They would run, and hopefully you wrote your scripts correctly. You would get results back. With ACEML, by contrast, you just push the high-level directive down to the system; it understands ACEML and knows the proper way to do the checking.

What's interesting about ACEML, and this is one of our differences from, for example, the Security Content Automation Protocol (SCAP), is that instead of the vendor saying, "This is how we do it," with a repository of how the checking goes and everything like that, you let the endpoint make the determination. The endpoint is aware of what OS it is and what version it is.

For example, with IBM's UNIX, which is AIX, you might check passwords at a different level. We've increased our password strength and done a lot of security enhancements around that. If you push the ACEML to a newer level of AIX, it will do the checking slightly differently. So it really relies on the platform, the device itself, to understand ACEML and to understand how best to do its checking.

We see with small businesses and even some of the larger corporations that they're maintaining their own scripts. They're doing everything manually. They're logging on to a system and running some of those scripts. Or, they're not running scripts at all, but are manually making all of these settings.

It's an extremely long and burdensome process, when you start considering that there are hundreds or thousands of these systems. There are different OSs. You have to find experts for your Linux systems or your HP-UX or AIX. You have to have all those different talents and skills in these different areas, and again, the process is quite lengthy.

Gardner: Jim Hietala, it sounds like we are focusing on servers to begin with, but I imagine that this could be extended to network devices, other endpoints, other infrastructure. What's the potential universe of applicability here?

Different classes

Hietala: The way to think about it is the universe of IT devices that are in scope for these various compliance regulations. If you think about PCI DSS, it defines pretty tightly what your cardholder data environment consists of. In terms of O-ACEML, it could be networking devices, servers, storage equipment, or any sort of IT device. Broadly speaking, it could apply to lots of different classes of computing devices.

Gardner: Back to you, Shawn. You mentioned the AIX environment. Could you explain the beginning approach that you’ve had with IBM Compliance Expert, or ICE, that might give us a clue as to how well this could work when applied even more broadly? How did that heritage in ICE develop, and what does it tell us about what we can expect with O-ACEML?

Mullen: We’ve had ICE, the AIX Compliance Expert, using XML for a number of years now. It's been broadly used by a lot of our customers, not only to secure AIX but to secure the virtualization environment, in particular the Virtual I/O Server. So we use it for that.

One of the things ACEML brings is lessons we learned from doing our own proprietary XML. It also brings lessons we learned from looking at other compliance XML, like XCCDF. One of the things we put in there was a remediation element.

For example, PCI says that your password length should be seven; COBiT says your password length should be eight. With the XML, you can blend multiple compliance requirements into a single policy, choosing the more secure setting, so that both compliance organizations -- or three, or more -- are all satisfied properly, and apply it to a single system.
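For a numeric setting such as password length, the blending Mullen describes reduces to taking the most stringent value. A one-line sketch, using the figures from his example:

```python
requirements = {"PCI": 7, "COBiT": 8}   # minimum password lengths per regime
blended = max(requirements.values())    # 8 satisfies both policies
print(f"blended minimum password length: {blended}")
```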

One of the things that we're hoping vendors will gravitate toward is the ability to have a central console configuring and monitoring their IT environment. It just has to push out a single XML file. It doesn’t have to push out a special XML for Linux versus AIX versus a network device. It can push out that ACEML file to all of the devices. It's a single descriptive XML, and each device, in turn, knows how to map it to its own particular platform's security configuration.

Gardner: Jim Hietala, it sounds as if the low-hanging fruit here would be the compliance and automation benefit, but it also sounds as if this is comprehensive. It's targeted at a very large set of the devices and equipment in the IT infrastructure. This could become a way of propagating new security policies, protocols, approaches, even standards, down the line. Is that part of the vision here -- to be able to offer a means by which an automated propagation of future security changes could easily take place?

Hietala: Absolutely, and it goes beyond just the compliance regulations that are put on us by government organizations, to defining best practices and security policies in the organization, and then using this as a mechanism to push those out to your environment and to ensure that they are being followed and implemented on all the devices in the IT environment.

So, it definitely goes beyond just managing compliance with these external regulations, to doing a better job of implementing the ideal security configuration settings across your environment.

Gardner: And because this is being done in an open environment like The Open Group, and because it's inclusive of any folks or vendors or suppliers who want to take part, it sounds as if this could also cross the chasm between an enterprise IT set and a consumer, mobile, or external third-party provider set.

Is it also a possibility that we’re going beyond heterogeneity, when it comes to different platforms, and perhaps crossing boundaries into different segments of IT, given what we're seeing with the "consumerization" of IT now? I'll ask this to either of you, or both of you.

Moving to the cloud

Hietala: I'll make a quick comment and then turn it over to Shawn. Definitely, if you think about how this sort of a standard might apply towards services that are built in somebody’s cloud, you could see using this as a way to both set configuration settings and check on the status of configuration settings and instances of machines that are running in a cloud environment. Shawn, maybe you want to expand on that?

Mullen: It's interesting that you brought this up, because this is the exact conversation we had earlier today in one of the plenary sessions. They were talking about moving your IT out into the cloud. One of the issues, aside from just the security, was how do you prove that you are meeting these compliance requirements?

ACEML is a way to reach into the cloud, find your particular system, and bring back a report that you can present to your auditor. Even though you don’t own the system -- it's not in the data center here in the next office; it's off in the cloud somewhere -- you can bring back all the artifacts necessary to prove to the auditor that you are meeting the regulatory requirements.

Gardner: Jim, how do folks take further steps to gather more information or get involved? Obviously, this would probably be of interest to enterprises as well as suppliers, vendors, and professional services organizations. What are the next steps? Where can they go to get some information? What should they do to become involved?

Hietala: The standard specification is up on our website. You can go to the "Publications" tab on our website and do a search for O-ACEML, and you should find the actual technical standard document. Then, you can get involved directly in the Security Forum by joining The Open Group. As the standard evolves, and as we do more with it, we certainly want more members involved in helping to guide its progress over time.

Gardner: Thoughts from you, Shawn, on that same getting involved question?

Mullen: That’s a perfect way to start. We do want to invite different compliance organizations, everybody from the electrical power grid -- they have their own view of security -- to ISO, to the payment card industry. For the electrical power grid standard, for example -- and ISO is the same way -- what ACEML helps them with is that they don’t need to understand how Linux does it or how AIX does it. They don’t need to have that deep understanding.

In fact, the way ISO describes it in their PDF around password settings, it basically says: use good password settings. It doesn’t go into any depth beyond that. The way we architected and designed O-ACEML, you can just say, "I want good password settings," and it will default to what we decided. What we agreed collectively, as an international standard in The Open Group, was that good password hygiene means you change your password every six months, the password carries at least a certain number of characters, and there is a non-alphanumeric character.

It removes the burden on these different compliance groups of being security experts and lets them just use ACEML and the default settings that The Open Group came up with.

We want to reach out to those groups and show them the benefits of publishing some of their security standards in O-ACEML. Beyond that, we'll work with them to have that standard up, and hopefully they can publish it on their website, or maybe we can publish it on The Open Group website.

Next milestones

Gardner: Well, great. We’ve been learning more about the Open Automated Compliance Expert Markup Language, more commonly known as O-ACEML. And we’ve been seeing how it can help assure compliance with applicable regulations across different types of equipment, and how it has the opportunity to provide more security across different domains, be that cloud, on-premises, or even partner networks, while also achieving major cost savings. We’ve been learning how to get started on this and what the maturity timeline is.

Jim Hietala, what would be the next milestone? What should people expect next in terms of how this is being rolled out?

Hietala: You'll see more from us in terms of adoption of the standard. We’re already looking at case studies and so forth to describe, in terms that everyone can understand, what benefits organizations are seeing from using O-ACEML. Given the environment we’re in today, we’re reading about security breaches and hacktivism every day in the newspapers.

I think we can expect to see more regulation and more frequent revisions of regulations and standards affecting IT organizations and their security, which really makes it imperative to engineer your IT environment in such a way that you can accommodate those changes as they are brought to your organization, do so effectively, and at the least cost. Those are really the kinds of things that O-ACEML has targeted, and I think there is a lot of benefit to organizations in using it.

Gardner: Shawn, one more question for you as a follow-up to what Jim said: not only should we expect more regulations, but we’ll see them coming from different governments and different strata of government -- state, local, perhaps federal. For a multinational organization, this could be a very complex undertaking, so I'm curious as to whether O-ACEML could also help when it comes to managing multiple regulations across multiple jurisdictions for larger organizations.

Mullen: That was the goal when we came up with O-ACEML. Anybody can author it, and again, if a single system falls under the purview of multiple compliance requirements, we can blend those together, and that single system can satisfy all of them.

It’s an international standard, and we want it to be used by multiple compliance organizations. And compliance is a good thing. It’s just good IT governance. It will save companies money in the long run, as we saw with these statistics. The goal is to lower the cost of being compliant, so you get good IT governance, just at a lower cost.

Gardner: Thanks. This sponsored podcast is coming to you in conjunction with The Open Group Conference in Austin, Texas, in the week of July 18, 2011. Thanks to both our guests. Jim Hietala, the Vice President of Security at The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: And also Shawn Mullen, Power Software Security Architect at IBM. Thank you, Shawn.

Mullen: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast from The Open Group Conference on the new Open Automated Compliance Expert Markup Language and how it can save companies time and money. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.


Wednesday, July 27, 2011

Industry Moves to Fill Gap for Building Trusted Supply Chain Technology Accreditation

Transcript of a BriefingsDirect podcast from The Open Group Conference on The Open Group Trusted Technology Forum and setting standards for security and reliability.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion in conjunction with The Open Group Conference in Austin, Texas, the week of July 18, 2011.

We've assembled a distinguished panel to update us on The Open Group Trusted Technology Forum, also known as the OTTF, and an accreditation process to help technology acquirers and buyers safely conduct global procurement and supply chain commerce. [Disclosure: The Open Group is a Sponsor of BriefingsDirect podcasts.]

We'll examine how the security risk for many companies and organizations has only grown, even as these companies form essential partnerships and integral supplier relationships. So, how can all the players in a technology ecosystem gain assurances that the other participants are adhering to best practices and taking the proper precautions?

Here to help us better understand how established standard best practices and an associated accreditation approach can help make supply chains stronger and safer is our panel.

We're here with Dave Lounsbury, the Chief Technical Officer at The Open Group. Welcome back, Dave.

Dave Lounsbury: Hello Dana. How are you?

Gardner: Great. We are also here with Steve Lipner, Senior Director of Security Engineering Strategy in the Trustworthy Computing Security at Microsoft. Welcome back, Steve.

Steve Lipner: Hi, Dana. Glad to be here.

Gardner: We're here also with Joshua Brickman, Director of the Federal Certification Program Office at CA Technologies. Welcome, Joshua.

Joshua Brickman: Thanks for having me.

Gardner: And, we're here too with Andras Szakal. He's the Vice President and CTO of IBM’s Federal Software Group. Welcome back, Andras.

Andras Szakal: Thank you very much, Dana. I appreciate it.

Gardner: Dave, let's start with you. We've heard so much lately about "hacktivism," break-ins, and people being compromised. These are some very prominent big companies, both public and private. How important is it that we start to engage more with things like the OTTF?

No backup plan

Lounsbury: Dana, a great quote coming out of this week’s conference was that we have moved the entire world’s economy to being dependent on the Internet, without a backup plan. Anyone who looks at the world economy will see, not only are we dependent on it for exchange of value in many cases, but even information about how our daily lives are run, traffic, health information, and things like that.

It's becoming increasingly vitally important that we understand all the aspects of what it means to have trust in the chain of components that deliver that connectivity to us, not just as technologists, but as people who live in the world.

Gardner: Steve Lipner, your thoughts on how this problem seems to be only getting worse?

Lipner: Well, the attackers are becoming more determined and more visible across the Internet ecosystem. Vendors have stepped up to improve the security of their product offerings, but customers are concerned. A lot of what we're doing in The Open Group and in the OTTF is about trying to give them additional confidence of what vendors are doing, as well as inform vendors what they should be doing.

Gardner: Joshua Brickman, this is obviously a big topic and a very large and complex area. From your perspective, what is it that the OTTF is good at? What is it focused on? What should we be looking to it for in terms of benefit in this overall security issue?

Brickman: One of the things that I really like about this group is that you have all of the leaders, everybody who is important in this space, working together with one common goal.

Today, we had a discussion where one of the things we were thinking about is whether there's a 100 percent fail-safe solution to cybersecurity. There really isn't. There is just a bar that you can set, and the question is how much do you want to make the attackers spend before they can get over that bar. What we're going to try to do is establish that level, and working together, I feel very encouraged that we are getting there so far.

Gardner: Andras, we are not just trying to set the bar, but we're also trying to enforce, or at least have clarity into, what other players in an ecosystem are doing. So that accreditation process seems to be essential.

Szakal: We're in the process of developing a specification, and ultimately an accreditation program, that will validate suppliers and providers against that standard.

It's focused on building trust into a technology provider organization through this accreditation program, facilitated through one of several different delivery mechanisms that we are working on. We're looking for this to become a global program, with global partners, as we move forward.

Gardner: It seems as if almost anyone is a potential target, and when someone decides to target you, you do seem to suffer. We've seen things with Booz Allen, RSA, and consumer organizations like Sony. Is this something that almost everyone needs to be more focused on? Are we at the point now where there is no such thing as turning back, Dave Lounsbury?

Global effort

Lounsbury: I think there is, and we have talked about this before. Any electronic or information system now is really built on components and software that are delivered from all around the globe. We have software that’s developed in one continent, hardware that’s developed in another, integrated in a third, and used globally.

So, we really do need to have the kinds of global standards and engagement that Andras has referred to, so that there is that one bar for all to clear in order to be considered as a provider of trusted components.

Gardner: As we've seen, any chain has a weak link, and hackers, cyber criminals, and state-sponsored organizations will look for those weak links. That's really where we need to focus.

Lounsbury: I would agree with that. In fact, some of the other outcomes of this week’s conference have been the change in these attacks, from just nuisance attacks, to ones that are focused on monetization of cyber crimes and exfiltration of data. So the spectrum of threats is increasing a lot. More sophisticated attackers are looking for narrower and narrower attack vectors each time. So we really do need to look across the spectrum of how this IT technology gets produced in order to address it.

Gardner: Steve Lipner, it certainly seems that the technology supply chain is essential. If there is weakness there, then it's difficult for the people who deploy those technologies to cover their bases. It seems that focusing on the technology providers, and the ecosystems that support them, is a necessary first step toward delivering larger value to buyers, whether public or private.

Lipner: The tagline we have used for The Open Group TTF is "Build with Integrity, Buy with Confidence." We certainly understand that customers want to have confidence in the hardware and software of the IT products that they buy. We believe that it’s up to the suppliers, working together with other members of the IT community, to identify best practices and then articulate them, so that organizations up and down the supply chain will know what they ought to be doing to ensure that customer confidence.

Gardner: Let's take a step back and get a sense of where this process that you are all involved with stands. I know you're all on working groups and in other ways involved in moving this forward, but it's been about six months now since the OTTF was initially formed, and there was a white paper to explain it.

Perhaps one of you will volunteer to give us a sense of the state of affairs. Then, we'd also like to hear an update about what's been going on here in Austin. Anyone?

Szakal: Well, as the chair, I have the responsibility of keeping track of our milestones, so I'll take that one.

We completed the white paper earlier this year, in the first quarter. The white paper was visionary in nature, and it was designed to help our constituents understand the goals of the OTTF.

However, in order to actually make this a normative specification and design a program, around which you would have conformance and be able to measure suppliers’ conformity to that specification, we have to develop a specification with normative language.

First draft

We're finishing that up as we speak and we are going to have a first draft here within the next month. We're looking to have that entire specification go through company review in the fourth quarter of this year.

Simultaneously, we'll be working on the accreditation policy, conformance criteria, and evidence requirements necessary to actually have an accreditation program, while continuing to liaise with other evaluation schemes that are interested in partnering with us. In a global, international environment, that's very important, because more than one of these regimes exists, and we will have to coexist and partner with them.

Over the next year, we'll have completed the accreditation program and have begun testing of the process, probably having to make some adjustments along the way. We're looking at sometime within the first half of 2012 for having a completed program to begin ramping up.

Gardner: Is there an update on the public sector's, or in the U.S., the federal government’s, role in this? Are they active? Are they leading? How would you characterize the public role or where you would like to see that go?

Szakal: The forum itself continues to liaise with the government and all of our constituents. As you know, we have several government members that are part of the TTF, and they are just as important as any of the other members. We continue to provide updates to many of the governments that we are working with globally to ensure they understand the goals of the TTF and how they can provide value synergistically with what we are doing, as we would to them.

Gardner: I'll throw this back out to the panel. How about the activities this week at the conference? What progress or insights can you point to from that?

Brickman: We've been meeting for the first couple of days and we have made tremendous progress on wrapping up our framework and getting it ready for the first review.

We've also been meeting with several government officials. I can't say who they are, but what's been good about it is that they're very positive about the work we're doing; they support it and want to continue this discussion.

It's very much a partnership. It doesn't feel like just an industry-led project, because we have participation from folks who could very much be the consumers of this initiative.

Gardner: Clearly, there are a lot of stakeholders around the world, across both the public and private domains.

Dave Lounsbury, what's possible? What would we gain if this is done correctly? How would we tangibly measure improvements? I know that's hard with security. It's hard to point out what doesn't happen, which is usually the result of proper planning. But how would you characterize the value of doing this all correctly, say, a year or two from now?

Awareness of security

Lounsbury: One of the trends we'll see is that people are increasingly going to be making decisions about what technology to produce and who to partner with, based on more awareness of security.

A very clear possible outcome is that there will be a set of simple guidelines, ones that can be implemented by a broad spectrum of vendors, where a consumer can look and say, "These folks have followed good practices. They have baked secure engineering, secure design, and secure supply chain processes into their products, and therefore I am more comfortable in dealing with them as a partner."

Of course, what that means is that not only do you end up with more confidence in your supply chain and the components coming through that supply chain, but it also takes a little bit of work off your plate. You don't have to invest as much in evaluating your vendors, because you can rely on commonly available and widely understood best practices.

From the vendor perspective, it’s helpful because we're already seeing places where a company, like a financial services company, will go to a vendor and say, "We need to evaluate you. Here’s our checklist." Of course, the vendor would have to deal with many different checklists in order to close the business, and this will give them some common starting point.

Of course, everybody is going to customize and build on top of what that minimum bar is, depending on what kind of business they're in. But at least it gives everybody a common starting point, a common reference point, some common vocabulary for how they are going to talk about how they do those assessments and make those purchasing decisions.

Gardner: Steve Lipner, do you think that this is going to find its way into a lot of RFPs, beginning a sales process, looking to have a major checkbox around these issues? Is that sort of how you see this unfolding?

Lipner: If we achieve the sort of success that we are aiming for and anticipating, you'll see requirements for the TTF, not only in RFPs, but also potentially in government policy documents around the world, basically aiming to increase the trust of broad collections of products that countries and companies use.

Gardner: Joshua Brickman, I have to imagine that this is a living type of activity that you never really finish. There's always something new to be done, a type of threat that's evolving that needs to be reacted to. Would the TTF over time take on a larger role? Do you see it expanding into a larger set of requirements, even as it adjusts to the contemporary landscape?

Brickman: That’s possible. I think that we are going to try to get something achievable out there in a timeframe that’s useful and see what sticks.

One of the things that will happen is that as companies start to go out and test this, as with any other standard, the 1.0 standard will evolve to something that will become more germane, and as Steve said, will hopefully be adopted worldwide.

Agile and useful

It's absolutely possible. It could grow. I don't think anybody wants it to become a behemoth. We want it to be agile, useful, and certainly something readable and achievable, not just for multinational, billion-dollar companies, but also for companies that are just out there trying to sell their piece of the pie into the space. That's ultimately the goal of all of us, to make sure that this is a reasonable achievement.

Lounsbury: Dana, I'd like to expand on what Joshua just said. This is another thing that has come out of our meetings this week. We've heard a number of times that governments, of course, feel the need to protect their infrastructure and their economies, but they also realize that, because of the rapid evolution of technology and of security threats, it's hard for them to keep up. Regulation is not really the right vehicle.

There really is a strong preference. The U.S. strategy on this is to let industry take the lead. One of the reasons for that is the fact that industry can evolve, in fact must evolve, at the pace of the commercial marketplace. Otherwise, they wouldn’t be in business.

So, we really do want to get that first stake in the ground and get this working, as Joshua said. But there is some expectation that, over time, the industry will drive the evolution of security practices and security policies, like the ones OTTF is developing, at the pace of the commercial market, so that governments won't have to impose the kind of regulation that may not keep up.

Gardner: Andras, any thoughts from your perspective on this ability to keep up in terms of market forces? How do you see the dynamic nature of this being able to be proactive instead of reactive?

Szakal: One of our goals is to ensure that the specification itself and its best practices are updated periodically, potentially yearly, to include new techniques and the application of new technologies, and to ensure that providers are implementing the best practices for development engineering, secure engineering, and supply chain integrity.

It's going to be very important for us to continue to evolve these best practices over a period of time and not allow them to fall into a state of static disrepair.

I'm very enthusiastic, because many of the members are very much in agreement that this is something that needs to happen in order to raise the bar for the industry, help the entire industry adopt these practices, and move forward in our journey to secure our critical infrastructure.

Gardner: Given that this has the potential of being a fairly rapidly evolving standard that may start appearing in RFPs and be impactful for real-world business success, how should enterprises get involved from the buy side? How should suppliers get involved from the sell side, given that this is seemingly a market-driven, private-enterprise-driven activity?

I'll throw this out to the crowd. What's the responsibility from the buyers and the sellers to keep this active and to keep themselves up-to-date?

Lounsbury: Let me take the first stab at this. The reason we've been able to make the progress we have is that we've got the expertise in security from all of these major corporations and government agencies participating in the TTF. The best way to maintain that currency and maintain that drive is for people who have a problem on the buy side, or expertise on either side, to come in and participate.

Hands-on awareness

You've got the hands-on awareness of the market, and bringing that in, adding that knowledge of what is needed to the specification, and helping move its evolution along is absolutely the best thing to do.

That’s our steady state, and of course the way to get started on that is to go and look at the materials. The white paper is out there. I expect we will be doing snapshots of early versions of this that would be available, so people can take a look at those. Or, come to an Open Group Conference and learn about what we are doing.

Gardner: Anyone else have a reaction to that? I'm curious. Given that we are looking to the private sector and market forces to be the drivers of this, will they also be the drivers in terms of enforcement? Is this voluntary? One would hope that market forces reward those who seek accreditation and demonstrate adherence to the standard, and that those who don't would suffer. Or is there potential for more teeth and more enforcement? Again, I'll throw this out to the panel at large.

Szakal: As vendors, we would like to see minimal regulation; that's simply the nature of the beast. In order for us to conduct our business and lower the cost of market entry, I think that's important.

I think it's important that we provide leadership within the industry to ensure that we're following the best practices to ensure the integrity of the products that we provide. It's through that industry leadership that we will avoid potentially damaging regulations across different regional environments.

We certainly wouldn't want to see different regulations pop up in different places globally. That makes technology insertion very messy for us. We're hoping that by actually getting engaged and providing some self-regulation, we won't see additional government or international regulation.

Lipner: One of the things that my experience has taught me is that customers are very aware these days of security, product integrity, and the importance of suppliers paying attention to those issues. Having a robust program like the TTF and the certifications that it envisions will give customers confidence, and they will pay attention to that. That will change their behavior in the market even without formal regulations.

Gardner: Joshua Brickman, any thoughts on the benefits of self-regulation? If it doesn't work at first, is it self-correcting? Would a couple of highly publicized incidents, and corporations that suffer for not regulating themselves properly, right that ship, so to speak?

Brickman: First of all, industry setting the standard is an idea that has been thrown around for a while, and I think it's great to see us finally doing it in this area, because we know our stuff the best.

But as far as an incident indicating that it's not working, I don't think so. We're going to try to set up a standard whereby we're providing public information about what our products do and what we do as far as best practices. At the end of the day, the acquiring agency is going to have to make decisions, and they're going to make intelligent decisions, based upon looking at folks that choose to go through this and folks that choose not to go through it.

It will continue

The bad news is going to continue to come out. The only thing buyers will be able to do is look to the companies that are the experts in this to try to help them with that, and they are going to get some of that from the companies that go through these evaluations. There's no question about it.

At the end of the day, this accreditation program is going to shake out the products and companies that really do follow best practices for secure engineering and supply chain best practices.

Gardner: What should we expect next? As we heard, there has been a lot of activity here in Austin at the conference. We've got that white paper. We're working towards more mature definitions and approaching certification and accreditation types of activities. What's next? What milestone should we look to? Andras, this is for you.

Szakal: Around November, we're going to be going through company review of the specification and we'll be publishing that in the fourth quarter.

We'll also be liaising with our government and international partners during that time and we'll also be looking forward to several upcoming conferences within The Open Group where we conduct those activities. We're going to solicit some of our partners to be speaking during those events on our behalf.

As we move into 2012, we'll be working on the accreditation program, specifically the conformance criteria and the accreditation policy, and liaising again with some of our international partners on this particular issue. Hopefully we will, if all things go well and according to plan, come out of 2012 with a viable program.

Gardner: Dave Lounsbury, any further thoughts about next steps, what people should be looking for, or even where they should go for more information?

Lounsbury: Andras has covered it well. Of course, you can always learn more by going to www.opengroup.org and looking on our website for information about the OTTF. You can find drafts of all the documents that have been made public so far, and there will be our white paper and, of course, more information about how to become involved.

Gardner: Very good. We've been getting an update about The Open Group Trusted Technology Forum, OTTF, and seeing how this can have a major impact from a private-sector perspective, and perhaps head off issues of trust and clarity in a complex, evolving technology ecosystem.

I'd like to thank our guests. We've been joined by Dave Lounsbury, Chief Technical Officer at The Open Group. Thank you, sir.

Lounsbury: Thank you, Dana.

Gardner: Steve Lipner, the Senior Director of Security Engineering Strategy in the Trustworthy Computing Security Group at Microsoft. Thank you, Steve.

Lipner: Thanks, Dana.

Gardner: Joshua Brickman, who is the Director of the Federal Certification Program Office in CA Technologies, has also joined us. Thank you.

Brickman: I enjoyed it very much.

Gardner: And Andras Szakal, Vice President and CTO of IBM’s Federal Software Group. Thank you, sir.

Szakal: It's my pleasure. Thank you very much, Dana.

Gardner: This discussion has come to you as a sponsored podcast in conjunction with The Open Group Conference in Austin, Texas. We are here the week of July 18, 2011. I want to thank our listeners as well.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast from The Open Group Conference on The Open Group Trusted Technology Forum and setting standards for security and reliability. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
