
Wednesday, February 22, 2017

How Development and Management of Modern Applications Benefits from Data-Driven Continuous Intelligence

Transcript of a discussion on how modern applications are different, and what data and insight are needed to make them more robust, agile and responsive.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Sumo Logic.

Dana Gardner: Welcome to the next edition of BriefingsDirect. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator.

Today, more than ever, how a company's applications perform equates with how the company itself performs and is perceived. From airlines to retail, from finding cabs to gaming, how the applications work deeply impacts how the business processes and business outcomes work.

We’ll now explore how new levels of insight and intelligence into what really goes on underneath the covers of modern applications ensure that apps are built, deployed, and operated properly.

A new breed of continuous intelligence emerges by gaining data from systems infrastructure logs -- either on-premises or in the cloud -- and then cross-referencing that with intrinsic business metrics information.
Access the Webinar on Gaining Operational Visibility into AWS.
We’re here with an executive from Sumo Logic to learn how modern applications are different, what's needed to make them robust and agile, and how the right mix of data, metrics and machine learning provides the means to make and keep apps operating better than ever.

With that, please join me in welcoming our guest, Ramin Sayar, President and CEO of Sumo Logic. Welcome to BriefingsDirect, Ramin.

Ramin Sayar: Thank you very much, Dana. I appreciate it.

Gardner: There’s no doubt that the apps make the company, but what is it about modern applications that makes them so difficult to really know? How is that different from the applications we were using 10 years ago?

Sayar: You hit it on the head a little bit earlier. This notion of always-on, always-available, always-accessible applications -- delivered either through rich web and mobile interfaces or through traditional mechanisms served up on laptops, other access points, and point-of-sale systems -- is driving a next wave of technology architecture to support these apps.

These modern apps are around a modern stack, and so they’re using new platform services that are created by public-cloud providers, they’re using new development processes such as agile or continuous delivery, and they’re expected to constantly be learning and iterating so they can improve not only the user experience -- but the business outcomes.

Gardner: Of course, developers and business leaders are under pressure, more than ever before, to put new apps out more quickly, and to then update and refine them on a continuous basis. So this is a never-ending process.

User experience

Sayar: You’re spot on. The obvious benefit of always-on is centered on the rich user interaction and user experience. So, while a lot of the conversation around modern apps tends to focus on the technology and the components, there are actually fundamental challenges in the process of how these new apps are built and managed on an ongoing basis, and in what that implies for security. A lot of times, those two aspects are left out when people are discussing modern apps.

Gardner: That's right. We’re talking so much about DevOps these days, but in the same breath, we’re talking about SecOps -- security and operations. They’re really joined at the hip.

Sayar: Yes, they’re starting to blend. The technology decisions around public cloud, Docker and containers, microservices, and APIs are no longer led only by developers or DevOps teams. Those teams are heavily influenced by, and partnering with, the SecOps and security teams and CISOs, because the data is distributed. There now needs to be better visibility and instrumentation, not just for the access logs, but for the business process and a holistic view of the service and service-level agreements (SLAs).

Gardner: What’s different from say 10 years ago? Distributed used to mean that I had, under my own data-center roof, an application that would be drawing from a database, using an application server, perhaps a couple of services, but mostly all under my control. Now, it’s much more complex, with many more moving parts.

Sayar: We like to look at the evolution of these modern apps. For example, a lot of our customers have traditional monolithic apps that follow the more traditional waterfall approach for iterating and release. Often, those are run on bare-metal physical servers, or possibly virtual machines (VMs). They are simple, three-tier web apps.

We see one of two things happening. The first is a need to replace the front end of those apps -- we refer to those as brownfield. They start to change from waterfall to agile and they take on more of an N-tier feel, but it's really concentrated on the front end; web properties are a good example. They start to componentize pieces of their apps, either on VMs or in private clouds, and that's often good for existing types of workloads.

The other big trend is this new way of building apps, what we call greenfield workloads, versus the brownfield workloads, and those take a fundamentally different approach.

Often it's centered on new technology: a stack built entirely on microservices, an API-first development methodology, modern container technologies such as Docker, Mesosphere, and CoreOS, and public-cloud infrastructure and services from Amazon Web Services (AWS) or Microsoft Azure. As a result, the technology decisions made there require different skill sets, and teams have to come together to deliver on the DevOps and SecOps processes that we just mentioned.

Gardner: Ramin, it’s important to point out that we’re not just talking about public-facing business-to-consumer (B2C) apps, not that those aren't important, but we’re also talking about all those very important business-to-business (B2B) and business-to-employee (B2E) apps. I can't tell you how frustrating it is when you get on the phone with somebody and they say, “Well, I’ll help you, but my app is down,” or the data isn’t available. So this is not just for the public facing apps, it's all apps, right?

It's a data problem

Sayar: Absolutely. Regardless of whether you're building these apps for consumers, for mid-market and small and medium businesses (SMBs), or for the enterprise, what we see from our customers is that they all have a similar challenge: they're trying to deal with the volume, the velocity, and the variety of the data around these new architectures and how to get their hands around it. At the end of the day, it becomes a data problem, not just a process or technology problem.

Gardner: Let's talk about the challenges then. If we have many moving parts, if we need to do things faster, if we need to consider the development lifecycle and processes as well as ongoing security, if we’re dealing with outside third-party cloud providers, where do we go to find the common thread of insight, even though we have more complexity across more organizational boundaries?

Sayar: From a Sumo Logic perspective, we’re trying to provide full-stack visibility: not only from your code and tools like GitHub or Jenkins, but all the way through the components of your code, to API calls, to what your deployment tools are doing in terms of provisioning and performance.

We spend a lot of effort integrating with the various DevOps tool-chain vendors, as well as providing a holistic view of what users are doing in terms of access to those applications and services. We know who checked in which code, and which branch and which build created potential issues for performance, latency, or an outage. So we give you that 360-degree view by providing that full-stack set of capabilities.

Gardner: So, the more information the better, no matter where in the process, no matter where in the lifecycle. But then, that adds its own level of complexity. I wonder is this a fire-hose approach or boiling-the-ocean approach? How do you make that manageable and then actionable?

Sayar: We’ve invested quite a bit of our intellectual property (IP) not only in providing integration with these various sources of data, but also in the machine learning and algorithms, so that we can take advantage of an architecture that is truly cloud-native, multitenant, fast, and simple.

So, unlike others that are out there and available for you, Sumo Logic's architecture is truly cloud native and multitenant, but it's centered on the principle of near real-time data streaming.

As the data comes in, our data-streaming engine allows developers, IT ops administrators, sys admins, and security professionals to have their own view, coarse-grained or fine-grained, through the role-based access controls we have in the system, and to leverage the same data for different purposes -- versus having to wait for someone to create a dashboard, create a view, or grant access to a system when something breaks.

Gardner: That’s interesting. Having been in the industry long enough, I remember when logs basically meant batch. You'd get a log dump, and then you would do something with it. That would generate a report, many times with manual steps involved. So what's the big step to going to streaming? Why is that an essential part of making this so actionable?

Sayar: It’s driven by the architectures and the applications. It's no longer acceptable to look at samples of data that span 5 or 15 minutes. You need real-time data, at sub-second or millisecond latency, to understand causality -- to understand when you’re facing a potential threat, risk, or security concern, versus code-quality issues that are causing performance outages and therefore business impact.

The old way -- deploy code, then hope and pray that you'd find a problem only when a user complained -- is no longer acceptable. You lose business and credibility, and at the end of the day, there’s no real way to hold developers, operations folks, or security folks accountable with the legacy tools and process approach.
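
To make that contrast concrete, here is a minimal Python sketch -- an illustration only, not Sumo Logic's implementation -- of evaluating each log event as it streams in, rather than waiting for a periodic batch report. The "level" field and the 5 percent threshold are assumptions made for the example.

```python
import time
from collections import deque

# Illustrative streaming check: evaluate every event on arrival instead of
# waiting for a batch dump. Field name and threshold are assumptions.
ERROR_RATE_THRESHOLD = 0.05
WINDOW = deque(maxlen=1000)          # sliding window of the most recent events

def alert(message: str) -> None:
    # In practice this would page someone; here we just print with a timestamp.
    print(time.strftime("%H:%M:%S"), "ALERT:", message)

def on_event(event: dict) -> None:
    """Called for every incoming log event, at sub-second latency."""
    WINDOW.append(event)
    errors = sum(1 for e in WINDOW if e.get("level") == "ERROR")
    if len(WINDOW) == WINDOW.maxlen and errors / len(WINDOW) > ERROR_RATE_THRESHOLD:
        alert(f"error rate {errors / len(WINDOW):.1%} over the last {len(WINDOW)} events")
```

The point of the sketch is simply that the decision is made per event as it arrives, rather than after a 5- or 15-minute sample has been collected and reported on.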

Center of the business

Those expectations have changed, because of the consumerization of IT and the fact that apps are the center of the business, as we’ve talked about. What we really do is provide a simple way to analyze the metadata coming in, and very simple access -- through APIs or through our user interfaces, based on your role -- so issues can be addressed proactively.

Conceptually, there’s this notion of wartime and peacetime as we’re building and delivering our service. We look at the problems that users -- customers of Sumo Logic and internally here at Sumo Logic -- are used to and then we break that down into this lifecycle -- centered on this concept of peacetime and wartime.

Peacetime is when nothing is wrong, but you want to stay ahead of issues and you want to be able to proactively assess the health of your service, your application, your operational level agreements, your SLAs, and be notified when something is trending the wrong way.

Then, there's this notion of wartime, and wartime is all hands on deck. Instead of being alerted 15 minutes or an hour after an outage has happened or a security risk or threat has been discovered, the real-time data-streaming engine is notifying people instantly: you're getting PagerDuty alerts, you're getting Slack notifications. It's no longer the traditional helpdesk notification process where people are getting on bridge lines.
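
For readers who want to see what such a notification hook looks like, here is a minimal sketch of posting a wartime alert to a Slack incoming webhook. The webhook URL is a placeholder you would generate in your own Slack workspace, and this is not Sumo Logic's integration code.

```python
import json
import urllib.request

# Minimal sketch of pushing a "wartime" alert to a Slack incoming webhook.
# The URL below is a placeholder; the {"text": ...} payload is Slack's
# standard incoming-webhook format.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_slack(message: str) -> None:
    payload = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack replies "ok" on success

# Example (requires a real webhook URL):
# notify_slack(":rotating_light: Checkout error rate exceeded SLA -- all hands on deck")
```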

Because the teams are often distributed, and responsibility and ownership for identifying an issue in wartime are shared, we're enabling new ways of collaborating by leveraging integrations with things like Slack and PagerDuty through the real-time platform we've built.

So, the always-on application expectations that customers and consumers have, have now been transformed to always-on available development and security resources to be able to address problems proactively.

Gardner: It sounds like we're able to not only take the data and information in real time from the applications to understand what’s going on with the applications, but we can take that same information and start applying it to other business metrics, other business environmental impacts that then give us an even greater insight into how to manage the business and the processes. Am I overstating that or is that where we are heading here?

Sayar: That’s exactly right. The essence of what we provide is a single platform and service that leverages machine logs and time-series data, and that eliminates a lot of the complexity of traditional processes and tools. No longer do you need to do “swivel-chair” correlation across multiple UIs, tools, and products. No longer do you have to wait for the helpdesk person to notify you. We're trying to provide instant knowledge and collaboration through the real-time data-streaming platform we've built, to bring teams together rather than divide them.

Gardner: That sounds terrific if I'm the IT guy or gal, but why should this be of interest to somebody higher up in the organization, at a business process, even at a C-table level? What is it about continuous intelligence that can not only help apps run on time and well, but help my business run on time and well?

Need for agility

Sayar: We talked a little bit about the whole need for agility. From a business point of view, the line-of-business folks associated with any of these greenfield projects or apps want to increase the cycle times of application delivery. They want measurable results for application or web changes, so they can see whether their web properties have increased or decreased user satisfaction or, at the end of the day, business revenue.

So, we're able to help the developers, the DevOps teams, and ultimately, line of business deliver on the speed and agility needs for these new modes. We do that through a single comprehensive platform, as I mentioned.

At the same time, what’s interesting here is that no longer is security an afterthought. No longer is security in the back room trying to figure out when a threat or an attack has happened. Security has a seat at the table in a lot of boardrooms, and more importantly, in a lot of strategic initiatives for enterprise companies today.

At the same time we're helping with agility, we're also helping with prevention. And so a lot of our customers often start with the security teams that are looking for a new way to be able to inspect this volume of data that’s coming in -- not at the infrastructure level or only the end-user level -- but at the application and code level. What we're really able to do, as I mentioned earlier, is provide a unifying approach to bring these disparate teams together.
Download the State of Modern Applications in AWS Report.
Gardner: And yet individuals can extract the intelligence view that best suits what their needs are in that moment.

Sayar: Yes. And ultimately what we're able to do is improve customer experience, increase revenue-generating services, increase the efficiency and agility of delivering quality code and therefore quality applications, and lastly, improve collaboration and communication.

Gardner: I’d really like to hear some real world examples of how this works, but before we go there, I’m still interested in the how. As to this idea of machine learning, we're hearing an awful lot today about bots, artificial intelligence (AI), and machine learning. Parse this out a bit for me. What is it that you're using machine learning  for when it comes to this volume and variety in understanding apps and making that useable in the context of a business metric of some kind?

Sayar: This is an interesting topic, because of a lot of noise in the market around big data, machine learning, and advanced analytics. Since Sumo Logic was started six years ago, we've built this platform to ensure not only that we have best-in-class security and encryption capabilities, but that it is centered on the fundamental purpose of democratizing analytics -- making it simpler for more than just a subset of folks to get access to information relevant to their roles and responsibilities, whether they're on security, ops, or development teams.

To answer your question a little bit more succinctly, our platform is predicated on multiple levels of machine learning and analytics capabilities. Starting at the lowest level, something that we refer to as LogReduce is meant to separate the signal from the noise. Ultimately, it helps a lot of our users and customers reduce mean time to identification by upwards of 90 percent, because they're not searching the irrelevant data. They're searching the relevant data -- the infrequent or previously unknown events -- rather than what’s constantly occurring in their environment.

In doing so, it’s not just about mean time to identification, but also about how quickly we're able to respond and repair. We've seen customers using LogReduce cut mean time to resolution by upwards of 50 percent.
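
As a rough illustration of the idea behind that kind of signal-versus-noise separation -- a toy sketch, not LogReduce itself -- structurally similar log lines can be collapsed into patterns by masking their variable parts, so that the rare patterns stand out:

```python
import re
from collections import Counter

# Toy log reduction: mask variable tokens (IPs, hex ids, numbers) so that
# structurally identical messages collapse into one pattern, then surface
# the patterns that occur only rarely. Not Sumo Logic's actual algorithm.
VARIABLE = re.compile(r"\d+\.\d+\.\d+\.\d+|0x[0-9a-f]+|\d+")

def to_pattern(line):
    return VARIABLE.sub("*", line)

def rare_patterns(lines, max_count=1):
    counts = Counter(to_pattern(line) for line in lines)
    return [(pattern, n) for pattern, n in counts.items() if n <= max_count]

logs = [
    "GET /cart 200 in 12ms from 10.0.0.1",
    "GET /cart 200 in 9ms from 10.0.0.2",
    "GET /cart 200 in 15ms from 10.0.0.7",
    "payment handler panic at 0x7ffde0 for order 4182",
]
for pattern, count in rare_patterns(logs):
    print(count, pattern)   # only the panic line survives; the GETs collapse
```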

Predictive capabilities

Our core analytics, at the lowest level, helps solve for operational metrics and value. Then, we start to become less reactive: when you've had an outage or a security threat, you start to leverage some of our other, predictive capabilities in the stack.

For example, I mentioned this concept of peacetime and wartime. In peacetime, you're looking at changes over time as you've deployed code and/or applications to various geographies and locations. A lot of times, developers and ops folks who use Sumo want to use the log compare or outlier and predictor operators in our machine-learning capabilities to compare differences between branches of code, and to relate code quality to the performance and availability of the service and app.

With a click of a button, we allow them to take a window of events and metrics for the last hour, day, week, or month, compare it to other time slices of data, and see how much better or worse it is. This is before deploying to production. Once they're looking at production, we allow them to use predictive analytics to spot anomalies and abnormal behavior and get more proactive.
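
Here is a simplified numeric illustration of that kind of time-slice comparison -- made-up numbers, and not the actual operators -- flagging a current window that deviates sharply from its baseline:

```python
from statistics import mean, stdev

# Toy version of comparing a current time slice against historical slices
# and flagging an outlier. The thresholds and counts are invented.
def is_outlier(current, history, sigmas=3.0):
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigmas * sd

errors_per_hour_last_week = [41, 38, 45, 40, 43, 39, 44]   # baseline slices
errors_this_hour = 97                                      # after the new deploy

if is_outlier(errors_this_hour, errors_per_hour_last_week):
    print("Error volume deviates sharply from the baseline -- investigate the new build")
```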

So, reactive, to proactive, all the way to predictive is the philosophy that we've been trying to build in terms of our analytics stack and capabilities.

Gardner: How are some actual customers using this and what are they getting back for their investment?

Sayar: We have customers that span retail and e-commerce, high-tech, media, entertainment, travel, and insurance. We're well north of 1,200 unique paying customers, and they span anyone from Airbnb, Anheuser-Busch, Adobe, Metadata, Marriott, Twitter, Telstra, Xora -- modern companies as well as traditional companies.

What do they all have in common? Often, what we see is a digital transformation project or initiative. They either have to build greenfield or brownfield apps and they need a new approach and a new service, and that's where they start leveraging Sumo Logic.

Second, what we see is that it’s not always a digital transformation; it's often a cost-reduction and/or consolidation project. Consolidation could be of tools or of infrastructure and data centers, or it could be migration to co-los or public-cloud infrastructures.

The nice thing about Sumo Logic is that we can connect anything from your top-of-rack switch, to your discrete storage arrays, to network devices, to operating systems and middleware, through to your content-delivery network (CDN) providers and your public-cloud infrastructures.

During a migration or consolidation project, we’re able to help them compare performance and availability, the SLAs associated with those, as well as differences in how infrastructure services are delivered to developers or users.

So whether it's agility-driven or cost-driven, Sumo Logic is very relevant for all these customers that are spanning the data-center infrastructure consolidation to new workload projects that they may be building in private-cloud or public-cloud endpoints.

Gardner: Ramin, how about a couple of concrete examples of what you were just referring to?

Cloud migration

Sayar: One good example is in the media space or media and entertainment space, for example, Hearst Media. They, like a lot of our other customers, were undergoing a digital-transformation project and a cloud-migration project. They were moving about 36 apps to AWS and they needed a single platform that provided machine-learning analytics to be able to recognize and quickly identify performance issues prior to making the migration and updates to any of the apps rolling over to AWS. They were able to really improve cycle times, as well as efficiency, with respect to identifying and resolving issues fast.

Another example would be JetBlue. We do a lot in the travel space. JetBlue is another AWS and cloud customer. They provide a lot of in-flight entertainment to their customers. They wanted to be able to look at service quality against the revenue model for the in-flight entertainment system -- to ascertain which movies are being watched, what the quality of service is, whether it's being degraded, and whether customers are being charged more than once because of any type of service outage. That’s how they're using Sumo Logic to better assess and manage customer experience. It's not too dissimilar from Alaska Airlines or others that are also providing in-flight notification and wireless types of services.

The last one is someone that we're all pretty familiar with and that’s Airbnb. We're seeing a fundamental disruption in the travel space and how we reserve hotels or apartments or homes, and Airbnb has led the charge, like Uber in the transportation space. In their case, they're taking a lot of credit-card and payment-processing information. They're using Sumo Logic for payment-card industry (PCI) audit and security, as well as operational visibility in terms of their websites and presence.

Gardner: It’s interesting. Not only are you giving them benefits along insight lines, but it sounds to me like you're giving them a green light to go ahead and experiment and then learn very quickly whether that experiment worked or not, so that they can refine it. That’s so important in the digital business and agility drive these days.

Sayar: Absolutely. And if I were to think of another interesting example, Anheuser-Busch is another one of our customers. In this case, the CISO wanted to have a new approach to security and not one that was centered on guarding the data and access to the data, but providing a single platform for all constituents within Anheuser-Busch, whether security teams, operations teams, developers, or support teams.

We did a pilot for them, and as they're modernizing a lot of their apps, as they start to look at the next generation of security analytics, the adoption of Sumo started to become instant inside AB InBev. Now, they're looking at not just their existing real estate of infrastructure and apps for all these teams, but they're going to connect it to future projects such as the Connected Path, so they can understand what the yield is from each pour in a particular keg in a location and figure out whether that’s optimized or when they can replace the keg.

So, you're going from a reactive approach for security and processes around deployment and operations to next-gen connected Internet of Things (IoT) and devices to understand business performance and yield. That's a great example of an innovative company doing something unique and different with Sumo Logic.

Gardner: So, what happens as these companies modernize and they start to avail themselves of more public-cloud infrastructure services, ultimately more-and-more of their apps are going to be of, by, and for somebody else’s public cloud? Where do you fit in that scenario?

Data source and location

Sayar: Whether you’re running on-prem, in co-los, through CDN providers like Akamai, on AWS, Azure, or Heroku, or on SaaS platforms, you're renting a single platform that can manage and ingest all that data for you. Interestingly enough, about half of our customers’ workloads run on-premises and half run in the cloud.

We’re agnostic to where the data is or where their applications or workloads reside. The benefit we provide is the single ubiquitous platform for managing the data streams that are coming in from devices, from applications, from infrastructure, from mobile to you, in a simple, real-time way through a multitenant cloud service.

Gardner: This reminds me of what I heard 10 or 15 years ago about business intelligence (BI) -- drawing data, analyzing it, getting close to being proactive in its ability to help the organization. How is continuous intelligence different, or even better, and something that would replace what we refer to as BI?

Sayar: The issue that we faced with the first generation of BI was that it was very rear-view-mirror-centric, meaning that it was looking at data and things in the past. Where we're at today, with this need for speed and the necessity to be always on and always available, the expectation is sub-millisecond latency to understand what's going on, from a security, operational, or user-experience point of view.

I'd say that we're on V2, or the next generation, of what was traditionally called BI, and we refer to that as continuous intelligence, because you're continuously adapting and learning. It's not based only on what humans know -- the rules and correlations they presuppose and the alarms and filters they create around them. It’s what machines and machine intelligence supplement that with to provide a best-in-class capability, which is what we refer to as continuous intelligence.

Gardner: We’re almost out of time, but I wanted to look to the future a little bit. Obviously, there's a lot of investing going on now around big data and analytics as it pertains to many different elements of many different businesses, depending on their verticals. Then, we're talking about the benefits of continuous intelligence as it applies to applications and their lifecycle.

Where do we start to see crossover between those? How do I leverage what I’m doing in big data generally in my organization and more specifically, what I can do with continuous intelligence from my systems, from my applications?

Business Insights

Sayar: We touched a little bit on that in terms of the types of data that we integrate and ingest. At the end of the day, when we talk about full-stack visibility, it's everything from business insights, to operational insights, to security insights.

We have some customers that are in credit-card payment processing, and they actually use us to understand activations for credit cards, so they're extracting value from the data coming into Sumo Logic to understand and predict business impact and relevant revenue associated with these services that they're managing; in this case, a set of apps that run on a CDN.
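
By way of illustration, pulling a business metric such as activations straight out of application logs can be as simple as the following sketch; the log format and field names here are invented for the example, not those of any actual customer:

```python
import re
from collections import Counter

# Hypothetical example of extracting a business metric (card activations per
# region) directly from application logs. Log format and fields are invented.
ACTIVATION = re.compile(r"event=card_activated region=([\w-]+)")

def activations_by_region(log_lines):
    counts = Counter()
    for line in log_lines:
        match = ACTIVATION.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "2017-02-22T10:01:02Z event=card_activated region=us-east card=****1234",
    "2017-02-22T10:01:09Z event=card_activated region=eu-west card=****9921",
    "2017-02-22T10:02:47Z event=card_declined  region=us-east card=****4431",
]
print(activations_by_region(sample))   # Counter({'us-east': 1, 'eu-west': 1})
```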

At the same time, the fraud and risk team is using us for threat prevention, and the operations team is using us to identify issues proactively and address any application or infrastructure problems. That’s what we refer to as full stack.

Full stack isn’t just the technology. It's providing business visibility and insights to line-of-business users, or to users looking at metrics around user experience and service quality; operational-level insights that help you become more proactive or, in some cases, react to wartime issues, as we've talked about; and lastly, for the security team, a different security posture around reactive and proactive threat detection and risk.

In a nutshell, where we see these things starting to converge is what we refer to as full stack visibility around our strategy for continuous intelligence, and that is technology to business to users.
Try Sumo Logic for Free to Get Critical Data and Insights into Apps and Infrastructure Operations.
Gardner: I’m afraid we will have to leave it here. You've been listening to a sponsored BriefingsDirect discussion on how modern applications are different and what's needed to make them more robust, agile, and responsive. We've heard how new levels of insight and intelligence into what really goes on underneath the covers of modern apps, across their lifecycle, can ensure that those apps are built, deployed, and operated properly.

So, please join me in thanking our guest, Ramin Sayar, President and CEO of Sumo Logic. Thank you so much.

Sayar: Thank you very much.

Gardner: I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor today, Sumo Logic, and a big thank you as well to our audience. Please come back for our next edition.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Sumo Logic.

Transcript of a discussion on how modern applications are different, and what data and insight are needed to make them more robust, agile and responsive. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.


Tuesday, January 07, 2014

Learn How HP Implemented the TippingPoint Intrusion Prevention System Across its Security Infrastructure

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving the security and availability of services to deliver better experiences and payoffs for businesses and end users alike.

We have a fascinating show today. We’re going to be exploring the ins and outs of improving enterprise intrusion prevention systems (IPS), and we will see how HP and its global cyber security partners have made the HP Global Network more resilient and safe. We’ll hear how a vision for security has been effectively translated into actual implementation.

To learn more about how HP itself has created role-based and granular access control benefits amid real-time yet intelligent intrusion protection, please join me in welcoming our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Welcome to the show, Jim.

Jim O’Shea: Hello, Dana. Thank you.

Gardner: Before we get into the nitty-gritty, what do you think are some of the major trends that are driving the need for better intrusion prevention systems nowadays?

O’Shea: If you look at the past, it was about detection, and you had reaction technologies. We had firewalls that blocked and looked at the port level. Then, we evolved to trying to detect things that were malicious in intent by using intrusion detection systems (IDS). But that was a reactionary-type thing. It was a nice approach, but we were reacting. Something happened, you reacted -- but if you knew it was bad, why did we let it in in the first place?

The evolution was the IPS, the prevention. If you know it's bad, why do you even want to see it? Why do you want to try to react to it? Just block it. That’s the trend that we’ve been following.

Gardner: But we can’t just have a black-and-white situation. It’s much more gray. There are sorts of traffic, I suppose, that we do want to let in. We want access control, rather than just a firewall. So is there a new thinking, a new vision, that’s been developed over the past several years about these networks and what should or shouldn't be allowed through them?

O’Shea: You’re talking about letting the good in. Those are the evolutions and the trends that we are all trying to strive for. Get the good traffic in. Get who you are in. Maybe look at what you have. You can explore the health of your device. Those are all trends that we’re all striving for now.

Gardner: I recall, Jim, that there was a Ponemon Institute report about a year or so ago that really outlined some of the issues here. Do you recall that? Were there any issues in there that illustrate this trend toward a different type of network and a different approach to protection?

Number of attacks

O’Shea: The Ponemon study was illustrating the vast number of attacks and the rising cost of intrusions. It was highlighting those types of trends, all of which we’re trying to head off. Those types of reports are guiding factors in taking a more proactive, automated type of response. [Learn more about intrusion prevention systems.]

Gardner: I suppose what’s also different nowadays is that we’re not only concerned with outside issues in terms of risk, but also insider attacks. It’s about being able to detect behaviors and occurrences that the data reveals. The analysis can then provide a heads-up across the network, regardless of whether the actors have legitimate access or not. What are the risk issues now when we think about insider attacks, rather than just outside penetration?

O’Shea: You’re exactly right. Are you hiring the right people? That’s a big issue. Are they being influenced? Those are all huge issues. Big data can handle some of that and pull that in. Our approach on intrusion prevention wasn’t just to look at what’s coming from the outside, but also to look at data traversing the network.

When we deployed the TippingPoint solution, we didn’t change our policies or profiles that we were deploying based on whether it’s starting on the inside or starting on the outside. It was an equal deployment.

An insider attack could also be somebody who walks into a facility, gains physical access, and connects to your network. You have a whole rogue wireless-type approach in which people can gain access and can probe and poke around. And if it’s malware traffic, from our perspective, with the IDS we took the approach that inside or outside doesn’t matter. If we can detect it, and if we can be in the path, it’s a block.

Gardner: For those of our listeners who might not be familiar with the term “intrusion prevention systems,” maybe you could illustrate and flesh that out a bit. What do we mean by IPS? What are we talking about? Are these technologies? Are these processes, methodologies, or all of the above?

O’Shea: TippingPoint is an appliance-based technology. It’s an inline device: we deploy it inline, it sits in the network, and the traffic flows through it. It looks at the characteristics and the reputation of the traffic. Reputation is a more real-time change in the system -- this network, IP address, or URL is known for malware, and so on. That’s a dynamic update. The static updates are signature-based: the detection of a vulnerability or a specific exploit aimed at an operating system.

So intrusion prevention means detecting that traffic and blocking it, preventing it from completing its communication to the end node.
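
A highly simplified sketch of those two kinds of checks -- a dynamic reputation list and static signatures -- might look like this. A real inline IPS does this at wire speed in purpose-built hardware, and the addresses and byte patterns below are only examples:

```python
import ipaddress

# Simplified illustration of the two checks described above: a dynamic
# reputation list (networks known for malware) and static signatures
# (patterns that indicate a known exploit). Example data only.
REPUTATION_BLOCKLIST = {ipaddress.ip_network("203.0.113.0/24")}     # example range
SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]                   # toy byte patterns

def should_block(src_ip, payload):
    ip = ipaddress.ip_address(src_ip)
    if any(ip in net for net in REPUTATION_BLOCKLIST):
        return True                                       # reputation: updated dynamically
    return any(sig in payload for sig in SIGNATURES)      # signature: static until updated

print(should_block("203.0.113.7", b"GET / HTTP/1.1"))     # True  (bad reputation)
print(should_block("198.51.100.9", b"hello"))             # False (clean)
```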

Gardner: And these work in conjunction with other approaches, such as security information, event management, and network-based anomaly detection. Is that correct? How do they work together?

Bigger picture

O’Shea: All the events get logged into HP ArcSight to create the bigger picture. Are you seeing these types of events occurring in other places? So you have the bigger-picture correlation.

Network-based anomaly detection is the ability to detect something that is occurring in the network based on an IP address or on a flow. Taking advantage of reputation, we can insert the IP addresses, detected based on flow, that are doing something anomalous.

It could be that they’re beaconing out, spreading a worm. If they look like they’re causing concerns with a high degree of accuracy, then we can put that into the reputation and take advantage of moving blocks.

So reputation is a self-deploying feature. You insert an IP address into it and it can self-update. We haven’t taken the automated step yet, although that’s in the plan. Today, it’s a manual process for us, but ideally, through application programming interfaces (APIs), we can automate all that. It works in a lab, but we haven’t deployed it in our production network that way.
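
A sketch of what that automation could look like is below. The endpoint, authentication, and payload are placeholders -- this is not the actual TippingPoint management API -- and, as O'Shea notes, in practice you would only automate blocks made with very high confidence:

```python
import json
import urllib.request

# Hedged sketch of automating reputation updates: when anomaly detection
# flags an address with high confidence, push it to the IPS reputation feed
# over an API. The URL, token, and payload below are placeholders, not the
# real TippingPoint SMS API.
REPUTATION_API = "https://ips-mgmt.example.com/api/reputation"   # placeholder
API_TOKEN = "REDACTED"

def add_to_reputation(ip, reason, confidence):
    if confidence < 0.95:
        return   # only automate blocks we are highly confident about
    payload = json.dumps({"address": ip, "tag": reason}).encode("utf-8")
    req = urllib.request.Request(
        REPUTATION_API,
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example (requires a real endpoint and token):
# add_to_reputation("198.51.100.23", reason="beaconing", confidence=0.97)
```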

Gardner: Clearly HP is a good example of a large enterprise, one of the largest in the world, with global presence, with a lot of technology, a lot of intellectual property, and therefore a lot to protect. Let’s look at how you actually approached protecting the HP network.

What’s the vision, if you will, for HP's Global Cyber Security, when it comes to these newer approaches? Do you have an overarching vision that then you can implement? How do we begin to think about chunking out the problem in order to then solve it effectively?

O’Shea: You want to be able to detect, block, and prevent as an overarching strategy. We also wanted to take advantage of inserting a giant filter inline on all data going into the data center -- to prevent mal traffic, malformed traffic, malware, any traffic with "mal" intent, from reaching the data center.

So why make blocking an application decision and rely on host-level defenses, when we have the opportunity to do it at the network level? It made the network more hygienically clean, blocking traffic that you don’t want to see.

We wrapped it around the data center, so all traffic going into our data centers goes through that type of filter. [Learn more about intrusion prevention systems.]

Gardner: You’ve mentioned a few HP products: TippingPoint and ArcSight, for example, but this is a larger ecosystem approach and play. Tell us a little bit about partnerships, other technologies, and even the partnerships for implementation, not just the technology, but the process and methodologies as well.

Key to deployment

O’Shea: That was key to our deployment, because it is an inline technology and you are going inline in the network. You’re changing flows -- it could be mal traffic, but it could also be a researcher trying to do something. So we needed that level of partnership with the network team. They have to see it. They have to understand what it is. It has to be manageable.

When we deployed it, we looked at what could go wrong and we designed around that. What could go wrong? A device could fail. So we have an N+1 type installation: if a single device fails, we’re not down and we're not blocking traffic. We have the ability to handle the capacity of our network, which grows, and we are growing, so it has to be built for the now and the future. It has to be manageable.

It has to be able to be understood by “first responders,” the people that get called first. Everybody blames the network first, and then it's the application afterward. So the network team gets pulled in on many calls, at all types of hours, and they have to be able to get that view.

So it was key to get them broad-based training, so that the technology understanding was there, and to integrate a process for how you’re going to handle updates and what you’re going to add beyond what TippingPoint recommends. TippingPoint makes recommendations on profiles and new settings. If we take those, do we want to add other things? So we have to have a global cyber-security view and global cyber-security input, and have that all vetted.

The application team had to be onboard and aware, so that everybody understands. Finally, because we were going into a very large installed network that was handling a lot of different types of traffic, we brought in TippingPoint Professional Services and had everything looked at, re-looked at, and signed off on, so that what we’re doing is a best practice. We looked at it from multiple angles and took a lot of things into consideration.

Gardner: Now, we have different groups of people that need to work in concert to a larger degree than in the past. We have application folks, network folks, outside service providers, and network providers. It seems that we are asking for a complete view of security, which means people need to be coordinated and cooperative in ways that they hadn’t had to be before.

Is there something about TippingPoint and ArcSight that provides data, views, and analytics in such a way that it's easier for these groups to work together in ways that they hadn’t before? We know that they have to work together, but is there something about the technology that helps them work together, or gives them common views or inputs that grease the skids to collaboration?

O’Shea: One of the nice things about the way the TippingPoint events occur is that you have a choice. You can send them from the individual IDS units themselves, or you can proxy them from the management console. Again, the ability to manage was critical to us, so we chose to do it from the console.

We proxy the events. That gives us the ability to have multiple ArcSight instances and also to evolve. ArcSight evolves. When they’re changing, evolving, and growing, and they want to bring up a new collector, we’re able to send very rapidly to the new collector.

ArcSight pulls in firewall logs. You can get proxy events and events from antivirus. You can pull in that whole view and get a bigger picture at the ArcSight console. The TippingPoint view is of what’s happening from the inline TippingPoint and what's traversing it. Then, the ArcSight view adds a lot of depth to that.

Very flexible

So it gives a very broad picture, but from the TippingPoint view, we’re very flexible and able to add and stay in step with ArcSight growth quickly. It's kind of a concert. That includes sending events on different ports. You’re not restricted to one port. If you want to create a secure port or a unique port for your events to go on to ArcSight, you have that ability.

Gardner: We’ve heard, of course, how important real-time reaction is, and even gaining insights to be able to anticipate and be proactive. What did you learn through this process that allowed you to reduce or eliminate that latency, so that the window in which things can go on is cut? I’ve heard that a lot of times you can't prevent intrusion, but you can prevent the damage of intrusion. So how does it work in terms of this low-latency time element?

O’Shea: With TippingPoint, you get to see when an exploit is triggered. TippingPoint has a concept of Zero Days and a concept of Reputation. Reputation is an ongoing change, while Zero Day comes with the deployment of a profile. Think of Reputation as a constant updating of signatures as sites change and as the industry recognizes them. That gives you a view of a site that people have frequented and that may now be compromised; you can see that because the Reputation of the site changed.

With TippingPoint being a blocking technology, you have the low latency: the mal traffic is detected and blocked inline. But when you pull the events back into ArcSight, you have the ability to see a holistic view -- we're seeing these events, or something that looks similar; the network-based anomaly detection is reporting strange things happening; or some antivirus tools are reporting.

That’s a different type of reaction. You can react and deploy and say that you want to take action against whatever it is you are seeing. Maybe you need to put up a new firewall block to alleviate something.

Or on the other hand, if TippingPoint is not seeing it, maybe you have the opportunity to activate a new signature more rapidly and deploy a new profile. This is something new, and you can take action right away.

Gardner: Jim, let's talk a bit about what you get when you do this correctly. So using HP’s example, what were some of the paybacks, both in technical terms, maybe metrics of success technically, but then also business results? What happens when you can deploy these systems, develop those partnerships, and get cooperation? How can we measure what we have done here?

O’Shea: One of the things that we did wrong in our deployment is that we didn’t have a baseline of what is mal or what is bad. So, as it was a moving deployment, we don’t have hard-and-fast metrics for a before-and-after view. But again, you don’t know what's bad until you start trying to detect it. It might not even have been possible for us to take that type of view.

We deployed TippingPoint. After the deployment, we’ve had some denial-of-service (DoS) attacks against us, and they have been blocked and deflected. We’ve had some other events that we’ve been able to block and defend against rapidly. [Learn more about intrusion prevention systems.]

If you think back historically to how we dealt with them, those were kind of Whack-a-Mole-type defenses. Something happened, and you reacted. So I guess the metric would be that we’re not as reactionary -- but do we have hard metrics to prove that? I don’t have those.

How much volume?

Gardner: We can appreciate the scale of what the systems are capable of. Do we have a number of events detected or that sort of thing, blocks per month, any sense of how much volume we can handle?

O’Shea: We took a month’s sample. I’m trying to recall the exact number, but it was 100 million events in one month that were detected as mal events. That’s including Internet-facing events. That’s why the volume is high, but it was 100 million events that were automatically blocked and that were flagged as mal events.

Gardner: How do you now take this out to the market? Is there a cyber-security platform? Do you have a services component? You’ve done this internally, but how do you take this out to the market, combining the products, the services, and the methodologies?

O’Shea: I’m not on the product marketing side, but TippingPoint has learned from us and we’ve partnered with them. We’re constantly sharing back with them. So the give-back to TippingPoint, as a product division, is that they can see real traffic, in a real high-volume network, and they can pretest their signatures.

There are active lighthouse-type installs, lighthouse meaning that they’re not actively blocking. They’re just observing, and they are testing their next iteration of software and the next group of profiles. They’re able to do that for themselves, and it's a give back that has worked. What we receive is a better product, and what everybody else receives is a better product.

The Professional Services teams have been able to deploy in a very large network and have worked with the requirements that a large enterprise has. That includes standard deployment, how things are connected and what the drawings are going to look like, as well as how are you going to cable it up.

A large enterprise has different standards than a small business would have, and that was a give back to the Professional Services to be able to deploy it in a large enterprise. It has been a good relationship, and there is always opportunity for improvement, but it certainly has helped.

Current trends

Gardner: Jim, looking to the future a little bit, we know that there’s going to be more and more cloud and hybrid-cloud types of activities. We’re certainly seeing already a huge uptick in mobile device and tablet use on corporate networks. This is also part of the bring-your-own-device (BYOD) trend that we’re seeing.

So should we expect a higher degree of risk and more variables and complication, and what does that portend for the use of these types of technologies going forward? How much gain do you get by getting on the IDS bandwagon sooner rather than later?

O’Shea: BYOD is a new twist on things and it means something different to everybody, because it's an acronym term, but let's take the view of you bringing in a product you buy.

Somebody is always going to get a new device, bring it in, try it out, and connect it to the corporate network, if they can. And because they are coming from a different environment and are not necessarily up to corporate standards, they may bring unwanted guests into the network, in terms of malware.

Now, we have the opportunity, because we are inline, to detect and block that right away. Because we are an integrated ecosystem, they will show up as anomalous events. ArcSight and our Cyber Defense Center will be able to see those events. So you get a bigger picture.

Those events can then be translated into removing that node from the network; we have the opportunity to do that. BYOD not only brings your own device, it also brings things you don’t know are going to happen, and the only way to block that is prevention and anomaly-type detection, and then trying to bring it all together in a bigger picture.

Gardner: Well, great. I’m afraid we will have to leave it there. We’ve been learning about the modern ins and outs of improving enterprise intrusion prevention systems, and we’ve heard about how HP itself has created more of a granular access control benefit amid real-time, yet intelligent, intrusion detection and protection.

I’d like to thank the supporter for this series, HP Software, and remind our audience to carry on the dialogue through the Discover Group on LinkedIn. And of course, a big thank you to our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Thanks so much, Jim.

O’Shea: Thank you.

Gardner: And lastly, our appreciation goes out to our global audience for joining us once again for this HP Discover Podcast discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored business success stories. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Learn more about intrusion prevention systems.

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Tuesday, June 05, 2012

Corporate Data, Supply Chains Remain Vulnerable to Cyber Crime Attacks, Says Open Group Conference Speaker

Transcript of a BriefingsDirect podcast in which cyber security expert Joel Brenner explains the risk to businesses from international electronic espionage.

Register for The Open Group Conference, July 16-18, in Washington, D.C.


Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with the Open Group Conference this July in Washington, D.C. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout these discussions.

The conference will focus on how security impacts the enterprise architecture, enterprise transformation, and global supply chain activities in organizations, both large and small. Today, we're here on the security front with one of the main speakers at the July 16 conference, Joel Brenner, the author of "America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare."

Joel is a former Senior Counsel at the National Security Agency (NSA), where he advised on legal and policy issues relating to network security. Mr. Brenner currently practices law in Washington at Cooley LLP, specializing in cyber security. Registration remains open for The Open Group Conference in Washington, DC beginning July 16.

Previously, he served as the National Counterintelligence Executive in the Office of the Director of National Intelligence, and as the NSA’s Inspector General. He is a graduate of University of Wisconsin–Madison, the London School of Economics, and Harvard Law School. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Joel, welcome to BriefingsDirect.

Joel Brenner: Thanks. I'm glad to be here.

Gardner: Your book came out last September and it affirmed this notion that the United States, or at least open Western cultures and societies, are particularly vulnerable to being infiltrated, if you will, by cybercrime, espionage, and dirty corporate tricks.

My question is, why wouldn't these same countries, being highly technical, also be very adept on offense? Why are we particularly vulnerable, when we should be most adept at using cyber activities to our advantage?

Brenner: Let’s make a distinction here between the political-military espionage that's gone on since pre-biblical times and the economic espionage that’s going on now and, in many cases, has nothing at all to do with military, defense, or political issues.

The other stuff has been going on forever, but what we've seen in the last 15 or so years is a relentless espionage attack on private companies for reasons having nothing to do with political-military affairs or defense.

So the countries that are adept at cyber, but whose economies are relatively undeveloped compared to ours, are at a big advantage, because they're not very lucrative targets for this kind of thing, and we are. Russia, for example, is paradoxical. While it has one of the most educated populations in the world and is deeply cultured, it has never been able to produce a commercially viable computer chip.

Not entrepreneurial


We’re not going to Russia to steal advanced technology. We’re not going to China to steal advanced technology. They're good at engineering and they’re good at production, but so far, they have not been good at making themselves into an entrepreneurial culture.

That’s just one very cynical reason why we don't do economic espionage against the people who are mainly attacking us, which are China, Russia, and Iran. I say attack in the espionage sense.

The other reason is that you're stealing intellectual property when you’re doing economic espionage. It’s a bedrock proposition of American economics and political strategy around the world to defend the legal regime that protects intellectual property. So we don’t do that kind of espionage. Political-military stuff we're real good at.

Gardner: This raises the question for me. If we're hyper-capitalist, where we have aggressive business practices and we have these very valuable assets to protect, isn't there the opportunity to take the technology and thwart the advances from these other places? Wouldn’t our defense rise to the occasion? Why hasn't it?

Brenner: The answer has a lot to do with the nature of the Internet and its history. The Internet, as some of your listeners will know, was developed starting in the late '60s by the predecessor of the Defense Advanced Research Projects Agency (DARPA), a brilliant operation which produced a lot of cool science over the years.

It was developed for a very limited purpose, to allow the collaboration of geographically dispersed scientists who worked under contract in various universities with the Defense Department's own scientists. It was bringing dispersed brainpower to bear.

It was a brilliant idea, and the people who invented this, if you talk to them today, lament the fact that they didn't build a security layer into it. They thought about it. But it wasn't going to be used for anything else but this limited purpose in a trusted environment, so why go to the expense and aggravation of building a lot of security into it?

Until 1992, it was against the law to use the Internet for commercial purposes. Dana, this is just amazing to realize. That’s 20 years ago, a twinkling of an eye in the history of a country’s commerce. That means that 20 years ago, nobody was doing anything commercial on the Internet. Ten years ago, what were you doing on the Internet, Dana? Buying a book for the first time or something like that? That’s what I was doing, and a newspaper.

In the intervening decade, we’ve turned this sort of Swiss-cheese, cool network, which has brought us dramatic productivity and pleasure, into the backbone of virtually everything we do.

International finance, personal finance, command and control of military, manufacturing controls, the controls in our critical infrastructure, all of our communications, virtually all of our activities are either on the Internet or exposed to the Internet. And it’s the same Internet that was Swiss cheese 20 years ago and it's Swiss cheese now. It’s easy to spoof identities on it.

So this gives a natural and profound advantage to attack on this network over defense. That’s why we’re in the predicament we're in.

Both directions


Gardner: So the Swiss cheese would work in both directions. U.S. corporations, if they were interested, could use the same techniques and approaches to go into companies in China or Russia or Iran, as you pointed out, but they don't have assets that we’re interested in. So we’re uniquely vulnerable in that regard.

Let’s also look at this notion of supply chain, because corporations aren’t just islands unto themselves. A business is really a compendium of other businesses, products, services, best practices, methodologies, and intellectual property that come together to create a value add of some kind. It's not just attacking the end point, where that value is extended into the market. It’s perhaps attacking anywhere along that value chain.

What are the implications for this notion of the ecosystem vulnerability versus the enterprise vulnerability?

Brenner: Well, the supply chain problem really is rather daunting for many businesses, because supply chains are global now, and it means that finished products are made up of a tremendous number of elements. For example, this software -- where was it written? Maybe it was written in Russia -- or maybe somewhere in Ohio or in Nevada, but by whom? We don’t know.

There are two fundamental different issues for supply chain, depending on the company. One is counterfeiting. That’s a bad problem. Somebody is trying to substitute shoddy goods under your name or the name of somebody that you thought you could trust. That degrades performance and presents real serious liability problems as a result.

The other problem is the intentional hooking, or compromising, of software or chips to do things that they're not meant to do, such as allow backdoors and so on in systems, so that they can be attacked later. That’s a big problem for military and for the intelligence services all around the world.

The reason we have the problem is that nobody knows how to vet a computer chip or software to see that it won't do these squirrelly things. We can test that stuff to make sure it will do what it's supposed to do, but nobody knows how to test the computer chip or two million lines of software reliably to be sure that it won’t also do certain things we don't want it to do.

You can put it in a sandbox or a virtual environment and you can test it for a lot of things, but you can't test it for everything. It’s just impossible. That, in both hardware and software, is the strategic supply chain problem now. That's why we have it.

Gardner: So as organizations ramp up their security, as they look towards making their own networks more impervious to attack, their data isolated, their applications isolated, they still have to worry about all of the other components and services that come into play, particularly software. [Registration remains open for The Open Group Conference in Washington, DC beginning July 16.]

Brenner: If you have a worldwide supply chain, you have to have a worldwide supply chain management system. This is hard, and it means getting very specific. It includes not only managing the production process, but also the shipment process. A lot of squirrelly things happen on loading docks, and you have to have a way not to bring perfect security to that -- that's impossible -- but to make it much harder to attack your supply chain.

Notion of cost

Gardner: Well, Joel, it sounds like we also need to reevaluate the notion of cost. So many organizations today, given the economy and the lagging growth, have looked to lowest cost procedures, processes, suppliers, materials, and aren't factoring in the risk and the associated cost around these security issues. Do people need to reevaluate cost in the supply chain by factoring in what the true risks are that we’re discussing?

Brenner: Yes, but of course, when the CEO and the CFO get together and start to figure this stuff out, they look at the return on investment (ROI) of additional security. It's very hard to be quantitatively persuasive about that. That's one reason why you may see some kinds of production coming back into the United States. How one evaluates that risk depends on the business you're in and how much risk you can tolerate.

This is a problem not just for really sensitive hardware and software, special kinds of operations, or sensitive activities, but also for garden-variety things. If you’re making titanium screws for orthopedic operations, for example, and you’re making them in -- I don’t want to name any country, but let’s just say a country across the Pacific Ocean with a whole lot of people in it -- you could have significant counterfeit problems there.

Explaining to somebody that the screw you just put through his spine is really not what it’s supposed to be and you have to have another operation to take it out and put in another one is not a risk a lot of people want to run.

So even in things like that, which don't involve electronics, you have significant supply-chain management issues. It’s worldwide. I don’t want to suggest this is a problem just with China. That would be unfair.

Gardner: Right. We’ve seen other aspects of commerce in which we can't lock down the process. We can’t know all the information, but what we can do is offer deterrence, perhaps in the form of legal recourse if something goes wrong -- if, in fact, decisions were made that violated contracts or ran against certain laws or trade practices. Is it practical to look at some of these issues under a business lens and ask, "If we do that, will it deter people from doing it again?"

Brenner: For a couple of years now, I’ve struggled with the question why it is that liability hasn’t played a bigger role in bringing more cyber security to our environment, and there are a number of reasons.

We've created liability for the loss of personal information, so you can quantify that risk. You have a statute that says there's a minimum damage of $500 or $1,000 per person whose identifiable information you lose. You add up the number of files in the breach and how much the lawyers and the forensic guys cost and you come up with a calculation of what these things cost.
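
[Editor's note: The kind of calculation Brenner describes can be sketched roughly as below. Every figure -- the record count, the per-person statutory minimum, and the professional fees -- is a hypothetical assumption for illustration only, not a statutory value.]

```python
# Hypothetical breach-cost arithmetic along the lines Brenner describes.
# All figures are illustrative assumptions, not statutory values.

records_lost = 250_000        # people whose identifiable information was lost
per_person_minimum = 500      # assumed statutory minimum damage ($500-$1,000 in the discussion)
legal_fees = 1_200_000        # assumed outside-counsel cost
forensics_fees = 800_000      # assumed incident-response and forensics cost

estimated_exposure = records_lost * per_person_minimum + legal_fees + forensics_fees
print(f"Estimated exposure: ${estimated_exposure:,}")  # Estimated exposure: $127,000,000
```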

But when it comes to just business risk, not legal risk -- the loss of intellectual property to a company that depends on that intellectual property -- you have a business risk. You don’t have much of a legal risk at this point.

You may have a shareholder suit issue, but there hasn’t been an awful lot of that kind of litigation so far. So I don't know. I'm not sure that’s quite the question you were asking me, Dana.

Gardner: My follow on to that was going to be where would you go to sue across borders anyway? Is there an über-regulatory or legal structure across borders to target things like supply chain, counterfeit, cyber espionage, or mistreatment of business practice?

Depends on the borders


Brenner: It depends on the borders you're talking about. The Europeans have a highly developed legal and liability system. You can bring actions in European courts. So it depends what borders you mean.

If you’re talking about the border of Russia, you have very different legal issues. China has different legal issues, different from Russia, as well from Iran. There are an increasing number of cases where actions are being brought in China successfully for breaches of intellectual property rights. But you wouldn't say that was the case in Nigeria. You wouldn't say that was the case in a number of other countries where we’ve had a lot of cybercrime originating from.

So there's no one solution here. You have to think in terms of all kinds of layered defenses. There are legal actions you can take sometimes, but the fundamental problem we’re dealing with is this inherently porous Swiss-cheesy system. In the long run, we're going to have to begin thinking about the gradual reengineering of the way the Internet works, or else this basic dynamic, in which lawbreakers have advantage over law-abiding people, is not going to go away.

Think about what’s happened in cyber defenses over the last 10 years and how little they've evolved -- even 20 years for that matter. They almost all require us to know the attack mode or the sequence of code in order to catch it. And we get better at that, but that’s a leapfrog business. That’s fundamentally the way we do it.

Whether we do it at the perimeter, inside, or even outside before the attack gets to the perimeter, that’s what we’re looking for -- stuff we've already seen. That’s a very poor strategy for doing security, but that's where we are. It hasn’t changed much in quite a long time and it's probably not going to.
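
[Editor's note: A minimal, hypothetical sketch of the signature-based approach Brenner describes appears below. The signatures and payloads are invented for illustration; the point is simply that anything not already catalogued passes undetected.]

```python
# Minimal sketch of signature-based detection: the defense only flags
# payloads containing byte patterns it has already catalogued.
KNOWN_SIGNATURES = [
    b"\xde\xad\xbe\xef\x90\x90",        # hypothetical byte pattern from a past exploit
    b"cmd.exe /c powershell -enc",       # hypothetical command-line fragment seen before
]

def flag_payload(payload: bytes) -> bool:
    """Return True only if the payload matches a previously seen signature."""
    return any(signature in payload for signature in KNOWN_SIGNATURES)

print(flag_payload(b"...cmd.exe /c powershell -enc SQBFAFgA..."))  # True: known pattern
print(flag_payload(b"entirely novel exploit sequence"))            # False: sails through
```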

Gardner: Why is that the case? Is this not a perfect opportunity for a business-government partnership to come together and re-architect the Internet at least for certain types of business activities, permit a two-tier approach, and add different levels of security into that? Why hasn’t it gone anywhere?

Brenner: What I think you’re saying is different tiers or segments. We’re talking about the Balkanization of the Internet. I think that's going to happen as more companies demand a higher level of protection, but this again is a cost-benefit analysis. You’re going to see even more Balkanization of the Internet as you see countries like Russia and China, with some success, imposing more controls over what can be said and done on the Internet. That’s not going to be acceptable to us.

Gardner: So it's a notion of public and private.

Brenner: You can say public and private. That doesn’t change the nature of the problem. It won’t happen all at once. We're not going to abandon the Internet. That would be crazy. Everything depends on it, and you can’t do that. It’d be a fairy tale to think of it. But it’s going to happen gradually, and there is research going on into that sort of thing right now. It’s also a big political issue.

Gardner: Let’s take a slightly different tack on this. We’ve seen a lot with cloud computing and more businesses starting to go to third-party cloud providers for their applications, services, data storage, even integration to other business services and so forth.

More secure

If there's a limited number, or at least a finite number, of cloud providers, and they can institute the proper security and take advantage of certain networks within networks, then wouldn’t that hypothetically make a cloud approach more secure and better managed than every-man-for-himself, which is what we have now in enterprises and small to medium-sized businesses (SMBs)?

Brenner: I think the short answer is yes. The SMBs will achieve greater security by basically contracting it out to what are called cloud providers. That’s because managing the patching of vulnerabilities, encryption, and other aspects of security is beyond what most small businesses and many medium-sized businesses can do, are willing to do, or can do cost-effectively.

For big businesses in the cloud, it just depends on how good the big businesses’ own management of IT is as to whether it’s an improvement or not. But there are some problems with the cloud.

People talk about security, but there are different aspects of it. You and I have been talking just now about security meaning the ability to prevent somebody from stealing or corrupting your information. But availability is another aspect of security. By definition, putting everything in one remote place reduces robustness, because if you lose that connection, you lose everything.

Consequently, it seems to me that backup issues are really critical for people who are going to the cloud. Are you going to rely on your cloud provider to provide the backup? Are you going to rely on the cloud provider to provide all of your backup? Are you going to go to a second cloud provider? Are you going to keep some information copied in-house?

What would happen if your information is good, but you can’t get to it? That means you can’t get to anything anymore. So that's another aspect of security people need to think through.
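
[Editor's note: The backup questions Brenner raises can be sketched as a simple policy check, shown below. The destination names and retention periods are illustrative assumptions, not recommendations.]

```python
# Hypothetical sketch of the backup questions Brenner raises: a primary cloud
# provider, a second provider, and an in-house copy.
from dataclasses import dataclass

@dataclass
class BackupDestination:
    name: str
    location: str        # "cloud" or "on_premises"
    retention_days: int

policy = [
    BackupDestination("primary-cloud-provider", "cloud", 90),
    BackupDestination("secondary-cloud-provider", "cloud", 30),
    BackupDestination("in-house-archive", "on_premises", 365),
]

def survives_primary_outage(destinations) -> bool:
    """Availability check: is there at least one copy outside the primary provider?"""
    return any(d.name != "primary-cloud-provider" for d in destinations)

print(survives_primary_outage(policy))  # True
```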

Gardner: We’re almost out of time, Joel, but I wanted to get into this sense of metrics, measurement of success or failure. How do you know you’re doing the right thing? How do you know that you're protecting? How do you know that you've gone far enough to ameliorate the risk?

Brenner: This is really hard. If somebody steals your car tonight, Dana, you go out to the curb or the garage in the morning, and you know it's not there. You know it’s been stolen.

When somebody steals your algorithms, your formulas, or your secret processes, you've still got them. You don’t know they’re gone, until three or four years later, when somebody in Central China or Siberia is opening a factory and selling stuff into your market that you thought you were going to be selling -- and that’s your stuff. Then maybe you go back and realize, "Oh, that incident three or four years ago, maybe that's when that happened, maybe that’s when I lost it."

What's going out

So you don’t even know necessarily when things have been stolen. Most companies don’t do a good job. They’re so busy trying to find out what’s coming into their network, they're not looking at what's going out.

That's one reason the stuff is hard to measure. Another is that ROI is very tough. On the other hand, there are lots of things where business people have to make important judgments in the face of risks and opportunities they can't quantify, but we do it.

We’re right to want data whenever we can get it, because data generally means we can make better decisions. But we make decisions about investment in R&D all the time without knowing what the ROI is going to be, and we certainly don't know what the return on a particular R&D expenditure will be. But we make those investments, because people are convinced that if they don't, they’ll fall behind and they'll be selling yesterday’s products tomorrow.

Why is it that we have a bias toward that kind of risk, when it comes to opportunity, but not when it comes to defense? I think we need to be candid about our own biases in that regard, but I don't have a satisfactory answer to your question, and nobody else does either. This is one where we can't quantify that answer.

Gardner: It sounds as if people need to have a healthy dose of paranoia to tide them over across these areas. Is that a fair assessment?

Brenner: Well, let’s say skepticism. People need to understand, without actually being paranoid, that life is not always what it seems. There are people who are trying to steal things from us all the time, and we need to protect ourselves.

In many companies, you don't see a willingness to do that, but that varies a great deal from company to company. Things are not always what they seem. That is not how we Americans approach life. We are trusting folks, which is why this is a great country to do business in and live in. But we're having our pockets picked and it's time we understood that.

Gardner: And, as we pointed out earlier, this picking of pockets is not just on our block, but could be any of our suppliers, partners, or other players in our ecosystem. If their pockets get picked, it ends up being our problem too.

Brenner: Yeah. I described this risk at great length in my book, “America the Vulnerable,” and in my practice here at Cooley, I deal with it every day. I find myself, Dana, giving briefings to businesspeople that 5, 10, or 20 years ago, you wouldn’t have given to anybody who wasn't a diplomat or a military person going outside the country. Now this kind of cyber pilferage is an aspect of daily commercial life, I'm sorry to say.

Gardner: Very good. I'm afraid we'll have to leave it there. We’ve been talking with Joel Brenner, the author of “America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare.” And as a lead into his Open Group presentation on July 16 on cyber security, Joel and I have been exploring the current cybercrime landscape and what can be done to better understand the threat and work against it.

This special BriefingsDirect discussion comes to you in conjunction with the Open Group Conference from July 16 to 20 in Washington, D.C. We’ll hear more from Joel and others at the conference on ways that security and supply chain management can be improved. I want to thank you, Joel, for joining us. It’s been a fascinating discussion.

Brenner: Pleasure. Thanks for having me.

Gardner: I certainly look forward to your presentation in Washington. I encourage our readers and listeners to attend the conference to learn more. This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for these thought leadership interviews. Thanks again for listening, and come back next time.

Register for The Open Group Conference
July 16-18 in Washington, D.C.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect podcast in which cyber security expert Joel Brenner explains the risk to businesses from international electronic espionage. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.
