Wednesday, September 18, 2013

Synthetic APIs Approach Improves Fragmented Data Acquisition for Thomson Reuters’ Content Sharing Platform

Transcript of a BriefingsDirect podcast on how Kapow Software helps a worldwide data company manage data acquisition in a cost-effective and consistent way.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Kapow Software, a Kofax company.

Dana Gardner: Hello, and welcome to a special BriefingsDirect discussion series on how innovative companies are dodging data complexity through the use of Synthetic APIs.

Dana Gardner, Principal Analyst at Interarbor Solutions, is your host throughout this series of Kapow Software-sponsored BriefingsDirect use case discussions.

We'll see how from across many different industries and regions of the globe, inventive companies are able to get the best information delivered to those who can act on it with speed and at massive scale. The next innovator interview examines the improved data use benefits at Thomson Reuters in London.

Here to explain how improved information integration and delivery can be made into business success, we're joined by Pedro Saraiva, product manager for Content Shared Platforms and Rapid Sourcing at Thomson Reuters. Glad to have you with us.

Pedro Saraiva: Thank you very much. Pleased to meet you.

Gardner: Pedro, you first launched Thomson Reuters' content-sharing platform over four years ago, I'm told, after joining the company in 1996. And the platform now enables agile delivery of automated content-acquisition solutions across a range of content areas.

Saraiva: That's right.

Gardner: Tell me what that really means. What are you delivering and to whom?

Saraiva: It's actually very simple. We're a business that requires a lot of information, a lot of data, because our business is information -- intelligence information -- and we need to do that in a cost-efficient manner. Part of that requires us to have the best technology. When we started four years ago, one of the most obvious patterns we found was a lot of fragmentation in our content-acquisition processes: where they were based, who was doing them, and, more importantly, what processes they were or were not following.

The opportunity that we immediately saw was to consolidate it all, not just around the central capability, but into an optimal capability, with real experts around it making it work and effectively creating a platform as a service (PaaS) for our internal experts in each content area to perform their tasks just as usual, but faster, better, more reliably, and more consistently.

Fundamentally, we are a platform for web-content acquisition. And that is part of our content-shared platform because it's all part of a bigger picture, where we take content from so many sources and many different kinds of sources, and not just web.

Gardner: So, your customers are essentially other organizations within Thomson Reuters. Is that correct?

Content management

Saraiva: That's right. I don't know the exact percentage, but I would guess that about half of what we do is content management, rather than site technology, per se. And a lot of those content management tasks are highly specialized because that's the only way we're going to add value. We're going to understand the content, where it comes from, what it means, and we are going to present it and structure it in the best possible way for our customers.

So, the needs of our internal groups and internal content teams are huge, very demanding, and very specialized. But they all have certain things in common. We found many of them were using Excel macros or some other technologies to perform their activities.

We tried to capture what was common, in spite of all that diversity, to leverage the best possible value from the technology that we have, but also from our know-how, expertise, and best practices around how to source content and how to comply with the required rules. By producing consistent, high-quality data, we could claim to our customers that they could trust our content, because we know exactly what happened to it from beginning to end.

Gardner: Just for the benefit of our listeners, Thomson Reuters is a large company. Tell us how large, and tell us some numbers around the number of different units within the company that you are providing this data to.

Saraiva: We are a large organization. We have about 50,000 employees worldwide in the majority of countries. For example, our news operations have reporters on the ground throughout the world.

We have all languages represented, both internally and in terms of our customers, and the content that we provide to our customers. We're a truly diverse organization.

We have a huge number of individual groups organized around the types of customers that we serve. Are they global? Are they regional? Are they local? Are they large organizations? Are they small organizations? Are they hedge funds? Are they fund managers? Are they investment banks? Are they analysts? We have a variety of customers that we serve within each of our customer organizations around the world.

And that degree of specialty that I mentioned earlier, at some point, has to take shape. It takes shape in the vast number of different teams we have specializing in one kind of content. It may be, perhaps, just a language, French or Chinese. It may be fundamentals, versus real-time data. We have to have the expertise and the centers of excellence for each of those areas, so that we really understand the content.

Gardner: You had massive redundancy in how people would go about this task of getting information from the web. It probably was costly. When you decided that you wanted to create a platform and have a centralized approach to doing this, what were the decisions that you made around technology? What were some of the hurdles that you had to overcome?

Saraiva: We were looking for a platform that we would be able to support and manage in a cost-effective manner. We were looking for something that we could trust and rely on. We were looking for something that our users could make sense of and actually be productive with. So, that was relatively simple.

The biggest challenge, in my opinion, from the start, was the fact that it's very hard to take a big organization with an inherently fragmented set of operating units and try to change it by introducing a single, central capability. It sounds great on paper, but when you start trying to persuade your users that there's value to them in migrating their current processes, they'll be concerned that the change is not in their interest.

Demonstrating value

And there is a degree of psychology at work. We had not only to work with that reluctance that all businesses have to face, but also to influence it positively and demonstrate that the value to our end users was far in excess of the threat they perceived.

Gardner: I've heard someone refer to that as having insanely good products. That's going to change people's behavior. Is that what you've been able to accomplish?

Saraiva: Absolutely. I can think of examples that are truly amazing, in my opinion. One is about the agility that we've gained through the introduction of technology such as this one, and not just the user of that technology, but the optimal use of it. Some time ago, before RSA was used in some departments, we had important customers who had an urgent, desperate need for a piece of information that we happened not to have, for whatever reason. It happens all the time.

We tried to politely explain that it might take us a while, because it would have to go through a development team that traditionally builds C++ components. They were a small team and they were very busy. They had other priorities. Ultimately, that little request, for us, was a small part of everything we were trying to do. For that customer, it was the most important thing.

The conversation to explain why it was going to take so long, and why we were not giving them the importance that they deserved, was a difficult one to have. We wanted to be better than that. Today, you can build a robot quickly. You can do it and plug it into the architecture that we have, so that the customer can very quickly see it appearing almost in real time in their product. That's an amazing change.

Gardner: So, how did the Kapow platform come to your attention? What was the story behind your adoption of this?

Saraiva: We spent some time looking at the technologies available. We spoke with a number of other customers and other people we knew. We did our own research, including a little bit of the shotgun kind of research that you tend to do on the Internet, trying to find what's available. Very quickly, we had a short list of five technologies or so.

All of them promised to be great, but ultimately, they had to pass the acid test, which was evaluation in terms of our technical operations experts. Is this something that we are able to run? And also in terms of the capabilities we were expecting. They were quite demanding, because we had a variety of users that we needed to cater to.

But ultimately, most importantly, we needed the confidence that we could get our job done. If we are going to invest in a given technology, we want to know that it can be used to solve a given kind of problem without too much fuss, complexity, or delay, because if that doesn't happen, you have a problem. You have only partially achieved the promise, and you will forever be chasing alternatives to fill that gap.

Kapow absolutely gives us that kind of confidence. Our developers, who at first had a little bit of skepticism about the ability of a tool to be so amazing, tried it. After the first robot, typically, their reaction was "Wow." They love it, because they know they can do their job. And that's what we all want. We want to be able to do our jobs. Our customers want to use our products to do their jobs. We're all in the same kind of game. We just need to be very, very good at what we do. Kapow gave us that.

Gardner: Approximately how long have you been using Kapow? Do you have any metrics that might give an indication of what benefits are there? Maybe it's reduced number of developer hours or rapid use for creating robots that can get you the information you want. Any sense of the benefits?

Critically important

Saraiva: Perhaps, the most interesting examples are those about web sources that were critically important to us, and that until we were able to leverage Kapow, we just couldn't automate sensibly.

It was not even a matter of it taking a long time. We were not able to do it. With Kapow, it was a straightforward process. We just click, follow the process that really mirrors a complex workflow in the flow chart that we designed, and the job is done.

In terms of the rapid development of the solutions, it was at least a reduction from several months to weeks. And this is typical. You have cases where it's much faster. You have cases where it's slower, because there are complex, high-risk automation processes that we need to take some time to test. But the development process is shortened dramatically.

Gardner: We were recently at the Kapow User Summit. We've been hearing about newer versions, the Kapow platform 9.2. Is there anything in particular that you've heard here so far that has piqued your interest? Something you might be able to apply to some of these problems right away?

Saraiva: A lot of what we've been doing and focusing on over the last four years was around a pattern whereby we have data flowing into the company, being processed and transformed. We're adding our value, and it's flowing out to our customers. There is, however, another type of web sourcing and acquisition that we're now beginning to work with which is more interactive. It's more about the unpredictable, unplanned need for information on demand.

There, interestingly, we have the problem of integrating the button that triggers that fetch for data into the end-user workflows. That was something that was not possible, or not straightforward, with previous versions of Kapow. We would have to build our own interfaces, our own queues, and our own API to interface with the RoboServer.

Now, with Kapplets, it all looks very, very straightforward. We can easily see that we could have an optimized workflow solution or tool for some of our users that embeds a Kapplet, allowing a user to perform research on demand -- perhaps on a customer, perhaps on a company -- for the kind of data that we wouldn't traditionally acquire on a constant, fixed basis.
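As a rough sketch of what "a button in the end-user workflow that triggers a fetch" can amount to, consider a thin client that posts a request to a robot-execution service. Everything here is an assumption for illustration -- the endpoint layout, robot names, and parameter names are invented, not Kapow's documented REST API:

```python
import json
from urllib import request


class OnDemandRobotClient:
    """Minimal sketch of a client that triggers a web-harvesting
    robot on demand. The endpoint convention and parameter names
    are hypothetical, not a real product API."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def build_request(self, project, robot, params):
        # Hypothetical convention: POST /rest/run/<project>/<robot>
        url = f"{self.base_url}/rest/run/{project}/{robot}"
        body = json.dumps({"parameters": params}).encode("utf-8")
        return request.Request(
            url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    def run(self, project, robot, params, timeout=30):
        # Send the request and return the rows the robot extracted.
        req = self.build_request(project, robot, params)
        with request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)


# A UI button such as "research this company now" would call
# client.run(...); here we only build the request for inspection.
client = OnDemandRobotClient("https://roboserver.example.com")
req = client.build_request("research", "company-profile", {"name": "Acme"})
```

The point of the pattern is that the end-user tool never touches scraping logic; it only names a robot and passes parameters, which is what makes embedding research-on-demand in arbitrary workflows tractable.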

Gardner: Looking to the future of deployments, we heard about the possibility of a cloud version of Kapow. How would you prefer to move forward on deployments? It sounds as if the direction of bridging organizational boundaries continues for you; maybe for delivering this to mobile devices specifically, having a cloud-based set of Kapow platform services would make sense.

Saraiva: Over time, things keep changing. Although we currently run a relatively standard, small-scale infrastructure, it's always a cost, an overhead, and an extra worry that you have to configure networks.

Security

And you have to worry about security. You have to ensure that things are being monitored and that you respond to alarms and so on. In theory, if we were able to get exactly the same service that we now have internally based in the cloud, we could scale it much more transparently without much planning. That would definitely give us an advantage.

So, right now, I'm beginning to think about that precise question. For the next few years, are we going to have just hosted infrastructure at our premises, or are we going to begin leveraging the cloud properly? Because then we can focus on what we really want, which is to get value out of robots.
 
Gardner: I'm afraid we're about out of time, but quickly, now that you've been doing this for some time, do you have any advice to offer others who are grappling with similar issues -- multiple data sources, not being able to use APIs, needing a Synthetic API approach? What lessons have you learned that you might be able to share?

Saraiva: I suppose the most important message I would want to share is about confidence in technology. When I started this, I had worked for years in technology, many of those years in web technology, some complex web technology. And yet, when I started thinking about web content acquisition, I didn't really think it could be done very well.

I thought this is going to be a challenge, which is partly the reason why I was interested in it. And I've been amazed at what is possible with technologies such as Kapow. So, my message would be don't worry that technology such as Kapow will not be able to do the job for you. Don't fear that you will be better off using your own bespoke C++ based solution. Go for it, because it really works. Go for it and make the most of it, because you will need it with so much data, especially on the Internet. You have to have that.

Gardner: I’m afraid we’ll have to leave it there. We've been talking about how Thomson Reuters in London has improved information integration and delivery using Kapow technology and a Synthetic APIs approach to gain significant business benefits.

Please join me in thanking our guest, Pedro Saraiva, product manager for Content Shared Platforms and Rapid Sourcing at Thomson Reuters. Thanks for being on BriefingsDirect.

And thanks to our audience for joining this special discussion, coming to you from the recent 2013 Kapow.wow user conference in Redwood Shores, California.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Kapow Software-sponsored BriefingsDirect discussions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Kapow Software, a Kofax company.

Transcript of a BriefingsDirect podcast on how Kapow Software helps a worldwide data company manage data acquisition in a cost-effective and consistent way. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Tuesday, September 17, 2013

When Real-Time is No Longer Good Enough, the Predictive Business Emerges

Transcript of a BriefingsDirect podcast on how a predictive business strategy enables staying competitive, and not just surviving -- but thriving -- in fast-paced and dynamic markets.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Cloud.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion examining a momentous shift in business strategy. Join us as we explore the impact that big data, cloud computing, and mobility are having on how businesses must act and react in their markets.

We'll explore how the agility goal of real-time responses is no longer good enough. What’s apparent across more business ecosystems is that businesses must do even better to become so data-driven that they extend their knowledge and ability to react well into the future. In other words, we're now all entering the era of the predictive business.

To learn more about how heightened competition amid a data revolution requires businesses and IT leaders to adjust their thinking to the next, and the next, and the next move on their respective chess boards, join me with our guest for today, Tim Minahan, the Chief Marketing Officer for SAP Cloud. Welcome, Tim. [Disclosure: SAP Cloud is a sponsor of BriefingsDirect podcasts.]

Tim Minahan: Thanks for having me, Dana.

Gardner: It’s hard to believe that the pace of business agility continues to accelerate. Tim, what’s driving this time-crunch? What are some of the changes afoot that require this need for -- and also enabling the capabilities to deliver on -- this notion of predictive business? We're in some sort of a rapid cycle of cause and effect, and it’s rather complicated.

Minahan: This is certainly not your father's business environment. Big is no longer a guarantee of success. If you just look at the past 10 years, 40 percent of the Fortune 500 was replaced. So the business techniques and principles that worked 10, five, or even three years ago are no longer relevant. In fact, they may be a detriment to your business.

Just ask companies like Tower Records, Borders Bookstore, or any of the dozens more goliaths that were unable or unwilling to adapt to this new empowered customer or to adopt new business models that threatened long-held market structures and beliefs.

The world, as you just said, is changing so unbelievably fast that the only constant is change. And to survive, businesses must constantly innovate and adapt. Just think about it. The customer today is now more connected and more empowered and more demanding.

You have one billion people in social networks that are talking about your brand. In fact, I was just reading a recent study that showed Fortune 100 companies were mentioned on social channels like Facebook, Twitter, and LinkedIn a total of 10.5 million times in one month. These comments are really shaping your brand image. They're influencing your customer’s views and buying decisions, and really empowering that next competitor.

But the consumer, as you know, is also mobile. There are more than 15 billion mobile devices, which is scary. There are twice as many smartphones and tablets in use as there are people on the planet. It's changing how we share information, how we shop, and the levels of service that customers expect today.

It’s also created, as you stated, a heck of a lot of data. More data was created in the last 18 months than had been created since the dawn of mankind. That’s a frightening fact, and the amount of data on your company, on your consumer preferences, on buying trends, and on you will double again in the next 18 months.

Changing consumer

The consumer is also changing. We're seeing an emerging middle class of five billion consumers sprouting up in emerging markets around the world. Guess what? They're all unwired and connected in a mobile environment.

What's challenging for your business is that you have a whole new class of millennials entering the workforce. In fact, by next year, nearly half of the workforce will have been born after 1980 -- making me feel old. These workers just grew up with the web. They are constantly mobile.

These are workers that shun traditional business structures of command-and-control. They feel that information should be free. They want to collaborate with each other, with their peers and partners, and even competitors. And this is uncomfortable for many businesses.

For this always on, always changing world, as you said, real time just isn’t enough anymore. Knowing in real time that your manufacturing plant went down and you won’t be able to make the holiday shipping season -- it’s just knowing that far too late. Or knowing that your top customer just defected to your chief competitor in real time is knowing that far too late. Even learning that your new SVP of sales, who looks so great on paper, is an awful fit with your corporate culture or your go-to-market strategy is just knowing that far too late.

But to your point, what disrupts can also be the new advantage. So technology, cloud, social, big data, and mobile are all changing the face of business. The need is to exploit them and not to be disrupted by them.

Gardner: What’s interesting, Tim, is that the tools and best practices that you might use to better understand your external world, your market, your supply chain, and your ecosystem of partners is also being applied internally.

These issues affect how companies operate internally: organizationally, in managing their staff and employees, and in how they're working differently. But at the same time, we have this outward-facing need, too. Another dimension of complexity here is that you not only have to change the way you're operating vis-à-vis your employees, but your customers, partners, supply chain, and so on as well. How does the predictive business create a whole greater than the sum of the parts when we think about this total shift, internal and external?

Minahan: Too often, we get enamored with the technology side of the story, but the biggest change that's going to occur in business is going to be the culture change. There's the need to adapt to this new millennial workforce and this new empowered customer, and the need to reach this new emerging middle class around the world.

In today’s fast-paced business world, companies really need to be able to predict the future with confidence, assess the right response, and then have the agility organizationally and systems-wise to quickly adapt their business processes to capitalize on these market dynamics and stay ahead of the competition.

They need to be able to harness the insights of disruptive technologies of our day, technologies like social, business networks, mobility, and cloud to become this predictive business.

Not enough

I want to be clear here that the predictive business isn't just about advanced analytics. It’s not just about big data. That’s certainly a part of it, but just knowing something is going to happen, just knowing about a market opportunity or a pending risk just isn’t enough.

You have to have that capacity and insight to assess a myriad of scenarios to detect the right course of action, and then have the agility in your business processes, your organizational structures, and your systems to be able to adapt to capitalize on these changes.

Gardner: Tim, you and I have been talking for several years now about the impact of cloud. We were also trying to be predictive ourselves and to extrapolate and figure out where this is going. I think it turns out that it’s been even more impactful than we thought.

How about the notion of having a cloud partner, maybe not only your own cloud, but a hybrid or public cloud environment, where the agility comes from working with a new type of delivery mode or even a whole new type of IT model?

Minahan: We've seen the impact of cloud. You've probably heard a lot about it. You've been tracking it for years. The original discussion was all about total cost of ownership (TCO). It was all about the cost benefits of the cloud. While the cloud certainly offers a cost advantage, the real benefit the cloud brings to business is in two flavors -- innovation and agility.

You're seeing rapid innovation cycles, albeit incremental innovation updates, several times per year that are much more digestible for a company. They can see something coming, be able to request an innovation update, and have their technology partner several times a year adapt and deliver new functionality that’s immediately available to everyone.

Then there's now the agility at the business level to configure new business processes without costly IT or consulting engagements. With some of the more advanced cloud platforms, they can even create their own process extensions to meet the unique needs of their industry and their business.

Gardner: It’s also, I think, a little bit of return to the past, when we think about the culture and the process. It’s been 20 years plus since people started talking about reengineering their business processes to accommodate technology. Now the technology has accelerated to such a degree that we can start really making progress on productivity and quality vis-à-vis these new approaches.

So without going too far into the past, how is the notion of business transformation being delivered now, when we think about being predictive, using these technology tools, different delivery models, and even different cultures?

Minahan: You're already seeing examples of the predictive business in action across industries today. Leading companies are turning that combination of insight, the big data analytics, and these agile computing models and organizational structures into entirely new business models and competitive advantage.

Strategic marketing

Let’s just look at some of these examples that are before us. Take Cisco, where their strategic marketing organization not only mines historical data around what prompted people to buy, or what they have bought, and what were their profiles. They married that with real-time social media mentions to look for customers, ferret out customers, who reveal a propensity to buy and a high readiness to buy.

They then arm their sales team, push these signals out to their sales force, and recommend the right offer that would likely convert that customer to buy. That had a massive impact. They saw a sales uplift of more than $4 billion by bringing all of those activities together.
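The pattern just described -- blend historical buyer-profile fit with a real-time social-intent signal, then surface the hottest leads to sales -- can be sketched in a few lines. All field names, weights, and company names below are invented for illustration; no claim is made about Cisco's actual model:

```python
from dataclasses import dataclass


@dataclass
class Prospect:
    name: str
    profile_fit: float      # 0..1, similarity to historical buyers
    recent_mentions: int    # social mentions signaling purchase intent


def propensity_score(p, mention_cap=10, w_profile=0.6, w_social=0.4):
    """Blend historical fit with a capped social-intent signal.
    The weights are illustrative, not derived from any real model."""
    social = min(p.recent_mentions, mention_cap) / mention_cap
    return w_profile * p.profile_fit + w_social * social


def hottest_leads(prospects, top_n=2):
    # Rank all prospects and hand the top scorers to the sales team.
    return sorted(prospects, key=propensity_score, reverse=True)[:top_n]


leads = hottest_leads([
    Prospect("Acme", 0.9, 12),
    Prospect("Globex", 0.4, 1),
    Prospect("Initech", 0.7, 8),
])
```

Capping the social signal keeps one viral mention spike from drowning out the historical profile; in a real system both inputs would come from continuously updated pipelines rather than static fields.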

It’s not just in the high-tech sector. I know we talk about that a lot, but we see it in other industries like healthcare. Mount Sinai Hospital in New York examined the historical treatment approaches, survival rates, and the stay duration of the hospitals to determine the right treatments to optimize care and throughput a patient.

It constantly runs and adapts simulations to optimize its patients' first 8-12 hours in the hospital. With improved utilization based on those insights and the ability to adapt how they're handling their patients, the hospital not only improved patient health and survival rates, but also achieved the financial effect of adding hundreds of new beds without physically adding one.

In fact, if you look at it, the whole medical industry is built on predictive business models using the symptoms of millions of patients to diagnose new patients and to determine the right courses of action.

Closer to home for you, Dana, there is also an example of the predictive business. I don't know if you've read Nate Silver's phenomenal book, "The Signal and the Noise," but he talks about going beyond Moneyball, and how the Boston Red Sox were using predictive systems that have really changed how baseball drafts rookie players.

The difference between Moneyball and rookies is that rookies don’t have a record in the pros. There's no basis from which to determine what their on-base percentage will be or how they will perform. But this predictive model goes beyond standard statistics here and looks at similar attributes of other professional players to determine who are the right candidates that they should be recruiting and projecting what their performance might be based on a composite of other players that have like-attributes.

Their first example of this on the Red Sox was with Dustin Pedroia, who no one wanted to recruit. They said he was too short, too slow, and not the right candidate to play second base. But using this new model, the Red Sox modeled him against previous players and found out some of the best second basemen in the world actually have similar attributes.

So they wanted to take him early in the draft. In his first year, 2007, he took the Rookie of the Year title and helped the Red Sox win the World Series for only the second time since 1918. He went on to win the MVP the following year, and he's been a top All-Star performer ever since.
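The similarity approach described above -- project a rookie from professional players with comparable measurable attributes -- is essentially weighted nearest-neighbor estimation. Here is a toy sketch; the attribute vectors, stats, and weighting scheme are all invented for illustration and bear no relation to any real scouting model:

```python
import math


def similarity(a, b):
    # Euclidean distance over normalized attribute vectors,
    # converted to a similarity in (0, 1].
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)


def project_stat(rookie_attrs, pros, k=2):
    """Estimate a rookie's target stat as the similarity-weighted
    average over the k most comparable professional players."""
    ranked = sorted(
        pros,
        key=lambda p: similarity(rookie_attrs, p["attrs"]),
        reverse=True,
    )[:k]
    weights = [similarity(rookie_attrs, p["attrs"]) for p in ranked]
    return sum(w * p["obp"] for w, p in zip(weights, ranked)) / sum(weights)


# Attributes: (height_norm, speed_norm, contact_norm) -- illustrative only.
pros = [
    {"attrs": (0.30, 0.40, 0.90), "obp": 0.370},
    {"attrs": (0.80, 0.70, 0.50), "obp": 0.320},
    {"attrs": (0.35, 0.45, 0.85), "obp": 0.360},
]
projected = project_stat((0.30, 0.40, 0.90), pros)
```

The value of the approach is exactly what the transcript notes: a rookie has no professional record, so the model borrows the records of the pros he most resembles instead of extrapolating from nothing.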

So all around us, businesses are beginning to adapt and take advantage of these predictive business models.

Change in thinking

Gardner: It's curious that when you do take a data-driven approach, you have to give up some of the older approaches around intuition, gut instinct, or some of the metrics that used to be important. That really requires you to change your thinking and, rather than go with the highest-paid person's opinion when you need to make a decision, it's really now becoming more of a science.

So what do you get, Tim, when you do this correctly? The Red Sox got to win a World Series, and they've been able to field a very strong club much more consistently than in the past. But what do businesses get when they become more data-driven, when they adjust their culture, take advantage of some of the new tools, and recognize the shifts in consumer behavior? How impactful can this be?

Minahan: It can be tremendously impactful. We truly believe that you get a whole new world of business. You get a business model and organizational and systems infrastructure that has the ability to adapt to all the massive transformation and the rapid changes that we discussed earlier. We believe the predictive business will transform every function within the enterprise and across the value chain.

Just think of sales and marketing. Sales and marketing professionals, as we just talked about with Cisco, will now be empowered to engage customers like never before by tapping into social activity, buying activity on business networks, and geolocation insights to identify prospects, develop optimal offers, and engage and influence prospective customers right at the point of purchase.

I think of pushing offers and coupons to the mobile devices of prospective buyers based on their social fingerprint and their actual physical location. Or think of service organizations. We talk about this Internet of Things, and we haven't even scratched the surface of it, but service organizations can massively drive customer satisfaction and loyalty to new levels by predicting and proactively resolving potential product or service disruptions even before they happen.

Think about your device being able to send a signal and demonstrate a propensity to break down in the future. It may be possible to send a firmware update to fix it without your even knowing.

That’s the power that we’ve already seen with this type of thing in the supply chain. Procurement, logistics and supply chain teams are now being alerted to potential future risks in their sub-tier supply chains and being guided to alternative suppliers based on optimal resolutions and community-generated ratings and buying patterns of like buyers on a business network. We've talked about that in the past.

We really believe that the future of business is the predictive business. The predictive business is not going to be an option going forward. It's not a luxury. It will be what's required not only to win, but eventually, to survive. Your customers are demanding it, your employees are requiring it, and your livelihood is going to depend on it.

The need to adapt

Gardner: I suppose we're seeing instances where newer companies, upstarts, easily enter the market. Because they're not encumbered by past practices, using some of these newer tools can actually make a tremendous amount of headway for them.

Tesla Motors comes to mind as one reflection of that. Netflix is another that shows how disruptive a company can be in the short term. But we're not just talking about greenfield companies. We're talking about the need for entrenched global enterprises to be able to move quickly to change and adapt to some of these opportunities for transformation.

Any thoughts before we begin to close out, Tim, on how big companies that have so much complexity, so many moving parts, can start to evolve to be predictive and to be ready for some serious competition?

Minahan: Number one is that you can't have the fear of change. You need to set that aside. At the outset of this discussion, we talked about changes all around us, whether it's externally, with the new empowered consumer who is more informed and connected than ever before, or internally with a new millennial workforce that’s eager to look at new organizational structures and processes and collaborate more, not just with other employees but their peers, and even competitors, in new ways.

That's number one, and probably the hardest thing. On top of that, this isn't just a single-technology play. You need to be able to embrace a lot of the new technologies out there. When we look at the attributes of an enabling platform for the predictive business, it really comes down to a few key areas.

You need the convenience and the agility of the cloud to improve IT resources and use basically everything as a service -- apps, infrastructure, and platform. You can dial up the capabilities, processing power, or resources you need, and quickly configure and adapt your business processes at the business level, without massive IT or consulting engagements. Then, you have the agility to use some of these new-age cloud platforms to create your own differentiated business processes and applications.

The second thing is that it's critically important to gather those new insights and productivity not just from social networks but from business networks, with new, rich data sources: real-time market and customer sentiment through social listening and analytics, and the countless bits and histories of transactional and relationship data available on robust business networks.

Then, you have to manage all of this. You also need to advance your analytical capabilities. You need the power and speed of big data and in-memory analytics platforms, exploiting new architectures like Hadoop and others, to enable companies to aggregate, correlate, and assess the countless bits of information that are available today -- a volume that is doubling every 18 months.

You have to assess multiple scenarios and determine the best course of action faster than ever before. Then, ultimately, one of the major transformational shifts, which is also a big opportunity, is that you need to be able to assess and deliver all of this information with ease to mobile devices.

This is true whether it's your employees who can engage in a process and get insights where they are in the field or whether it's your customer you need to reach, either across the street or halfway around the globe. So the whole here is greater than the sum of the parts. Big data alone is not enough. Cloud alone is not enough. You need all of these enabling technologies working together and leveraging each other. The next-generation business architecture must marry all of these capabilities to really drive this predictive business.

Next generation

Gardner: So clearly at SAP Cloud, you've been giving this a lot of thought. I think you appreciate the large dimension of this, but also the daunting complexity faced in many companies. I hope in our next discussion, Tim, we can talk a little bit about some of the ideas you have on the next generation of business services platform and agility capability that gets you into that predictive mode. Maybe you could just give us a sense very quickly now of the direction and role that an organization like SAP Cloud would play?

Minahan: SAP, as you know, has a history of helping businesses continually innovate, driving this next wave of productivity and unlocking new value and advantage for the business. The company is certainly building to be the enabling platform and partner for this next wave of business. It's making the right moves, both organically and otherwise, to enable the predictive business.

If you think about the foundation we just went through and then marry it up against where SAP has invested and innovated, it's now the leading cloud provider for businesses. More business professionals are using cloud solutions from SAP than from any other vendor.

It's leapt far ahead in the world of analytics and performance with HANA, its next-generation in-memory platform. It's the leader in mobile business solutions and social business collaboration with Jam, and as we discussed right here on your show, it now owns the world's largest and most global business network with the acquisition of Ariba.

That's more than 1.2 million connected companies transacting over half a trillion dollars' worth of commerce, with a new company joining every two minutes to engage, connect, and get more informed to better collaborate. We're very, very excited about the promise of the predictive business and SAP's ability to deliver and innovate on the platform to enable it.

Gardner: Well, great. I'm afraid we'll have to leave it there, but I do expect we’ll be revisiting this topic of the predictive business for quite some time. You've been listening to a sponsored BriefingsDirect podcast discussion on this momentous shift in business strategy to the agility required to become a predictive business.

And we've heard how the business goal of real-time responses is really no longer good enough. We can begin now to plot a course to a better evidence-based insights capability that will allow companies to proactively shape their business goals and align their resources to gain huge advantages first and foremost in their industry. So with that, please join me in thanking our guest, Tim Minahan, Chief Marketing Officer for SAP Cloud. Thanks so much, Tim.

Minahan: Thanks, Dana, it's been great to be here.

Gardner: I would like to also thank our audience for joining us. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for coming, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Cloud.

Transcript of a BriefingsDirect podcast on how a predictive business strategy enables staying competitive, and not just surviving -- but thriving -- in fast-paced and dynamic markets. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, September 12, 2013

Thought Leader Interview: HP's Global CISO Brett Wahlin on the Future of Security and Risk

Transcript of a BriefingsDirect podcast on how increased and more sophisticated attacks are forcing enterprises to innovate and expand security practices to not only detect, but predict system intrusions.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Follow the HP Protect 2013 activities next week, Sept. 16-19.


Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we're focusing on how IT leaders are improving security and reducing risk as they adapt to the new harsh realities of doing business online.

I'm now joined by our co-host for this sponsored podcast series, Paul Muller, Chief Software Evangelist at HP Software. Welcome back, Paul. How are you today?

Paul Muller: Dana, very well. It's great to be back, and I'm looking forward to today’s conversation.

Gardner: Yes, we have a big discussion today. We're joined by HP’s Global Chief Information Security Officer (CISO) to learn about how some of the very largest global enterprises like HP are exploring all of their options for doing business safely and continuously. So with that, let's welcome our guest, Brett Wahlin, Vice President and Global CISO at HP. Welcome, Brett. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Brett Wahlin: Thank you, Dana.

Gardner: Brett, there's been a lot of discussion, of course, about security and a lot of discussion about big data. I'm very curious as to how these are related.

It seems to me that I've read and heard quite a bit about how big data can be used to improve security and provide insights into what's going on within systems and even some greater analysis capabilities. Is that what you're finding and hearing from other CISOs -- that there is a great tool in big data that’s related to security?

Wahlin: Yes, big data is quite an interesting development for us in the field of security. If we look back on how we used to do security, trying to determine where our enemies were coming from, what their capacities were, what their targets were, and how we're gathering intelligence to be able to determine how best to protect the company, our resources were quite limited.

Wahlin
We've found that through the use of big data, we're now able to start gathering reams of information that were never available to us in the past. We tend to look at this almost from a modern-warfare perspective.

If you're a battlefield commander looking at how to deploy defenses, how would you deploy those defenses, and what are the targets your enemies are looking for? You typically look at gathering intelligence. This intelligence comes through multiple sources, whether electronic or human signals, and you begin to process the intelligence that's gathered, looking for insights into your enemy.

Moving defenses

This could be the enemy's capabilities, motivation, resourcing, or targets. Then, through analysis of that intelligence, you can go through a process of moving your defenses, understanding where the targets may be, and adjusting your troops on the ground.

Big data has now given us the ability to collect more intelligence from more sources at a much more rapid pace. As we go through this, we're looking at understanding these types of questions that we would ask as if we were looking at direct adversaries.

We're looking at what these capabilities are, where people are attacking from, why they're attacking us, and what targets they're looking for within our company. We can gather that data much more rapidly through the use of big data and apply these types of analytics.

We begin to ask different questions of the data and, based on the type of questions we're asking, we can come up with some rather interesting information that we never could get in the past. This then takes us to a position where that advanced analytics allows us to almost predict where an enemy might hit.

That’s in the future, I believe. Security is going from the use of prevention, where I'm tackling a known bad thing, to the point where I can use big data to analyze what's happening in real time and then predict where I may be attacked, by whom, and at what targets. That gives me the ability to move the defenses around in such a way that I can protect the high-value items, based on the intelligence that I see coming in through the analytics that we get out of big data.

Muller
Muller: Brett, you talk a lot about the idea of getting in front of the problem. Can you talk a little bit about your point of view on how security, from your perspective as a practitioner, has evolved over the last 10-15 years?

Wahlin: Certainly. That’s a great question. Years ago, we used to be about trying to prevent the known bad from happening. The questions we would ask would always be around, can it happen to us, and if it does, can we respond to it? What we have to look at now is the fact that the question should change. It should be not, "Can it happen to us," but "When is it going to happen to us?" And not, "Can we respond to it," but "How can we survive it?"

If we look at that type of a mind-shift change, that takes us back to the old ways of doing security, where you try to prevent, detect, and respond. Basically, you prevented the known bad things from happening.

This went back to the days of -- pick your favorite attack from years ago. One that I remember is very telling. It was Code Red, and we weren’t prepared for it. It hit us. We knew what the signature looked like and we were able to stop it, once we identified what it was. That whole preventive mechanism, back in the day, was pretty much what people did for security.

Fast-forward several years, and you get into the new era of security threats highlighted by attacks like Operation Aurora. Suddenly, acronyms flew all over, such as APT -- advanced persistent threat -- and advanced malware. Now, we have attacks that you can't prevent, because you don't know them. You can't see them. They're zero-days. They're undiscovered malware that's in your system already.

Detect and respond

That changed the way we approached our security. Prevention becomes a hygiene function, so we move to a detect-and-respond view, where we're looking for anomalies. We're looking for the unknown. We're beefing up the ability to quickly respond to those when we find them.

The evolution, as we move forward, is to add a fourth dimension to this. We prevent, detect, respond, and predict. We use elements like big data not only to gain situational awareness, connecting the dots within our environment, but to take it one step further and predict where the next shot might land. As we evolve in this particular area, getting to the point where we can understand and predict will become a key capability that security departments must have in the future.

Gardner: A reminder to our audience: follow the HP Protect 2013 activities next week, Sept. 16-19. Now, Brett, how long have you been at HP, and where were you before that?

Wahlin: I've been at HP for approximately eight months. Prior to joining HP, I was the CSO at Sony Network Entertainment. My role there was to put the security in place after the infamous PlayStation breach. Prior to that, I was also the CSO at McAfee. I did a stint as CSO at Los Alamos Laboratory.

Years ago, I got my start doing counterintelligence for the US Army during the Cold War. So we had a lot of opportunity to drive and practice the intelligence gathering and analytics components to which I'm referring around the big-data conversation.

Gardner: I hear you talking about getting more data, being proactive, and knowing yourself as an organization in order to be better prepared for attacks. It sounds quite similar to what we've been hearing for many years from the management side of things, the operations side: know yourself to be able to better maintain performance standards and therefore quickly remediate when something goes wrong.

Are we seeing a confluence between good IT management practices and good security practices, and should we still differentiate between the two?

Wahlin: As we move into the good management of IT, the good management of knowing yourself, there's a hygiene element that appears on the correlation end of the security industry. One of the elements that we look at, of course, is how to add all this additional complexity and additional capability into security and yet still continue to drive value to the business and drive costs out. So we look for areas of efficiency, and again we'll draw many similarities.

As you understand the management of your environments and know yourself, we begin to apply known standards, used from a governance perspective. This is where you take care of your hygiene, instead of working through very elaborate risk equations. You have your typical "risk equals threat times vulnerability times impact," with all of its probabilities.

Known standards

It gets very confusing. So we're trying to cut cost out of that by saying that there are known standards out there -- let's just use them. You can use ISO 27001, NIST 800-53, or even something like PCI DSS. Pick your standard, and that then becomes the baseline of controls that you want. This is knowing yourself.

With these controls, you apply them based on risk to the company. Not all controls are applied equally, nor should they be. As you apply the controls based on risk, there is evaluation and assessment. Now, I have a known baseline that I can measure myself against.
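As a rough sketch of the approach Wahlin describes -- scoring risk with the classic "threat times vulnerability times impact" equation and then applying baseline controls based on that risk rather than uniformly -- consider the following toy example. All asset names, scales, and tier thresholds here are invented for illustration:

```python
# Hypothetical sketch of risk-based control selection. The multiplicative
# risk model is the one mentioned in the discussion; everything else
# (assets, scales, tiers) is illustrative only.

ASSETS = {
    # asset: (threat likelihood 0-1, vulnerability 0-1, business impact 1-10)
    "payment-gateway": (0.8, 0.6, 10),
    "marketing-wiki":  (0.5, 0.7, 2),
    "hr-database":     (0.6, 0.5, 9),
}

def risk_score(threat, vulnerability, impact):
    """Classic multiplicative risk model; higher means riskier."""
    return threat * vulnerability * impact

def control_tier(score):
    """Map a risk score to a (hypothetical) tier of baseline controls,
    so controls are applied based on risk rather than uniformly."""
    if score >= 4.0:
        return "full ISO 27001 control set + continuous monitoring"
    if score >= 1.5:
        return "standard baseline controls"
    return "basic hygiene only"

for asset, (t, v, i) in sorted(ASSETS.items(),
                               key=lambda kv: -risk_score(*kv[1])):
    score = risk_score(t, v, i)
    print(f"{asset}: risk={score:.2f} -> {control_tier(score)}")
```

The point the sketch makes concrete is the one in the transcript: the scored baseline, not an elaborate per-asset equation, decides how much control each asset gets.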

As you begin to build that known baseline, you understand how well you're doing from a hygiene perspective. These are all the things you should be doing that give you a chance to understand what your problem areas are.

As you begin to understand those metrics, you can see where you might have early-warning indicators that tell you that you might need to pay attention to certain types of threats, risks, or areas within the company.

There are a lot of similarities when you look at IT infrastructure, server maintenance, and understanding those metrics as early warnings or early indicators of problems. We're trying to do the same with security, making it very repeatable. We can make it standards-based and then extend it across the company, of course always basing it on risk.

Muller: There's one more element to that, Dana: the evolution of IT management through a framework like ITIL, where you very deliberately break down the barriers between silos across IT.

Similarly, I increasingly find with security that collaboration across organizations -- the whole notion of general threat intelligence -- forms one of the greatest sources of potential intelligence about an imminent threat. That can come from operational data, or a lot of operational logs, and sharing that situational awareness between operations teams is powerful.

At least that's been my experience with many of our clients as they improve security outcomes through a heightened sense of what's actually going on across the infrastructure, with customers or users.

Gardner: Paul, as you’re traveling around and talking with a lot of organizations, do you sense that they're sharing Brett’s perception that risk is sort of the über concept, and that security and performance management fall under that? Or are they still sort of catching up to that concept, or even resisting it?

Muller: There's sort of a veiled security joke. There are two types of organizations -- those that have been hacked and those that don't know they're being hacked.

One of the greatest challenges we have in moving through Brett’s evolution that he described is that many executives still have the point of view that I have a little green light on my desktop, and that tells me I don’t have any viruses today. I can assume that my organization is safe. That is about as sophisticated a view of security as some executives have.

Increased awareness

Then, of course, you have an increasing level of awareness that that is a false sense of security, particularly in the financial services industry, and increasingly in many governments, certainly national governments. Just because you haven't heard about a breach today doesn't mean that one isn't being attempted or, in fact, succeeding.

One of the great challenges we have is just raising that executive awareness that a constant level of vigilance is critical. The other place where we're slowly making progress is that it's not necessarily a bad thing to share negative experiences.

The culture 10 or 15 years ago was that you don’t talk about a breach; you bury it. Increasingly, we see companies like Heartland Payment Systems quite famously getting out there and being a big believer in sharing the patterns of breach that occurred to help others be more aware of how and when these things occur, but also increasingly sharing threat intelligence.

For example, if you're one bank and someone is attempting to break into your systems using a known pattern of attack, it's highly likely they're trying to do it with your peers. Given that your defenses between your peers and yourself might be slightly less than that between you and the outside world, it's a good idea to share that ahead of time. Getting back to Brett’s point, the heightened sense of threat intelligence is going to help you predict and respond more reliably.

Wahlin: Absolutely. We look at the inevitability of the fact that networks are penetrated, and they're penetrated on a daily basis. There's a difference between having unwanted individuals within your network and having the data actually exfiltrated and having a reportable breach.

As we understand what that looks like and how the adversaries are actually getting into our environment, that type of intelligence sharing typically will happen amongst peers. But the need for the ability to actually share and do so without repercussions is an interesting concept. Most companies won't do it, because they still have that preconceived notion that having somebody in your environment is binary -- either my green light is on, and it's not happening, or I've got the red light on, and I've got a problem.

In fact, there are multiple shades of gray in there, and the ability to share activities that, while they may not be detrimental, are indicators that you have an issue going on and need to pay attention to it is key when we actually start sharing intelligence.

I've seen these logs. I've seen this type of activity. Is that really an issue I need to pay attention to or is that just an automated probe that’s testing our defenses? If we look at our environment, the size of HP and how many systems we have across the globe, you can imagine that we see that type of activity on a second-by-second basis.

We have to understand which ones of these we need to pay attention to and have the ability to not only correlate amongst ourselves at the company, but correlate across an industry.

HP may be attacked. Other high-tech companies may also be attacked. We'll get supply-chain attacks. We look at various types of politically motivated attacks. Why are they hitting us? So again, it's back to the situational awareness. Knowing the adversary and knowing their motivations, that data can be shared. Right now, it's usually in an ad-hoc way, peer-to-peer, but definitely there's room for some formalized information sharing.
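As an entirely hypothetical illustration of what "formalized information sharing" could look like, an indicator might be exchanged as a small structured record that deliberately omits anything revealing about the reporter's environment -- the operational-security concern Wahlin raises:

```python
# Hypothetical shareable threat indicator. Field names are invented for
# illustration; real exchanges would use a standard format such as
# STIX/TAXII. Note what is deliberately absent: internal host names,
# defensive coverage, or anything else that gives away the reporter's
# secrets to an adversary.

import json

indicator = {
    "indicator_type": "ip-watch",
    "observed": "2013-09-10T14:22:00Z",
    "source_sector": "high-tech",        # who is seeing it, only coarsely
    "pattern": {
        "src_ip": "203.0.113.42",        # documentation-range address
        "behavior": "repeated authentication probes",
        "targets": "externally facing login portals",
    },
    "confidence": "medium",
    "suggested_action": "watch and correlate, do not block yet",
}

print(json.dumps(indicator, indent=2))
```

A peer receiving this can correlate it against its own logs without learning anything about the sender's defenses.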

Information sharing

Muller: Especially when you consider the level of information sharing that goes on in the cybercrime world. They run the equivalent of a Facebook almost. There is a huge amount of information sharing that goes on in that community. It's quite well structured. It's quite well organized. It hasn’t necessarily always been that well organized on the defense side of the equation. I think what you're saying is that there's opportunity for improvement.

Wahlin: Yes, and as we look at that opportunity, the counterintelligence person in me always has to stand up and say, "Let's make sure that we're sharing it and we understand our operational security, so that we're sharing that in a way that we're not giving away our secrets to our adversaries." So while there is an opportunity, we also have to be careful with how we share it.

Muller: You, of course, wind up in the situation where you could be amplifying bad information as well. If you were paranoid enough, you could assume that the adversary is actually deliberately planting some sort of distraction at one corner of the organization in order to get to everybody focused on that, while they quietly sneak in through the backdoor.

Wahlin: Correct.

Gardner: Brett, returning to this notion of actionable intelligence and the role of big data as an important tool, where do you go for the data? Is it strictly the systems, the systems log information? Is there an operational side to that that you tap more than the equipment, more than the behaviors? What are the sources of data that you want to analyze in order to be better at security?

Wahlin: The sources that we use are evolving. We have our traditional sources, and within HP, there is an internal project that is now going into alpha. It's called Project HAVEn and that’s really a combination of ArcSight, Vertica, and Autonomy, integrating with Hadoop. As we build that out and figure out what our capabilities are to put all this data into a large collection and being able to ask the questions and get actionable results out of this, we begin to then analyze our sources.

The sources are obvious when we look at it from a historical operations and security perspective. We have all the log files from the perimeter. We have application logs and network infrastructure logs, such as DNS, Active Directory, and other types of LDAP logs.

Then you begin to say, what else can we throw in here? That's pretty much covered in a traditional ArcSight type of implementation. But what happens if I start throwing in things such as badge access and in-and-out card swipes? How about phone logs? Most companies are running IP phones, and they'll have logs. So what if I throw that into the equation?

What if I go outside to social media and begin to throw things such as Twitter or Facebook feeds into this equation? What if I start pulling in public searches for government-type databases, law enforcement databases, and start adding these? What results might I get based on all that data commingling?

We're not quite sure at this point. We've added many of these sources as we start to look and ask questions and see from which areas we're able to pull the interesting correlations amongst different types of data to give us that situational awareness.

There's still much to be done here, much to be discovered, as we understand the types of questions that we should be asking. As we look at this data and the sources, we also look at how to create that actionable intelligence.

Disparate sources

The analysts we typically use in a security operations center are very used to ArcSight: ingest the logs and see the correlations. They're timeline-driven. Now, we begin to ask questions of multiple types of data sources that are very disparate in their information, and that takes a different type of analyst.

Not only do we have different types of sources, but we have to have different types of skill sets to ask the right questions of those sources. This will continue to evolve. We may or may not find value as we add sources. We don’t want to add a source just for the heck of it, but we also want to understand that we can get very creative with the data as it comes together.
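A minimal sketch of the kind of cross-source correlation described above: joining badge swipes and system logins on user and time window to surface combinations no single log would show -- for example, a local login during an hour with no matching badge entry. The record formats and the anomaly rule here are invented for illustration:

```python
# Toy cross-source correlation: group events from disparate logs by
# (user, hour bucket), then flag local logins with no badge swipe in
# the same hour. All data and the rule itself are illustrative.

from collections import defaultdict
from datetime import datetime

badge_swipes = [("alice", "2013-09-12T08:01"), ("bob", "2013-09-12T08:30")]
local_logins = [("alice", "2013-09-12T08:05"), ("carol", "2013-09-12T09:10")]

def to_hour_bucket(ts):
    """Bucket a timestamp to the hour, so events from systems with
    slightly different clocks can still be correlated loosely."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M").strftime("%Y-%m-%dT%H")

def correlate(*sources):
    """Group events from all named sources by (user, hour bucket)."""
    joined = defaultdict(set)
    for name, events in sources:
        for user, ts in events:
            joined[(user, to_hour_bucket(ts))].add(name)
    return joined

joined = correlate(("badge", badge_swipes), ("login", local_logins))

# Flag console logins in hours with no matching badge swipe for that user.
for (user, hour), systems in sorted(joined.items()):
    if "login" in systems and "badge" not in systems:
        print(f"anomaly: {user} logged in locally at {hour} with no badge entry")
```

The interesting output is exactly the cross-source case -- here, carol's login -- which neither log would flag on its own.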

Muller: Brett makes a great point. There are actually two things that are important to follow up on here. The first is that, as is true of every analytics conversation I'm having today, everyone talks about the term "data scientist." I prefer the term "data artist," because there's a certain artistry to working out what information feeds I want to bring in.

Maybe "judgment" might be a better word in the context of security, a certain judgment or stylistic question in terms of what data feed I want to bring in. It's that creativity in terms of looking at something that doesn’t seem obvious from the outside, but could be a great leading indicator of potential threat.

The other element is that, once we've got that information, one of the challenges is that we don't want to add to the overhead or the burden of processing it. So it's being able to increasingly apply intelligence to, as Brett talked about, mechanistic patterns. Traditional security information and event management solutions are rather mechanistic; in other words, you apply a set of logical rules to them.

Increasingly, when you're looking at behavioral activities, rules may not be quite as robust as looking at techniques such as information clustering, where you look for hotspots of what seem like unrelated activities at first, but turn out later to be related.

There's a whole bunch of science in the area of crime investigation that we've applied to cybercrime, using some of the same techniques -- Autonomy, for example -- to uncover fraud in the financial services market. The automation behind those techniques is increasingly being applied to the big-data problem that security is starting to deal with.
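A toy sketch of the information-clustering idea Muller describes: instead of fixed rules, group events that fall close together in some feature space and flag dense, high-intensity clusters as hotspots worth a human look. The features, radius, and thresholds below are all invented; a production system would use a proper algorithm such as DBSCAN over far richer features:

```python
# Toy density clustering of security events. Each event is a point in a
# small feature space: (failed-login count in window, distinct hosts
# touched). All values and thresholds are illustrative.

import math

events = [(1, 1), (2, 1), (1, 2),          # normal background noise
          (40, 12), (42, 11), (39, 13),    # a suspicious hotspot
          (41, 12), (3, 2)]

def cluster(points, radius=5.0):
    """Greedy single-link clustering: a point joins a cluster if it is
    within `radius` of any member; otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= radius for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def mean_fails(c):
    """Average failed-login count for a cluster."""
    return sum(p[0] for p in c) / len(c)

# A hotspot: a dense cluster whose failed-login intensity is unusual.
suspicious = [c for c in cluster(events)
              if len(c) >= 3 and mean_fails(c) > 10]
print(f"{len(suspicious)} suspicious hotspot(s) of {sum(map(len, suspicious))} events")
```

No single event in the hotspot would trip a per-event rule; it is the density of seemingly unrelated activity that makes it visible, which is the point of the clustering approach.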

Gardner: I was thinking that, too, Brett, when you were describing this opportunity to bring so much different information together. Yes, you would get some great benefits for security and risk purposes, but to Paul’s point, you also might have unintended consequences in terms of being able to better understand processes, operational efficiencies, and seeing market opportunities that you couldn’t see before.

Have you plumbed that at all? I know it's been a short time since you've been at HP, but are there ancillary paybacks that would be of a business interest in addition to being a security benefit?

Wahlin: Yes. As we further evaluate these data sources, the insight gained from using big data, not only for security but from a business intelligence (BI) perspective, has been well-documented. Our focus has really been on trying to determine the patterns and characteristics of usage.

Developing patterns

While we look at it from a purely security mindset, where we try to develop patterns, it takes on a counterintelligence way of understanding where people go and what they do. As people try to be unique, they tend to fall into patterns that are individual and specific to themselves. Those patterns may develop over weeks or months, but they're there.

Right now, a lot of the time, we'll be asked as a security organization to provide badge swipes as people go in and out of buildings. Can we take that even further and begin to understand where the efficiency would come in, based on the behaviors and characteristics of workforces? Can we divide that by business unit or geography to determine the best use of limited resources across companies? This data could be used in those areas.

The unintended consequence that you brought up, as we look at this and begin to come up with patterns of individuals, is that it begins to reveal a lot about how people interact with systems -- what systems they go to, how often they do things -- and that can be used in a negative way. So there are privacy implications that come right to the forefront as we begin to identify folks.

That will be an interesting discussion going forward, as the data comes out, patterns start to unfold, and patterns become uniquely identifiable to cities, buildings, and individuals. What do we do with those unintended consequences?

It's almost going to be a two-step, where we make a couple of steps forward in progress and technology, then we have to deal with these issues, and that might take us a step back. This area is definitely evolving, and these unintended consequences could be very detrimental if not addressed early.

We don't want to completely shut down these types of activities based on privacy concerns or other legalities, when we could potentially solve those problems systematically as we move forward with investigating the use of these technologies.

Muller: The concern that Brett raises is the flip side of a conversation I've been having surprisingly frequently, partly as a result of heightened awareness of some of the reported intelligence-gathering activities of national governments around the world and the concerns as they relate to privacy.

The flip side that we need to keep in mind, going back to the unintended-consequences conversation, is that every technology we introduce, whether it's the car, the cell phone, or the pocket camera, can obviously have great positive effects. We can put them to great use. But there are always situations where any new technology or capability could ultimately be used in a negative fashion by bad people, or sometimes even unintentionally.

The question we always need to bear in mind here is, as Brett talks about it, what are the potential unintended consequences? How can we get in front of those potential misuses early? How can we be vigilant of those misuses and put in place good governance ahead of time?

There are three approaches. One is to bury your head in the sand and pretend it will never happen. The second is to avoid adopting a technology at all for fear of those unintended consequences. The third is to be aware of them, constantly look for breaches of policy and of good governance, and be able to correct for those if and when they do occur.

Closed-loop cycle

Gardner: Just briefly, if the governance can be put in place, and privacy protections maintained, the opportunity is vast for a tight closed-loop cycle -- of almost a focus group -- in real time of what employees are doing with their systems, what applications they use, and how.

This can be applied to product development and, for a company like HP in the technology product development field, it could be very, very powerful and valuable data, in addition, of course, to being quite powerful for security and risk-reduction purposes.

So it'll be a very interesting next few years, certainly with HAVEn, Vertica, and HP's security businesses. They're probably a harbinger of what other organizations will be doing. Going back to HP, Brett, tell us a bit about what you think HP is doing that will set the stage and perhaps help others learn how to get started with better security and better leveraging of big data as a security tool.

Wahlin: As HP progresses on the predictive security front, we're one of, I believe, two companies that are actually trying to understand how best to use HAVEn as we begin the analytics to determine the appropriate usage of the data at our fingertips. That takes a predictive capability that HP will be building.

We've created something called the Cyber Intelligence Center. The whole intent of that is to develop the methodologies around how the big data is used, the plumbing, and then the sources from which we actually create the big data and how we move logs into big data. That's very different from what we're doing today with traditional ArcSight loggers and ESMs. There are a lot of mechanics we have to build for that.

Then, as we move out of that, we begin to look at the actual actionable intelligence creation to use the analytics. What questions should we ask? Then, when we get the answer, is it something we need to do something about? The lagging piece of this would be the actual creation of agile security. In some places, we even call it mobile security, and it's different than mobility. It's security that can actually move.

If you look at war analogies, back in the day you had columns of men with rifles, and they weren't very mobile. Then, as mechanized infantry and other technologies such as airplanes came online, warfare became much more mobile. What's the equivalent in the cyber-security world, and how do we create it?

Right now, it's quite difficult to move a firewall around. You don't just unplug or re-VLAN a network; it's very difficult, and you can bring down applications. So what is the impact of understanding what's coming at you, maybe tomorrow, maybe next week? Can we actually make an infrastructure that can be reconfigured not only to defend against that attack, but perhaps even to introduce some adversarial confusion?

The adversary says: I've done my reconnaissance; it looks like this. I come at it tomorrow, and it looks completely different. That will set back the adversary's kill chain quite a bit, because most of the time during a kill chain is actually spent figuring out where am I, what do I have, and where are the assets located -- doing reconnaissance through the network.

So there are a lot of interesting things we can do as we come to this next step in the evolution of security. At HP, we're trying to develop that at scale. Being the large company that we are, we get the opportunity to see an enormous amount of data that we wouldn't see if we were another company.

Numerous networks

For example, HP has millions of IP addresses and subnets out there, and we have to account for and figure out what's happening on any one of these networks. This gives us insight into the types of traffic, application configurations, interconnects between subnets, and devices, anything from printers all the way through unreleased operating systems.

How do you deal with things such as manufacturing supply chains that are all connected to these networks? Those types of inputs begin to create the methodologies that feed into the upcoming Cyber Intelligence Center.

Gardner: Paul, it almost sounds as if security is an accelerant to becoming a better, more data-driven organization, which will pay dividends in many ways. Do you agree that security, still necessary and still pertinent, is now perhaps forcing the hand of organizations to modernize in ways they may not have done if we weren't facing such a difficult security environment?

Muller: I completely agree with you. Information security, quite literally an arms race, is a forcing function for many organizations. It would be hard to say this without a sense of chagrin, but the great part is that technologies are actually being developed as a result of this arms race. Take ArcSight Logger as an example.

Those technologies can now be applied to business problems, gathering real-time operational technology data, such as seismic events, Twitter feeds, and so forth, and incorporating those back in for business and public-good purposes. Just as the space race threw up a whole bunch of technologies like Teflon or silicone adhesives that we use today, the security arms race is generating some great byproducts that are being used by enterprises to create value, and that's a positive thing.

Gardner: Last word to you, Brett, before we sign off. Do you concur with this notion of security as an imperative, but one that has a greater, longer-term benefit?

Wahlin: Absolutely. The space-race analogy is perfect as you look at security maturation within an environment. You begin to see that a lot of the things we're doing, whether it's understanding the environment, creating operational metrics around it, or getting in front of the adversaries to create an extremely agile environment, are going to throw off a lot of technology innovations.

It's going to throw off some challenges to the IT industry and how things are put together. That's going to force typically sloppy operations -- the "I'm just going to throw this together, I'm not going to complete an acquisition, I don't document, I don't understand my environment" approach -- to clean up as we go through those processes.

The confusion and complexity within an environment are directly opposed to creating a sense of security. As we create more secure environments, environments capable of detecting anomalies within them, you have to put the hygienic pieces in place. You have to create the technologies that will allow you to leapfrog the adversaries. That's definitely going to be a driver for business efficiencies, as well as for technology and innovation.

Gardner: Well, very good. I'm afraid we'll have to leave it there. We've been exploring how IT leaders are improving security and reducing risks as they adapt to the new and often harsh realities of doing business in cyber land, and we've been learning through the example of HP and how it's adapting as well.

So with that please join me in thanking our cohost, Paul Muller, the Chief Software Evangelist at HP Software. Thanks so much, Paul.

Muller: It's a pleasure, Dana.

Gardner: And I would like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialog with Paul through his blog, tweets, and the Discover Performance Group on LinkedIn. You can also follow more HP security ideas on these product and research blogs.

Then lastly, a huge thank you to our special guest, Brett Wahlin, Vice President and Global Chief Information Security Officer at HP. Thanks so much, Brett.

Wahlin: Thank you, Dana, and thanks, Paul.

Gardner: And you can gain more insight and information on the best in IT performance management at HP.com/go/discoverperformance, and you can always access this and other episodes in the ongoing HP Discover Performance podcast series on iTunes under BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Follow the HP Protect 2013 activities next week, Sept. 16-19.


Transcript of a BriefingsDirect podcast on how increased and more sophisticated attacks are forcing enterprises to innovate and expand security practices to not only detect, but predict system intrusions.  Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
