Tuesday, January 07, 2014

Learn How HP Implemented the TippingPoint Intrusion Prevention System Across its Security Infrastructure

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving the security and availability of services to deliver better experiences and payoffs for businesses and end users alike.

We have a fascinating show today. We’re going to be exploring the ins and outs of improving enterprise intrusion prevention systems (IPS), and we will see how HP and its global cyber security partners have made the HP Global Network more resilient and safe. We’ll hear how a vision for security has been effectively translated into actual implementation.

To learn more about how HP itself has created role-based and granular access control benefits amid real-time yet intelligent intrusion protection, please join me in welcoming our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Welcome to the show, Jim.

Jim O’Shea: Hello, Dana. Thank you.

Gardner: Before we get into the nitty-gritty, what do you think are some of the major trends that are driving the need for better intrusion prevention systems nowadays?

O'Shea: If you look at the past, it was about detection, and you had reaction technologies. We had firewalls that blocked and looked at the port level. Then, we evolved to trying to detect things with malicious intent by using intrusion detection systems (IDS). But that was a reactionary-type thing. It was a nice approach, but we were reacting. Something happened, you reacted, but if you knew it was bad, why did we let it in in the first place?

The evolution was the IPS, the prevention. If you know it's bad, why do you even want to see it? Why do you want to try to react to it? Just block it. That’s the trend that we’ve been following.

Gardner: But we can't just have a black-and-white situation. It's much more gray. There are some sorts of access, I suppose, that we want. We want access control, rather than just a firewall. So is there a new thinking, a new vision, that's been developed over the past several years about these networks and what should or shouldn't be allowed through them?

O'Shea: You're talking about letting the good in. Those are the evolutions and the trends that we're all striving for. Get the good traffic in. Verify who you are. Maybe look at what you have. You can assess the health of your device. Those are all things we're striving for now.

Gardner: I recall Jim, that there was a Ponemon Institute report about a year or so ago that really outlined some of the issues here. Do you recall that? Were there any issues in there that illustrate this trend toward a different type of network and a different approach to protection?

Number of attacks

O'Shea: The Ponemon study was illustrating the vast number of attacks and the trend in the costs of intrusion. It was highlighting those types of trends, all of which we're trying to head off. Those types of reports are guiding factors in taking a more proactive, automated type of response. [Learn more about intrusion prevention systems.]

Gardner: I suppose what's also different nowadays is that we're not only concerned with outside risks, but also with insider attacks. It's being able to detect behaviors and events in the data, so the analysis can then provide perhaps a heads-up across the network, regardless of whether the actor has access or not. What are the risk issues now when we think about insider attacks, rather than just outside penetration?

O'Shea: You're exactly right. Are you hiring the right people? That's a big issue. Are they being influenced? Those are all huge issues. Big data can handle some of that and pull that in. Our approach on intrusion prevention wasn't just to look at what's coming from the outside, but also to look at data traversing the network.

When we deployed the TippingPoint solution, we didn’t change our policies or profiles that we were deploying based on whether it’s starting on the inside or starting on the outside. It was an equal deployment.

An insider attack could also be somebody who walks into a facility, gains physical access, and connects to your network. You have a whole rogue wireless-type approach in which people can gain access and probe and poke around. And if it's malware traffic from our perspective, with the IPS we took the approach that inside or outside doesn't matter. If we can detect it, if we can be in the path, it's a block.

Gardner: For those of our listeners who might not be familiar with the term “intrusion prevention systems,” maybe you could illustrate and flesh that out a bit. What do we mean by IPS? What are we talking about? Are these technologies? Are these processes, methodologies, or all of the above?

O'Shea: TippingPoint technology is an appliance-based technology. It's an inline device. We deploy it inline. It sits in the network, and the traffic flows through it. It's looking at characteristics or reputation of the traffic. Reputation is the more real-time change in the system -- this network, IP address, or URL is known for malware, and so on. That's a dynamic update. The static updates are signature-based: the detection of a vulnerability or of a specific exploit aimed at an operating system.

So intrusion prevention is detecting that traffic, then blocking it and preventing it from completing its communication to the end node.

Gardner: And these work in conjunction with other approaches, such as security information and event management (SIEM) and network-based anomaly detection. Is that correct? How do they work together?

Bigger picture

O'Shea: All the events get logged into HP ArcSight to create the bigger picture. Are you seeing these types of events occurring in other places? So you have the bigger-picture correlation.

Network-based anomaly detection is the ability to detect something that is occurring in the network based on an IP address or on a flow. Taking advantage of reputation, we can insert those IP addresses, detected from flow data, that are doing something anomalous.

It could be that they're beaconing out or spreading a worm. If they look like they're causing concern, with a high degree of accuracy, then we can put that into the reputation feed and take advantage of moving blocks.

So reputation is a self-deploying feature. You insert an IP address into it and it can self-update. We haven’t taken the automated step yet, although that’s in the plan. Today, it’s a manual process for us, but ideally, through application programming interfaces (APIs), we can automate all that. It works in a lab, but we haven’t deployed it on our production that way.
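As a rough illustration of the automation O'Shea describes -- pushing IP addresses flagged by flow-based anomaly detection into the reputation feed through an API -- here is a minimal sketch in Python. The endpoint path, payload fields, and authentication shown are illustrative assumptions, not the actual TippingPoint SMS API.

```python
# Hypothetical sketch: submitting anomalous IPs to a reputation feed so that
# inline devices can begin blocking them. Endpoint, fields, and credentials
# are illustrative, not the real TippingPoint SMS interface.
import requests

SMS_HOST = "https://sms.example.internal"   # hypothetical management console
API_TOKEN = "REDACTED"                      # credential handling is site-specific

def add_to_reputation(ip_addresses, reason="anomalous-beaconing"):
    """Push suspect IPs, flagged by flow-based anomaly detection, into reputation."""
    payload = {
        "entries": [{"ip": ip, "tag": reason} for ip in ip_addresses],
        "action": "block",
    }
    resp = requests.post(
        f"{SMS_HOST}/api/reputation/entries",   # illustrative path
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: IPs surfaced by network-based anomaly detection
# add_to_reputation(["203.0.113.7", "198.51.100.22"])
```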

Gardner: Clearly HP is a good example of a large enterprise, one of the largest in the world, with global presence, with a lot of technology, a lot of intellectual property, and therefore a lot to protect. Let’s look at how you actually approached protecting the HP network.

What’s the vision, if you will, for HP's Global Cyber Security, when it comes to these newer approaches? Do you have an overarching vision that then you can implement? How do we begin to think about chunking out the problem in order to then solve it effectively?

O’Shea: You want to be able to detect, block, and prevent as an overarching strategy. We also wanted to take advantage of inserting a giant filter inline on all data that’s going into the data center. We wanted to prevent mal traffic, mal-formed traffic, malware -- any traffic with the "mal" intent of reaching the data center.

So why make that an application-level decision to block, and rely on host-level defenses, when we have the opportunity to do it at the network? It made the network more hygienically clean, blocking traffic that you don't want to see.

We wrapped it around the data center, so all traffic going into our data centers goes through that type of filter. [Learn more about intrusion prevention systems.]

Gardner: You’ve mentioned a few HP products: TippingPoint and ArcSight, for example, but this is a larger ecosystem approach and play. Tell us a little bit about partnerships, other technologies, and even the partnerships for implementation, not just the technology, but the process and methodologies as well.

Key to deployment

O'Shea: That was key to our deployment, because it is an inline technology and you are going inline in the network. You're changing flows, where it could be mal traffic, but maybe a researcher is trying to do something. So we needed to have that level of partnership with the network team. They have to see it. They have to understand what it is. It has to be manageable.

When we deployed it, we looked at what could go wrong and we designed around that. What could go wrong? A device fails. So we have an N+1 type of installation. If a single device fails, we're not down and we're not blocking traffic. We have to be able to handle the capacity of our network, which is growing, and we are growing, so it has to be built for now and for the future. It has to be manageable.

It has to be able to be understood by “first responders,” the people that get called first. Everybody blames the network first, and then it's the application afterward. So the network team gets pulled in on many calls, at all types of hours, and they have to be able to get that view.

It was key to get them broad-based training, so that the technology knowledge was there, and to get a process integrated into how you're going to handle updates and how you're going to add things beyond what TippingPoint recommends. TippingPoint makes recommendations on profiles and new settings. If we take those, do we want to add other things? So we have to have a global cyber-security view and global cyber-security input, and have that all vetted.

The application team had to be onboard and aware, so that everybody understands. Finally, because we were going into a very large installed network that was handling a lot of different types of traffic, we brought in TippingPoint Professional Services and had everything looked at, re-looked at, and signed off on, so that what we’re doing is a best practice. We looked at it from multiple angles and took a lot of things into consideration.

Gardner: Now, we have different groups of people that need to work in concert to a larger degree than in the past. We have application folks, network folks, outside service providers, and network providers. It seems that we are asking for a complete view of security, which means people need to be coordinated and cooperative in ways that they hadn’t had to be before.

Is there something about TippingPoint and ArcSight that provides data, views, and analytics in such a way that it's easier for these groups to work together in ways that they hadn’t before? We know that they have to work together, but is there something about the technology that helps them work together, or gives them common views or inputs that grease the skids to collaboration?

O'Shea: One of the nice things about the way the TippingPoint events occur is that you have a choice. You can send them from the individual units themselves or you can proxy them from the management console. Again, the ability to manage was critical to us, so we chose to do it from the console.

We proxy the events. That gives us the ability to have multiple ArcSight instances and also to evolve. ArcSight evolves. When they’re changing, evolving, and growing, and they want to bring up a new collector, we’re able to send very rapidly to the new collector.

ArcSight pulls in firewall logs. You can get proxy events and events from antivirus. You can pull in that whole view and get a bigger picture at the ArcSight console. The TippingPoint view is of what’s happening from the inline TippingPoint and what's traversing it. Then, the ArcSight view adds a lot of depth to that.

Very flexible

So it gives a very broad picture, but from the TippingPoint side, we're very flexible and able to add and stay in step with ArcSight growth quickly. It works in concert. That includes sending events on different ports. You're not restricted to one port. If you want to create a secure port or a unique port for your events to go to ArcSight, you have that ability.
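To give a concrete sense of the event stream being proxied to ArcSight, here is a minimal sketch that parses a CEF-style syslog line of the kind a SIEM collector receives. The vendor, product, and field values in the sample are made up for illustration and are not actual TippingPoint output; real CEF extensions also allow spaces in values, which this simplified parser ignores.

```python
# Minimal sketch: parsing a CEF-style event line of the kind a SIEM collector
# receives over syslog. The sample values are illustrative, not real output.
def parse_cef(line: str) -> dict:
    """Split a CEF record into its header fields and key=value extensions."""
    _, cef = line.split("CEF:", 1)
    parts = cef.split("|", 7)
    header_names = ["version", "vendor", "product", "device_version",
                    "signature_id", "name", "severity"]
    event = dict(zip(header_names, parts[:7]))
    # Extension block: space-separated key=value pairs (simplified parsing)
    extensions = parts[7] if len(parts) > 7 else ""
    for pair in extensions.split():
        if "=" in pair:
            key, value = pair.split("=", 1)
            event[key] = value
    return event

sample = ("<134>CEF:0|ExampleVendor|ExampleIPS|3.6|4321|"
          "HTTP: Suspicious URI Access|8|src=203.0.113.7 dst=10.1.2.3 act=Block")
print(parse_cef(sample))
```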

Gardner: We've heard, of course, how important real-time reaction is, and even gaining insights to be able to anticipate and be proactive. What did you learn through this process that allowed you to reduce or eliminate that latency, so that the amount of time that things go on is cut? I've heard that a lot of times you can't prevent intrusion, but you can prevent the damage of intrusion. So how does it work in terms of this low-latency time element?

O'Shea: With TippingPoint, you get to see when an exploit is triggered. TippingPoint has a concept of zero-days and it has a concept of Reputation. Reputation is an ongoing change, and zero-day coverage comes with the deployment of a profile. Think of Reputation as a constant updating of signatures as sites change and as the industry recognizes them. That gives you the ability to have a view of a site that people frequented and that may now be compromised. You have the ability to see that because the Reputation of the site changed.

With TippingPoint being a blocking technology, you have the low latency -- the traffic is detected and blocked inline -- but when you pull it back into ArcSight, you have the ability to see a holistic view. We're seeing these events, or something that looks similar. The network-based anomaly detection is reporting some strange things happening, or you have some antivirus tools that are reporting.

That’s a different type of reaction. You can react and deploy and say that you want to take action against whatever it is you are seeing. Maybe you need to put up a new firewall block to alleviate something.

Or on the other hand, if TippingPoint is not seeing it, maybe you have the opportunity to activate a new signature more rapidly and deploy a new profile. This is something new, and you can take action right away.

Gardner: Jim, let's talk a bit about what you get when you do this correctly. So using HP’s example, what were some of the paybacks, both in technical terms, maybe metrics of success technically, but then also business results? What happens when you can deploy these systems, develop those partnerships, and get cooperation? How can we measure what we have done here?

O'Shea: One of the things that we did wrong in our deployment is that we didn't have a baseline of what is mal or what is bad. So, as it was a moving deployment, we don't have hard-and-fast metrics for a before-and-after view. But again, you don't know what's bad until you start trying to detect it. It might not have been possible for us to even take that type of view.

We deployed TippingPoint. After the deployment we've had some denial-of-service (DoS) attacks against us, and they have been blocked and deflected. We've had some other events that we have been able to block and defend against rapidly. [Learn more about intrusion prevention systems.]

If you think back historically to how we dealt with them, those were kind of Whac-A-Mole-type defenses. Something happened, and you reacted. So I guess the metric would be that we're not as reactionary, but do we have hard metrics to prove that? I don't have those.

How much volume?

Gardner: We can appreciate the scale of what the systems are capable of. Do we have a number of events detected or that sort of thing, blocks per month, any sense of how much volume we can handle?

O’Shea: We took a month’s sample. I’m trying to recall the exact number, but it was 100 million events in one month that were detected as mal events. That’s including Internet-facing events. That’s why the volume is high, but it was 100 million events that were automatically blocked and that were flagged as mal events.

Gardner: How do you now take this out to the market? Is there a cyber-security platform? Do you have a services component? You’ve done this internally, but how do you take this out to the market, combining the products, the services, and the methodologies?

O’Shea: I’m not on the product marketing side, but TippingPoint has learned from us and we’ve partnered with them. We’re constantly sharing back with them. So the give-back to TippingPoint, as a product division, is that they can see real traffic, in a real high-volume network, and they can pretest their signatures.

There are active lighthouse-type installs, lighthouse meaning that they’re not actively blocking. They’re just observing, and they are testing their next iteration of software and the next group of profiles. They’re able to do that for themselves, and it's a give back that has worked. What we receive is a better product, and what everybody else receives is a better product.

The Professional Services teams have been able to deploy in a very large network and have worked with the requirements that a large enterprise has. That includes standard deployment, how things are connected, what the drawings are going to look like, and how you are going to cable it up.

A large enterprise has different standards than a small business would have, and that was a give back to the Professional Services to be able to deploy it in a large enterprise. It has been a good relationship, and there is always opportunity for improvement, but it certainly has helped.

Current trends

Gardner: Jim, looking to the future a little bit, we know that there’s going to be more and more cloud and hybrid-cloud types of activities. We’re certainly seeing already a huge uptick in mobile device and tablet use on corporate networks. This is also part of the bring-your-own-device (BYOD) trend that we’re seeing.

So should we expect a higher degree of risk and more variables and complication, and what does that portend for the use of these types of technologies going forward? How much gain do you get by getting on the IPS bandwagon sooner rather than later?

O’Shea: BYOD is a new twist on things and it means something different to everybody, because it's an acronym term, but let's take the view of you bringing in a product you buy.

Somebody is always going to get a new device. They are going to bring it in, try it out, and connect it to the corporate network, if they can. And because they are coming from a different environment and they're not necessarily up to corporate standards, they may bring unwanted guests into the network, in terms of malware.

Now, we have the opportunity, because we are inline, to detect and block that right away. Because we are an integrated ecosystem, they will show up as anomalous events. ArcSight and our Cyber Defense Center will be able to see those events. So you get a bigger picture.

Those events can then be translated into removing that node from the network. We have the opportunity to do that. BYOD not only brings your own device, it also brings things you don't know are going to happen, and the only way to block that is prevention and anomaly-type detection, and then trying to bring it all together into a bigger picture.

Gardner: Well, great. I’m afraid we will have to leave it there. We’ve been learning about the modern ins and outs of improving enterprise intrusion prevention systems, and we’ve heard about how HP itself has created more of a granular access control benefit amid real-time, yet intelligent, intrusion detection and protection.

I’d like to thank the supporter for this series, HP Software, and remind our audience to carry on the dialogue through the Discover Group on LinkedIn. And of course, a big thank you to our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Thanks so much, Jim.

O’Shea: Thank you.

Gardner: And lastly, our appreciation goes out to our global audience for joining us once again for this HP Discover Podcast discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored business success stories. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Learn more about intrusion prevention systems.

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.

Thursday, December 12, 2013

Healthcare Turns to Big Data Analytics Platforms to Gain Insight and Awareness for Improved Patient Outcomes

Transcript of a BriefingsDirect podcast on the need to tap the potential of big data to improve healthcare delivery and how the technology to do that is currently lagging.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion of IT innovation and how it's making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving their services to deliver better experiences and payoffs for businesses and end users alike. I’m now joined by our co-host for this sponsored series, Chief Software Evangelist at HP, Paul Muller. Welcome Paul, how are you today?

Paul Muller: Fighting fit and healthy, Dana. Yourself?

Gardner: Glad to hear it. I’m doing very well, thanks. We’re going to now examine the impact that big-data technologies and solutions are having on the highly dynamic healthcare industry. We’ll explore how analytics platforms and new healthcare-specific solutions together are offering far greater insight and intelligence into how healthcare providers are managing patient care, cost, and outcomes.

And we’re going to hear firsthand of how these new offerings, announced this week at the HP Discover Conference in Barcelona, are designed specifically to give hospitals and care providers new data-driven advantages as they seek to transform their organizations.

With that, please join me in welcoming our guest, Patrick Kelly, Senior Practice Manager at the Avnet Services Healthcare Practice. Welcome, Patrick. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Patrick Kelly: Thank you, Dana. It's great to be with both you and Paul.

Gardner: Just to put this into some perspective, Paul, as you travel the globe, as I know you do, how closely are you seeing an intersection between big data and the need for analytics in healthcare? Is this a US-specific drive, or is this something that's sweeping many markets as well?

Muller: It's undoubtedly a global trend, Dana. One statistic that sticks in my mind is that in 2012 there was an estimated 500 petabytes of digital healthcare data across the globe. That's expected to reach 25,000 petabytes by the year 2020. So, that's a 50-times increase in the amount of digital healthcare data that we expect to be retaining.

The reason for that is simply that having better data helps us drive better healthcare outcomes. And we can do it in a number of different ways. We move to what we call more evidence-based medicine, rather than subjecting people to a battery of tests, or following a script, if you like.

The tests or the activities that are undertaken with each individual are more clearly tailored, based on the symptoms they're presenting with, and data helps us make some of those decisions.

Basic medical research

The other element of it is that we're now starting to bring in more people and engage more people in basic medical research. For example, in the US, the Veterans Administration has a voluntary program that's using blood samples and health information from military veterans. Over 150,000 have enrolled to help give us a better understanding of healthcare.

We’ve had similar programs in Iceland and other countries where we were using long-term healthcare and statistical data from the population to help us spot and address healthcare challenges before they become real problems.

The other, of course, is how we better manage healthcare data. A lot of our listeners, I’m sure, live in countries where electronic healthcare records (EHR) are a hot topic. Either there is a project under way or you may already have them, but that whole process of establishing them and making sure that those records are interchangeable is absolutely critical.

Then, of course, we have the opportunity of utilizing publicly available data. We've all heard of Google being utilized to identify outbreaks of flu in various countries based on the frequency with which people search for flu symptoms.

So, there’s definitely a huge number of opportunities coming from data. The challenge that we’ll find so frequently is that when we talk about big data, it's critical not just to talk about the size of the data we collect, but the variety of data. You’ve got things like structured EHR. You have unstructured clinical notes. If you’ve ever seen a doctor’s scribble, you know what I’m talking about.

You have medical imaging data, genetic data, and epidemiological data. There's a huge array of data that you need to bring together, in addition to just thinking about the size of it. Of course, overarching all of these are the regulatory and privacy issues that we have to deal with. It's a rich and fascinating topic.

Gardner: Patrick Kelly, tell us a little bit about what you see as the driving need technically to get a handle on this vast ocean of healthcare data and the huge potential for making good use of it? 

Kelly: All the points Paul brought up were spot-on. It really is a problem of how to deal with such a deluge of data. Also, there’s a great change that’s being undertaken because of the Affordable Care Act (ACA) legislation and that’s impacting not only the business model, but also the need to switch to an electronic medical record.

Capturing data

From an EHR perspective, to date IT has focused on capturing that data. They take what's on a medical record and transpose it into an electronic format. Unfortunately, where we've fallen short in helping the business is in taking the data that's captured and making it useful and meaningful through analytics -- helping the business gain visibility and be able to pivot and change as the pressure to change the business model is brought to bear on the industry.

Gardner: For those of our audience who are not familiar with Avnet, please describe your organization. You’ve been involved with a number of different activities, but healthcare seems to be pretty prominent in the group now. [Learn more about Avnet's Healthcare Analytics Practice.]

Kelly: Avnet has made a pretty significant investment over the last 24 months to bolster the services side of the world. We've brought around 2,000 new personnel on board to focus on everything in the ecosystem, from -- as we're talking about today -- healthcare all the way up to hardware, educational services, and supporting partners like HP. We happen to be HP's largest enterprise distributor. We also have a number of critical channel partners.

In the last eight months, we came together and brought on board a number of individuals who have deep expertise in healthcare and security. They're focused on building out a healthcare practice that not only provides services, but is also developing a healthcare analytics platform.

Gardner: Paul Muller, you can't buy healthcare analytics in a box. This is really a team sport, an ecosystem approach. Tell me a little bit about what Avnet is and how important they are to HP, and, of course, there are going to be more players as well.

Muller: The listeners would have heard from the HP Discover announcements over the last couple of days that Avnet and HP have come together around what we call the HAVEn platform. HAVEn, as we might have talked about previously on the show, stands for Hadoop, Autonomy, Vertica, Enterprise Security, with the "n" being any number of apps. [Learn more about the HAVEn platform.]

The "n" or any numbers of apps is really where we work together with our partners to utilize the platform, to build better big-data enabled applications. That’s really the critical capability our partners have.

What Avnet brings to the table is the understanding of the HAVEn technology, combined with deep expertise in the area of healthcare and analytics. Combining that, we've created this fantastic new capability that we’re here to talk about now.

Gardner: Back to you, Patrick. Tell me a bit about what you think are the top problems that need to be solved in order to get healthcare information and analytics to the right people in a speedy fashion. What are our hurdles to overcome here?

Kelly: If we pull back the covers and look at some of the problems or challenges around advancing analytics and modernization into healthcare, it’s really in a couple of areas. One of them is that it's a pretty big cultural change.

Significant load

Right now, we have an overtaxed IT department that’s struggling to bring electronic medical records online and to also deal with a lot of different compliance things around ICD-10 and still meet meaningful use. So, that’s a pretty significant load on those guys.

Now, they're being asked to look at delivering information to the business side of the world. And right now, there's not a good understanding, from an enterprise-wide view, of how to use analytics in healthcare.

So, part of the challenge is governance and strategy and looking at an enterprise-wide road map to how you get there. From a technology perspective, there’s a whole problem around industry readiness. There are a lot of legacy systems floating around that can range from 30-year-old mainframes up to more modern systems. So there’s a great deal of work that has to go around modernizing the systems and then tying them together. That all leads to problems with data logistics and fragmentation and really just equals cost and complexity.

The traditional approaches that other industries have followed -- enterprise data warehouses and traditional extract, transform, load (ETL) -- are just too costly, too slow, and too difficult for a healthcare system to leverage. Finally, there are a lot of challenges in the workflow processes.

Muller: These sound conceptual at a high level, but the impact on patient outcomes is pretty dramatic. One statistic that sticks in my head is that hospitalizations in the U.S. are estimated to account for about 30 percent of the trillions of dollars in annual cost of healthcare, with around 20 percent of all hospital admissions occurring within 30 days of a previous discharge.

In other words, we’re potentially letting people go without having completely resolved their issues. Better utilizing big-data technology can have a very real impact, for example, on the healthcare outcomes of your loved ones. Any other thoughts around that, Patrick?

Kelly: Paul, you hit a really critical note around re-admissions, something that, as you mentioned, has a real impact on the outcomes of patients. It's also a cost driver. Reimbursement rates are being reduced when hospitals fail to address the shortfalls, either in education or in follow-up care, that end up landing patients back in the ER.

You're dead on with re-admissions, and from a big-data perspective, there are two stages to look at. There's a retrospective look that is a challenge, even though it's not a traditional big-data challenge. There's still a lot of data and a lot of elements to look into just to identify patients who have been readmitted and track those.

But the more exciting and interesting part of this is the predictive side: looking forward and seeing the patient's conditions, their co-morbidities, how sick they are, what kind of treatment they received, what kind of education they received, the follow-up care, as well as how they behave in the outside world. Then, it's bringing all of that together and building a model to be able to determine whether this person is at risk of readmission. If so, how do we target care to them to help reduce that risk?
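As a rough sketch of the kind of predictive readmission model Kelly describes, the following uses scikit-learn with purely illustrative, synthetic features (comorbidity count, prior admissions, length of stay, whether follow-up care is scheduled). A real model would be trained on EHR and claims data and would need clinical validation.

```python
# Minimal sketch of a 30-day readmission risk model of the kind described above.
# Feature names and data are illustrative; a real model would draw on EHR,
# claims, and follow-up-care data, and would need clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative features: [comorbidity_count, prior_admissions_12mo,
#                         length_of_stay_days, followup_scheduled (0/1)]
X = np.column_stack([
    rng.poisson(2, n),
    rng.poisson(1, n),
    rng.gamma(2.0, 2.0, n),
    rng.integers(0, 2, n),
])
# Synthetic label: higher risk with more comorbidities and no follow-up scheduled
logit = 0.5 * X[:, 0] + 0.7 * X[:, 1] + 0.1 * X[:, 2] - 1.2 * X[:, 3] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a hypothetical patient and flag for targeted follow-up care if high risk
patient = np.array([[4, 2, 6.5, 0]])
risk = model.predict_proba(patient)[0, 1]
print(f"Estimated readmission risk: {risk:.2f}")
```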

Gardner: We certainly have some technology issues to resolve and some cultural shifts to make, but what are the goals in the medical field, in the provider organizations themselves? I'm thinking of such things as cutting cost, but more than that, things about treatments and experience, and even gaining perhaps a holistic view of a patient, regardless of where they are in the spectrum.

Waste in the system

Muller: You kind of hit it there, Dana, with the cutting cost. I was reading a report today, and it was kind of shocking. There is a tremendous amount of waste in the system, as we know. It said that in the US, of the 17.6 percent of the nation's GDP that goes to healthcare, some $600 billion is potentially being misspent. A lot of that is due to unnecessary procedures and tests, as well as operational inefficiency.

From a provider perspective, it's getting a handle on those unnecessary procedures. I'll give you an example. There's been an increase in the last decade of elective deliveries, where someone comes in and says that they want to have an early delivery for whatever reason. The impact, unfortunately, is additional time in the neonatal intensive care unit (NICU) for the baby.

It drives up a lot of cost and is dangerous for both the mother and child. So getting a handle on where the waste is within their four walls -- whether it's operational inefficiency, unnecessary procedures, or tests -- and being able to apply Lean Six Sigma and some of those processes is necessary to help reduce that.

Then, you mentioned treatments and how to improve outcomes. Another shocking statistic is that medical errors are the third leading cause of death in the US. In addition to that, employers end up paying almost $40,000 every time someone receives a surgical site infection.

Those medical errors can be everything from a sponge left in a patient, to a mis-dose of a medication, to an infection. Those all lead to a lot of unnecessary deaths, as well as driving up cost not only for the hospital but for the payers of the insurance. These are areas they will get visibility into, to understand where variation is happening and eliminate it.

Finally, a new aspect is customer experience. Somehow, reimbursements are going to be tied to -- and this is new for the medical field -- how I as a patient enjoy, for lack of a better term, my experience at the hospital or with my provider, and how engaged I have become in my own care. Those are critical measures that analytics are going to help provide.

Gardner: We have a big chore ahead of us with the need for changing the way that IT is conducted in these organizations. Obviously, what you've just described are different ways of doing medicine based on data and analysis, but we also have this change in the way that medicine is being delivered in the US. You mentioned the ACA. We're moving from a pay-by-procedure basis much more to a pay-by-outcomes basis. This shifts and transforms things tremendously, too.

Now that we have a sense of this massive challenge ahead of us, what are organizations like Avnet and providers like HP with HAVEn doing that will help us start to get a handle on this? Give us a sense, Patrick, of what you are bringing to the market with the announcement in Barcelona.

Kelly: As difficult as it is to reduce complexity in any of these analytic engagements, it's very costly and time-consuming to integrate any new system into a hospital. One of the key things is to be able to reduce the time to value of a system that you introduce into the hospital and use to target very specific analytical challenges.

From Avnet’s perspective, we’re bringing a healthcare platform that we’re developing around the HAVEn stack, leveraging some of those great powerful technologies like Vertica and Hadoop, and using those to try to simplify the integration task at the hospitals.

Standardized inputs

We’re building inputs from HL7, which is just a common data format within the hospital, trying to build some standardized inputs from other clinical systems, in order to reduce the heavy lift of integrating a new analytics package in the environment.
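For readers unfamiliar with HL7, here is a minimal sketch of parsing an HL7 v2 message into its segments. The sample message is fabricated and the field handling is deliberately simplified compared with a production interface engine or HL7 library.

```python
# Minimal sketch of parsing an HL7 v2 message of the kind hospital interfaces
# exchange. The sample message and field positions are simplified for
# illustration; production systems typically use a full HL7 engine or library.
SAMPLE_HL7 = "\r".join([
    "MSH|^~\\&|EXAMPLE_EHR|HOSPITAL_A|ANALYTICS|AVNET|202312120830||ADT^A01|12345|P|2.3",
    "PID|1||MRN00042^^^HOSPITAL_A||DOE^JANE||19700101|F",
    "PV1|1|I|ICU^101^A|||||||MED||||||||V98765",
])

def parse_segments(message: str) -> dict:
    """Return a dict mapping segment name (MSH, PID, ...) to its field list."""
    segments = {}
    for raw in message.split("\r"):
        if not raw:
            continue
        fields = raw.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_segments(SAMPLE_HL7)
# Field numbering is simplified: PID-3 carries the patient identifier list
print("Patient ID:", msg["PID"][3].split("^")[0])
print("Event type:", msg["MSH"][8])
```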

In addition, we’re looking to build a unified view of the patient’s data. We want to extend that beyond the walls of the hospital and build a unified platform. The idea is to put a number of different tools and modular analytics on top of that to have some very quick wins, targeted things like we've already talked about, from readmission all the way into some blocking and tackling operational work. It will be everything from patient flow to understanding capacity management.

It will bring a platform that accelerates integration and analytics delivery in the organization. In addition, we're going to wrap that in a number of services that range from early assessment, to road map and strategy, to help with business integration, all the way to continuing to build and support the product with the health system.

The goal is to accelerate delivery around the analytics, get the tools that they need to get visibility into the business, and empower the providers and give them a complete view of the patient.

Gardner: Paul, it's very impressive when you look at what can be done when an ecosystem comes together. When you look at applications like what Avnet is delivering, it seems to me they're also changing the game in terms of who can use these analytics. We're seeing visualizations, and we're seeing modular approaches like Patrick described. How much of a sea change are we seeing in terms of not just creating better analytics, but getting them to more people -- perhaps people who had never really had access to this intelligence before?

Muller: That’s a critical element. It's simple, easy to understand, and visualizations are an important element of it. The other is just simply the ability to turn these sorts of questions around more quickly.

If you think about traditional medical studies and even something as simple as drug development, in the past getting access to the data, being able to have a conversation with the data, has been very difficult, because sourcing it, scrubbing it, correlating it, processing it has taken years.

Even simple queries could take days to run. It's become more complex, and you have to do things like look for correlations across longitudinal records or understand unstructured clinical notes that have been written by a doctor or, more importantly, by different doctors. Each of them is writing something similar, but in a different way. Then, there's the massive volume of information involved. Patrick touched on some of the behavioral aspects or lifestyle choices people make.

The ability to take all of that information at one time and have a conversation with it -- where you slice and dice it and interact with it -- is another important aspect of the usability and of democratizing access to some of that information. Whether it's researchers, government officials, or healthcare workers looking, for example, for potential outbreaks of disease or planning a better healthcare system, it's not just great visualizations that are important. That certainly helps, but it's the immediacy of interaction that is going to make the biggest difference.

Gardner: Patrick, when you do these basic infrastructure improvements, when you create a different culture to make the data analysis available fast, you start to get toward that predictive, rather than reactive, approach. Do you have some sense, or even examples, of what good can come of this? Are there some tangible benefits, some soft benefits, to get as a payback? I'm thinking fairly quickly, because we probably need to demonstrate value rather soon in this environment.

About visibility

Kelly: Dana, any first step with this is about visibility. It opens eyes to processes in the organization that are problematic, and that can be very basic -- things like scheduling in the operating room and utilization of that time, or the length of stay of patients.

A very quick win is to understand why your patients seem to be continually having problems and staying in the bed longer than they should. It's being able, while they're filling those beds, to redirect care -- case workers, medical care, and everything necessary -- to help them get out of the hospital sooner and improve their outcomes.

A lot of times, we've seen a look of surprise when we've shown that here is a patient who has been in for 10 days for a procedure that should have been only a two-day stay -- really giving visibility there. That's the first step, though a very basic one.
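A minimal sketch of the kind of length-of-stay visibility described above might look like the following, assuming the vertica-python client and an illustrative encounters table. All table, column, and connection names are made up for illustration.

```python
# Sketch of the length-of-stay visibility described above: find inpatient
# encounters running well past the expected stay for their procedure.
# Table and column names are illustrative; connection details are site-specific.
import vertica_python

conn_info = {"host": "vertica.example.internal", "port": 5433,
             "user": "analytics", "password": "REDACTED", "database": "clinical"}

QUERY = """
SELECT e.patient_id,
       e.procedure_code,
       DATEDIFF('day', e.admit_date, COALESCE(e.discharge_date, CURRENT_DATE)) AS los_days,
       b.expected_los_days
FROM encounters e
JOIN procedure_benchmarks b ON b.procedure_code = e.procedure_code
WHERE DATEDIFF('day', e.admit_date, COALESCE(e.discharge_date, CURRENT_DATE))
      > 2 * b.expected_los_days
ORDER BY los_days DESC;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for patient_id, proc, los, expected in cur.fetchall():
        print(f"Patient {patient_id}: {los} days (expected ~{expected}) for {proc}")
```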

As we start attacking some of these problems around hospital-based infections -- helping the provider make sure they are covering all their bases, following best practices, and eliminating the variation between each physician and care provider -- you start seeing some real, tangible improvements in outcomes and in saving people's lives.

When you see that for any population -- be it stroke or, as we talked about earlier, re-admissions with heart failure -- and you're able to make sure those patients are avoiding things like pneumonia, you bring visibility.

Then, predictive models and optimizing how the providers and the caregivers are working are really key. There are some quick wins to be had. Traditionally, we built these master repositories that we then built reports on top of -- a year and a half to delivery of any value -- whereas we're looking to focus on very specific use cases and tackle them very quickly, in a 90- to 120-day period.

Gardner: Patrick, do you have any early-adopter examples you can provide for us, so that we have a sense of what types of organizations are putting this into place, what they’ve done first, and what have been the outcomes?

Kelly: We're partnering with a 12-hospital health care system, dealing again with some blocking and tackling around understanding better how to utilize their physician network.

A challenge for a hospital that has acquired a number of physicians is how to get visibility into those physician practices. How do you understand the kinds of things we've talked about -- cost, patient experience, outcomes -- out in the wild, in the primary care offices, and in the specialty offices? That data has traditionally just been completely segmented from the hospital systems.

The challenge is building tools that are going to be leveraged by the physicians themselves, as well as by the hospitals at an executive level, and utilizing that information to help optimize how those practices are running. It's kind of a basic problem for most businesses, but it's something very real for hospitals to deal with.

Massive opportunity

Gardner: Paul Muller, this seems to be a massive opportunity, something that will be going on for many years with HP, Vertica, and HAVEn. Trillions of dollars have been spent, and there are ways that can give us better patient experiences, better health, and lower mortality rates. So, it's a win, win, win, right? The hospitals win, the insurers win, the governments win, the patients win, the doctors win. What sort of opportunity is this and how is HP going at it?

Muller: You've absolutely nailed the assessment there. It's an all-around benefit. A healthy society is a healthy economy. That's pretty crystal clear to everybody. The opportunity for HP and our partners is to help enable that by putting the right data at the fingertips of the people with the potential to generate life-saving or lifestyle-improving insights. That could be developing a new drug, improving the inpatient experience, or helping us identify longer-term issues like genetic or other sorts of congenital diseases.

From our perspective, it's about providing the underlying platform technology, HAVEn, as the big-data platform. Avnet, part of the great partner ecosystem we've developed, is a wonderful example of an organization that's taken the powerful platform and very quickly turned it into something that can help not only save money but, as we just talked about, save lives, which I think is fantastic.

Gardner: Patrick, as we wrap up, we can certainly see many ways in which these technologies in this analysis can be used immediately for some very significant benefits. But I’m thinking that it also puts in place a tremendous foundation for what we know is coming in the future -- more sensors, more information coming from the patients, more telemetry, so that it's coming remotely, maybe from their bodies, while they are out of the hospital.

We know that mobile devices are becoming more and more common, not only in patient environments, but in the hospitals and the care-provider organizations. We know the cloud and hybrid cloud services are becoming available and can distribute this data and integrate it across so many more types of processes.

It seems to me that you not only get a benefit from getting to a big-data analysis capability now, but it puts you in a position to be ready when we have more types of data -- more speed, more end points, and, therefore, more requirements for what your infrastructure, whether on premises or in a cloud, can do. Tell me a little bit about what you think the Avnet and HP solution does to set you up for these future trends.

Kelly: At this point, technology today is just not where it needs to be, especially in healthcare. An EKG spits out 1,000 data points per second. There is no way, at this point, without the right technology, that you can actually deal with that.

If we look to a future where providers do less manual monitoring -- less vitals collection, fewer physicals -- and all of that is coming from your mobile device and from intelligent machines, there really needs to be an infrastructure in place to deal with that.
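To put the 1,000-samples-per-second figure in perspective, here is a quick back-of-the-envelope calculation; the bytes-per-sample and patient-census values are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope sizing for continuous EKG telemetry, using the
# 1,000-samples-per-second figure cited above. Bytes-per-sample and patient
# counts are illustrative assumptions, not measured values.
SAMPLES_PER_SECOND = 1_000
BYTES_PER_SAMPLE = 2            # e.g., a 16-bit ADC reading (assumption)
SECONDS_PER_DAY = 24 * 60 * 60
MONITORED_PATIENTS = 500        # hypothetical hospital telemetry census

samples_per_patient_per_day = SAMPLES_PER_SECOND * SECONDS_PER_DAY
bytes_per_patient_per_day = samples_per_patient_per_day * BYTES_PER_SAMPLE
fleet_bytes_per_day = bytes_per_patient_per_day * MONITORED_PATIENTS

print(f"{samples_per_patient_per_day:,} samples per patient per day")          # 86,400,000
print(f"{bytes_per_patient_per_day / 1e6:.0f} MB per patient per day (raw)")   # ~173 MB
print(f"{fleet_bytes_per_day / 1e9:.1f} GB per day across the fleet (raw)")    # ~86.4 GB
```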

I spent a lot of time working with Vertica even before Avnet. Vertica, Hadoop, and leveraging Autonomy in the area of unstructured data are the technologies that are going to allow the scalability and the growth that are necessary to leverage the data, make it an asset rather than a challenge, and allow us to transform healthcare.

The key to that is unlocking this tremendous trove of data. In this industry, as you guys have said, it's very much life and death, versus purely a financial incentive.

Targeting big data

Muller: I might jump in on that as well, Dana. This is an important point that we can’t lose sight of as well. As I said when you and I hosted the previous show, big data is also a big target.

One of the things that every healthcare professional, every regulator, and every member of the public needs to be mindful of is the large accumulation of sensitive personally identifiable information (PII).

It's not just a governance issue; it's a question of morals and of making sure that we are doing the right thing by the people who are trusting us not just with their physical care, but with how they present in society. Medical information can be sensitive when available not just to criminals but even to prospective employers, members of the family, and others.

The other thing we need to be mindful of is that we've got to not just collect the big data; we've got to secure it. We've got to be really mindful of who's accessing what, when they're accessing it, whether they're accessing it appropriately, and whether they've done something like taking a copy or moving it elsewhere that could indicate malicious intent.

It's also critical we think about big data in the context of health from a 360-degree perspective.

Kelly: That's a great point. And to step back a little bit on that, one of the things that brings me a little comfort is that there are some very clear guidelines, in the way of HIPAA, around how this data is managed, and we look at baking the security into it, in everything from encryption to auditability.

But it's also training the staff working in these environments and making sure that all of that training is put in place to ensure the safety of that data. One of the things that always leaves me scratching my head is that I can go down the street into the grocery store and buy a bunch of stuff, and by the time I get to the register, they seem to know more about me than the hospital does when I go to the hospital.

That’s one of the shocking things that make you say you can’t wait until big data gets here. I have a little comfort too, because there are at least laws in place to try to corral that data and make sure everyone is using it correctly.

Gardner: Very good. I’m afraid we’ll have to leave it there. Please join me in thanking our co-host, Paul Muller, Chief Software Evangelist at HP. Thanks so much, Paul.

Muller: Thank you for having me back on the show again, Dana. I really love being here.

Gardner: Of course, and also a thank you to the supporter of this series, HP Software. And a reminder to our audience to carry on the dialogue with Paul Muller through the Discover group on LinkedIn. We've been having a discussion about how big data and healthcare are intersecting and how there's a huge opportunity for far greater insight and intelligence into how healthcare providers are managing their patients' care, the costs, and ultimately the outcomes.

And I’d also like to remind you that you can access this, and other episodes of the HP Discover podcast series on iTunes under BriefingsDirect.

And, of course, a big thank you to our guest. We’ve been talking with Patrick Kelly, Senior Practice Manager at the Avnet Services Healthcare Practice. Thanks so much, Patrick.

Kelly: Thank you, guys.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host for this ongoing series. And lastly, a big thank you to our audience for joining this HP Discover discussion, and a reminder to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on the need to tap the potential of big data to improve healthcare delivery and how the technology to do that is currently lagging. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
