Wednesday, January 29, 2014

Healthcare Among Thorniest and Yet Most Opportunity-Rich Use Cases for Boundaryless Information Flow Improvement

Transcript of a BriefingsDirect podcast on how The Open Group is addressing the information needs and challenges in the healthcare ecosystem.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hello, and welcome to a special BriefingsDirect panel discussion coming to you in conjunction with The Open Group Conference on February 3 in San Francisco.

Gardner
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator as we examine how the healthcare industry can benefit from improved and more methodical information flow.

Healthcare, like no other sector of the economy, exemplifies the challenges and the opportunity for improving how the various participants in a complex ecosystem interact. The Open Group, at its next North American conference, has made improved information flow across so-called boundaryless organizations the theme of its gathering of IT leaders, enterprise architects, and standards developers and implementers.

Join us now, as we explore what it takes to bring rigorous interactions, process efficiency, and governance to data and workflows that must extend across many healthcare participants with speed and dependability.

Learn how improved cross-organization collaboration plays a huge part in helping to make healthcare more responsive, effective, safe, and cost-efficient. And also become acquainted with what The Open Group’s new Healthcare Industry Forum is doing to improve the situation.

With that, please join me in welcoming our guests, Larry Schmidt, the Chief Technologist at HP for the Americas Health and Life Sciences Industries, as well as the Chairman of The Open Group Healthcare Industry Forum. Welcome, Larry. [Disclosure: HP is a sponsor of BriefingsDirect podcasts. The views of the panelists are theirs alone and not necessarily those of their employers.]

Larry Schmidt: Thank you.

Gardner: We’re also here with Eric Stephens, an Oracle Enterprise Architect. Welcome, Eric.

Eric Stephens: Thank you, Dana.

Gardner: Gentlemen, we have you both here because you are going to be at The Open Group Conference in February in San Francisco. We want to get into this new Healthcare Forum, but before we get into the particulars of what we can do to help the healthcare situation, let’s try to define a little bit better the state of affairs. [Register for the event here.]

So first to you, Larry. Why is healthcare such a tough nut to crack when it comes to this information flow? Is there something unique about healthcare that we don't necessarily find in other vertical industries?

Schmidt: What’s unique about healthcare right now is that in order to answer the question we have to go back to some of the challenges we’ve seen in healthcare.

We’ve progressed from a healthcare delivery model based mostly on acute care -- that is, I get sick, I go to the doctor -- to more of a managed-care model, where a doctor at times is watching and trying to coach you. Now, we’ve gotten to where the individual is in charge of their own healthcare.

A lot of fragmentation

With that, the ecosystem around healthcare has not had the opportunity to focus the overall interactions on the individual. So we see an awful lot of fragmentation occurring. There are many great standards across the towers that exist within the ecosystem, but if you take the individual and place that individual at the center of this universe, the whole information model changes.

Then, of course, there are other things, such as technology advances, personal biometric devices, and things like that that come into play and allow us to be much more effective with information that can be captured for healthcare. As a result, it’s the change with the focus on the individual that is allowing us the opportunity to redefine how information should flow across the healthcare ecosystem.

Gardner: So it’s interesting, Larry, that the individual is at the center or hub of this ongoing, moving ecosystem with many spokes, if you will. Is that a fair characterization, or is there no hub, and is that perhaps one of the challenges here?

Schmidt: What you said first is a good way to categorize it. The scenario of the individual being more in charge of their healthcare -- care of their health would be a better way to think of it -- is a way to see improvements both in information flow and in the overall cost of healthcare going forward.

Schmidt
As I offered earlier, the ecosystem has pretty much been focused around the doctor's visit, that is, the doctor's work with an individual, as opposed to the individual's work with the doctor. So we see tremendous opportunity to make advancements in the communication models that can occur across healthcare.

Gardner: Larry, is this specific to the United States or North America, is this global in nature, or is it very much a mixed bag, market to market as to how the challenges have mounted?

Schmidt: I think in any country across the world, the focus on the individual at the center of the ecosystem crosses national boundaries. And, of course, The Open Group is a worldwide standards body. As a result, it's a great match for us to focus the healthcare ecosystem on the individual and use the capabilities of The Open Group to make advances in the communication models around healthcare across all countries.

Gardner: Eric, thinking about this from a technological point of view, as an enterprise architect, we’re now dealing with this hub and spoke with the patient at the middle. A lot of this does have to do with information, data, and workflow, but we’ve dealt with these things before in many instances in the enterprise and in IT.

Is there anything particular about the technology that is difficult for healthcare, or is this really more a function of the healthcare verticals and the technology is really ready to step up to the plate?

Information transparency

Stephens: Well, Dana, the technology is there and it is ready to step up to the plate. I'll start with transparency of the information. Let's pick a favorite poster child, Amazon, in terms of the detail that's available on my account. I can look at past orders, I can look up and see the cost of services, and I can track activity that's taking place, both from a purchase and a return standpoint. That level of visibility you're alluding to exists. The technology is there, and it's a matter of applying it.

Stephens
As to why it's not being applied in a rapid fashion in the healthcare industry, we could surmise a number of reasons. One of them is potentially the cacophony of standards that exist and the lack of a “Rosetta Stone” that links those standards together to maximize interoperability.

The other challenge that exists is simply the focus in healthcare around the healthcare technology that’s being used, the surgical instruments, the diagnostic tools, and such. There is focus and great innovation there, but when it comes to the plumbing of IT, oftentimes that will suffer.

Gardner: So we have some hurdles on a number of fronts, but not necessarily the technology itself. This is a perfect case study for this concept of the boundaryless information flow, which is really the main theme of The Open Group Conference coming up on February 3. [Register for the event here.]

Back to you, Larry, on this boundaryless issue. There are standards in place in other industries that help foster a supply-chain ecosystem or a community of partners that work together.

Is that what The Open Group is seeking? Are they going to take what they’ve done in other industries for standardization and apply it to healthcare, or do you perhaps need to start from scratch? Is this such a unique challenge that you can't simply retrofit other standardization activities? How do you approach something like healthcare from a standards perspective?
I think it's a great term to reflect the vast number of stakeholders that would exist across the healthcare ecosystem.

Schmidt: The first thing we have to do is gain an appreciation for the stakeholders that interact. We’re using the term “ecosystem” here, and I think it's a great term to reflect the vast number of stakeholders that exist across healthcare. From the patient, to the doctor, to the payer organizations that process claims, to the life sciences organizations behind pharmaceuticals, there are so many places where stakeholders need to interact seamlessly.

So it’s first about using The Open Group’s assets to understand what the ecosystem can be, and then about using The Open Group’s capabilities around things like security, TOGAF as an architecture methodology, enablement, and so on. Those are assets we can leverage to make advances within the healthcare industry.

It’s an amazing challenge, but you have to take it one step at a time, and the first step is going to be that definition of the ecosystem.

Gardner: I suppose there’s no better place to go for teasing out what the issues are and what the right prioritization should be than to go to the actual participants. The Open Group did just that last summer in Philadelphia at their earlier North American conference. They had some 60 individuals representing primary stakeholders in healthcare in the same room and they conducted some surveys.

Larry, maybe you can provide us an overview of what they found and how that’s been a guide to how to proceed?

Participant survey

Schmidt: What we wanted to do was present the concept of boundaryless information flow across the healthcare ecosystem. So we surveyed the participants at the conference itself. Among other things, we asked about the quality of healthcare data, as well as the efficiency and effectiveness of its flow. Specifically, the polling questions were designed to gauge the state of healthcare data quality and effective information flow.

We learned that 86 percent of those participants felt very uncomfortable with the quality of healthcare information flows, and 91 percent felt very uncomfortable with the efficiency of healthcare information flows.

In the discussion in Philadelphia, we talked about why information isn’t flowing much more easily and freely within this ecosystem. We discovered that a lot of the standards that currently exist within the ecosystem are very much tower-oriented. That is, they only handle a portion of the ecosystem, and the interoperability across those standards is an area that needs to be focused on.

But we do think that, because the individual should be placed into the center of the ecosystem, there's new ground that will come into play. Our Philadelphia participants actually confirmed that, as we were working through our workshop. That was one of the big, big findings that we had in the Philadelphia conference.
We learned that 86 percent of those participants felt very uncomfortable with the quality of healthcare information flows.

Gardner: Just so our audience understands, the resulting work that’s been going on for months now will culminate with the Healthcare Industry Forum being officially announced and open for business, beginning with the San Francisco Conference. [Register for the event here.]

Tell us a little about how the mission statement for the Healthcare Industry Forum was influenced by your survey. Is there other information, perhaps a white paper or other collateral out there, that people can look to, to either learn more about this or maybe even take part in it?

Schmidt: We first presented a vision statement around boundaryless information flow. I’ll go ahead and offer that to the team here: Boundaryless information flow of healthcare data is enabled throughout the complete healthcare ecosystem through standardization of both vocabulary and messaging that is understood by all participants within the system. This results in higher-quality outcomes, streamlined business processes, reduction of fraud, and innovation enablement.

When we presented that at the conference, there was broad consensus among the participants around that statement, and buy-in to the idea that we want that as our vision for a Healthcare Forum.

Since then, of course, we’ve published a white paper with the findings of the Philadelphia Conference. We’re working toward the production of a treatise, which is really a study of the problem domain where we believe we can be successful and make a major impact around this individual communication flow, enabling individuals to take charge of more of their healthcare.

Our mission will be to provide the means to enable boundaryless information flow across the ecosystem. What we’re trying to do is make sure that we work in concert with other standards bodies to recognize the great work that’s happening around this tower concept that we believe is a boundary within the ecosystem.

Additional standards

Hopefully, we’ll get to a point where we’re able to collaborate with those standards bodies, as well as work within our own means to come up with additional standards that allow us to make this communication flow seamless, or boundaryless.

Gardner: Eric Stephens, back to you with the enterprise architect questions. Of course, it’s important to solve the Tower of Babel issues around taxonomy, definitions, and vocabulary, but I suppose there is also a methodology issue.

Frameworks have worked quite well in enterprise architecture and in other verticals and in the IT organizations and enterprises. Is there something from your vantage point as an enterprise architect that needs to be included in this vision, perhaps looking to the next steps after you’ve gotten some of the taxonomy and definitions worked out?

Stephens: Dana, in terms of working through the taxonomies and such, as an enterprise architect, I view it as part of a larger activity of going through a process like TOGAF and its Architecture Development Method.
In the healthcare landscape, and in other industries, there are a lot of players coming to the table who need to interact.

By using a tailored version of that, we’ll get to that taxonomy definition and the alignment of standards and such. But there’s also the need to address alignment of business processes and other application components that come into play. That’s going to drive us toward improving the fluidity of the information that’s moving both within the enterprise and outside of it.

In the healthcare landscape, and in other industries, there are a lot of players coming to the table who need to interact, especially if you are talking about a complex episode of care. You may have two, three, or four different organizations in play. You have labs, the doctors, specialized centers, and such, and all of that requires information flow.

Coming back to the methodology, I think it’s bringing to bear an architecture methodology like the one provided in TOGAF. It’s going to aid individuals in getting a broad picture, and also a detailed picture, of what needs to be done in order to achieve this goal of boundaryless information flow.

Gardner: I suppose, gentlemen, that we should also recognize that we are going about this in the larger context of change in the IT and business landscapes. We’re seeing many more mobile devices. We’re probably going to see patients accessing more of the information we’ve been discussing through some sort of mobile device, which is good news, because more and more patients and their providers can access information regardless of where they are. So mobility, I think, is a fairly important accelerant to some of this.

And, of course, there’s big data, the ability to take reams and reams of information, deal with it rapidly, analyze it in near real-time, and then scale accordingly for cost reasons. That’s another big factor.

Larger context

So let’s just quickly step aside from the forum activities and look at how this larger context of change is perhaps fortuitously timed for what we’d like to do in terms of transformation around healthcare. Let me first direct that to you, Larry. How important are things like mobile and big data in making significant progress on the issues facing healthcare?

Schmidt: Well, that’s interesting, because when we first started with mobility devices, I came to believe that the mobile device becomes what I will call a personal integration server. It will help the individual who wants to take charge of their healthcare, or care of their health. It will give them the opportunity to capture information from other devices, such as biometric devices, blood pressure monitors, and things like that, and have that captured on a mobile device and placed in a repository someplace to allow a physician, others, or even that individual to look at trending over time.

To me, the mobile device, from the standpoint of being able to gather data, is a great technology enabler that has come of age. It gives us the opportunity to streamline the information gathering that is necessary to support the right diagnoses when working with your health coach or your provider.

Of course, that has the possibility, at the individual level, of producing a lot of data, and it could be a massive amount of data, depending on how the data is actually gathered. So big data and analytics, even at the individual level, being able to decipher and understand trending and things that are happening to the individual over time outside of the doctor’s office, is something I think will really enable improvements in healthcare.
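To make that personal integration server idea a bit more concrete, here is a minimal sketch of how a mobile app might buffer readings from paired biometric devices, compute a simple trend, and package a batch for upload to a health repository. It is purely illustrative; the device names, metrics, and functions are hypothetical and are not drawn from any product discussed here.

```python
# Hypothetical sketch of a "personal integration server" on a mobile device:
# buffer biometric readings, summarize a trend, and serialize a batch for upload.
import json
import statistics
from datetime import datetime, timezone

readings = []  # in-memory buffer held on the device

def capture(device_id, metric, value):
    """Record one reading from a paired biometric device (e.g., a blood pressure cuff)."""
    readings.append({
        "device_id": device_id,
        "metric": metric,
        "value": value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def trend(metric):
    """Summarize buffered readings so the individual or a clinician can see trending."""
    values = [r["value"] for r in readings if r["metric"] == metric]
    return {
        "metric": metric,
        "count": len(values),
        "mean": statistics.mean(values) if values else None,
        "latest": values[-1] if values else None,
    }

def export_batch():
    """Serialize the buffered readings plus a summary for upload to a shared repository."""
    return json.dumps({"readings": readings, "summary": trend("systolic_bp")})

# Example usage
capture("bp-cuff-01", "systolic_bp", 128)
capture("bp-cuff-01", "systolic_bp", 122)
print(export_batch())
```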
One of the key success factors that is going to have to be addressed is interoperability.

All of that, of course, is fueled by the “Internet of things” and technology advances such as IPv6, which allow us to use devices like this across a network and keep them individually identified. Those two IT trends will be a great help in advancing healthcare and, of course, in enabling boundaryless information flow.

Gardner: Eric Stephens, do you want to weigh in as well on where these new advances in IT can play a huge role if those standards and the framework approach methodologies are in play?

Stephens: Larry really hit the points well. I was thinking about the new terminology, the Internet of things or machine to machine, where mobile devices could end up being the size of a fingernail at some point.

Do we get to the point where there is real-time monitoring of critical patients, routed back through other mobile devices and into a doctor’s office? Will we have the ability to do a virtual office visit? And how much equipment will you need in a home, for example, to do routine checkups on children and such?

One of the key success factors that is going to have to be addressed is interoperability. Back when we were all starting to cut our teeth on the Internet, one of the things that fascinated me was that you had a handful of standards and all these vendors conforming to them, such that you didn’t have to think about plugging a laptop into a network or accessing a website. All of that was driven by standardization.

Drive standardization

One of the things that we can do in the Forum is start to drive some of that standardization, so that we have these devices working together easily and providing medical professionals the information they need to make more timely decisions. It’s giving the right information, to the right decision maker, at the right time. That, in turn, drives better health outcomes, and it’s going to, we hope, drive down the overall cost profile of healthcare, specifically here in the United States.

Gardner: I should think that makes for a high incentive to work on these issues of standardization, taxonomy, definitions, and methodologies, so that you can take advantage of these great technologies and the scale and efficiency they afford.

Getting back to the conference, I understand that the Healthcare Industry Forum is going to be announced. There’s going to be a charter, a steering committee program, definitions, and a treatise in the works. So there will be quite a bit kicking off. I’d like to hear from you two, Larry and Eric, what you will specifically be presenting at the conference in San Francisco in just a matter of a week or two. Larry, what’s on the agenda for your presentations at the conference? [Register for the event here.]

Schmidt: Actually, Eric and I are doing a joint presentation, and we’re going to talk about some of the challenges we can see ahead of us as a result of trying to enable our vision around boundaryless information flow, specifically in healthcare.
As an enterprise architect, I look at things in terms of the business, application, information, and technology architectures.

The culture of producing standards in an industry like this is going to be a major challenge for us. There is a lot of individualization across this industry. So we have to get people to come together, recognize that there are going to be different views and different points of view, and come to more of a consensus on how information should flow, specifically in healthcare. Although I think every forum goes through this kind of cultural change.

We’re going to talk about that at the beginning of the conference as part of how we’re planning to address those challenges within the Industry Forum itself. Then, other meetings will allow us to continue with some of the work we’ve been doing around a treatise and other actions that will help us get started down the path of understanding the ecosystem and so on.

Those are the things that we’ll be addressing at this specific conference.

Gardner: Eric, anything to add to that? I didn’t realize you were both doing this as a joint presentation.

Stephens: Yes, and thanks to Larry for allowing me to participate in it. One of the areas I will be focusing on, and you alluded to this earlier, Dana, is around the information architecture.

As an enterprise architect, I look at things in terms of the business, application, information, and technology architectures. When we talk about boundaryless information flow, my remarks and contributions focus on the information architecture, and specifically on an ecosystem-level information architecture at a generic level, but also on the need for and importance of integration. I will perhaps touch a little bit on standards to tie that into Larry’s thoughts.

Soliciting opinions

Schmidt: Dana, I just wanted to add the other work that we’ll be doing there at the conference. We’ve invited some of the healthcare organizations in that area of the country, San Francisco and so on, to come in on Tuesday. We plan to present the findings of the paper and the work that we did in the Philadelphia Conference, and get opinions in refining both the observations, as well as some of the direction that we plan to take with the Healthcare Forum.

Obviously we’ve shared here some of the thoughts of where we believe we’re moving with the Healthcare Forum, but as the Forum continues to form, some of the direction of it will morph based on the participants, and based on some of the things that we see happening with the industry.

So, it’s a really exciting time and I’m actually very much looking forward to presenting the findings of the Philadelphia Conference, getting, as I said, the next set of feedback, and starting the discussion as to how we can make change going toward that vision of boundaryless information flow.
We’re actually able to see a better profile of what the individual is doing throughout their life and throughout their days.

Gardner: I should also point out that it’s not too late for our listeners and readers to participate themselves in this conference. If you’re in the San Francisco area, you’re able to get there and partake, but there are also going to be online activities. There will be some of the presentations delivered online and there will be Twitter feeds.

So if you can't make it to San Francisco on February 3, be aware that The Open Group Conference will be available in several different ways online. Then, there will be materials available to access on-demand after the fact. Of course, if you’re interested in taking a more active role in the Forum itself, there will be information on The Open Group website about how to get involved.

Before we sign off, I want to get a sense of what the stakes are here. It seems to me that if you do this well and if you do this correctly, you get alignment across these different participants -- the patient being at the hub of the wheel of the ecosystem. There’s a tremendous opportunity here for improvement, not only in patient care and outcomes, but costs, efficiency, and process innovation.

So first to you Larry. If we do this right, what can we expect?

Schmidt: There are several things to expect. Number one, I believe that the overall health of the population will improve, because individuals will be more knowledgeable about their individualized healthcare, and doctors will have the necessary information based on observations in place, as opposed to information gathered only through discussion and interviews with the patient.

We’re actually able to see a better profile of what the individual is doing throughout their life and throughout their days. That can give doctors the opportunity to make better diagnoses. Better diagnoses, with better information, as Eric said earlier, the right information, at the right time, to the right person, give the whole ecosystem the opportunity to respond more efficiently and effectively, both at the individual level and across the population. That plays well with any healthcare system around the world. So these are very exciting times.

Metrics of success

Gardner: Eric, what’s your perspective on some of the paybacks or metrics of success, when some of the fruits of the standardization begin to impact the overall healthcare system?

Stephens: At the risk of oversimplifying and repeating some of the things that Larry said, it comes down to cost and outcomes as the two main things. That’s what’s on my mind right now. I look at these very scary graphs about the cost of healthcare in the United States, and it’s hovering at 17-18 percent of GDP. If I recall correctly, that’s at least five full percentage points higher than in other economically developed countries.

The trend on individual premiums and such continues to tick upward. Anything we can do to drive that cost down is going to be very beneficial, and this goes right back to patient-centricity. It goes right back to their pocketbook.

And the outcomes are important as well. There are a myriad of diseases and conditions that we’re dealing with in this country. More information and more education are going to help drive a healthier population, which in turn drives down the cost. And when you drive down cost, you leave room for innovation, for new advances in medical technology to treat diseases going forward. So again, it’s back to cost and outcomes.
Anything we can do to drive that cost down is going to be very beneficial, and this goes right back to patient centricity.

Gardner: Very good. I’m afraid we will have to leave it there. We’ve been talking with a panel of experts on how the healthcare industry can benefit from improved and more methodical information flow. And we have seen how the healthcare industry itself is seeking large-scale transformation, and how improved cross-organizational interactions and collaborations are intrinsic to moving forward, capitalizing on the opportunity, and making that transformation possible.

And lastly, we have learned that The Open Group’s new Healthcare Industry Forum is doing a lot now and is getting up to full speed to improve the situation.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference on February 3 in San Francisco. It’s not too late to register at The Open Group website and you can also follow the proceedings during and after the conference online and via Twitter.

So a big thank you to our panel, Larry Schmidt, the Chief Technologist at HP for the Americas Health and Life Sciences Industries, as well as the Chairman of The Open Group’s new Healthcare Industry Forum. Thanks so much, Larry.

Schmidt: You bet. Glad to be here.

Gardner: And thank you, too, to Eric Stephens, an Oracle Enterprise Architect. We appreciate your time, Eric.

Stephens: Thanks for having me, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this look at the healthcare ecosystem process. Thanks for listening, and come back next time for more BriefingsDirect podcast discussions.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: The Open Group.
Register for the event here.

Transcript of a BriefingsDirect podcast on how The Open Group is addressing the information needs and challenges in the healthcare ecosystem. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.

Thursday, January 23, 2014

Siemens Brazil Leverages HP Anywhere to Deliver Applications Better to More Mobile Devices

Transcript of a sponsored BriefingsDirect podcast on how a major energy engineering company is delivering mobile capability to its managers in Brazil.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we’re focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences and business results. This time, we’re coming to you directly from the recent HP Discover 2013 Conference in Barcelona.

Our task: To learn directly from IT and business leaders alike how big data, mobile, and cloud -- powered by converged infrastructure -- are all supporting their business goals in new and interesting ways.

Our next innovation case study interview highlights how Siemens Brazil is using HP Anywhere to improve how they deliver applications to users with mobile devices. Here to tell us how they’re making progress is Alexandre Padeti, IT Consultant and Applications Integration Technician with Siemens Brazil in São Paulo. Welcome.

Alexandre Padeti: Hi, Dana.

Gardner: Tell us a little bit about Siemens Brazil. Then let's learn about your transition to mobile applications.

Padeti: Siemens Brazil is an engineering company responsible for 50 percent of the energy transmission in Brazil. As for mobility within Siemens Brazil, we’re just now starting to implement HP Anywhere in our field applications.

Padeti
Gardner: Why didn’t you just make these applications, customize them, host them, and deliver them? What was missing from your being able to do this yourselves?

Padeti: In the beginning, we were looking for a tool that gave us the freedom to develop for any device. That's the main reason that we chose HP Anywhere. We have the freedom now to choose -- or give the freedom to the users to choose -- the device.

Gardner: What types of applications have you targeted first for moving out to the mobile tier?

Padeti: The main application that we’re working with at the moment is Workflow Approval, which integrates with our back-end SAP ERP system. With HP Anywhere, we’re trying to give the managers mobility, the option to make their approvals on a daily basis in a different way.

Real-time basis

Gardner: So it's more important to have workflow approved and managed on a real-time basis, wherever these individuals are and whatever device they happen to be using?

Padeti: Yes. These are the main points of the solution. We’re trying to give this especially to our managers, who are used to being in meetings or moving from one place to another. They gain the ability to make this kind of approval on the go.

Gardner: Tell us a little bit about the process of adoption. You've had a proof of concept (POC) phase?

Padeti: Initially we had a POC with HP Anywhere together with HP Brazil and a local partner. From the beginning, it was well-suited. So we decided to go with HP Anywhere in production, and now we’re running a project that will cover nearly 200 users by the end of January.

Gardner: Do you think this will lead to more applications and more mobile users? Does this seem to be a larger undertaking with movement toward even more mobility?
We’re quite sure that 90 percent of the devices will be running on Android and a small percentage on iOS.

Padeti: Yes, that's for sure. This will become bigger in Siemens Brazil, because it's a change of the mindset of the users. They will begin to change the way they’re thinking about requesting solutions from the IT department. In the future, I believe that we’ll have a lot of requirements to develop more such mobile applications.

Gardner: Alexandre, do you have a sense yet what will be the majority of these mobile devices? Which are most popular among these 200 initial users there?

Padeti: The standard for Siemens Brazil is based on Android. So we’re quite sure that 90 percent of the devices will be running on Android and a small percentage on iOS.

Gardner: As you've gone through this process, are there any lessons learned that you could share for other organizations? What lessons have you learned, or what advice could you offer them?

Small processes

Padeti: The first one would be to think about smaller processes first. At Siemens Brazil, we’re starting with a not-so-big, not-so-complex process. This is a good way to engage the users, let them get comfortable, and prove out the solution.

The next one would be to talk a lot with the users, because in our case we have requirements that the user could not think of before. We're learning constantly about what is possible with mobility.
When you give them the freedom of mobility, new ideas will come up.

I really advise you to talk with the users and learn what they want, because most of the time they don’t come up with an idea until they use mobile, since initially they’re only thinking of desktop or notebook PCs. So when you give them freedom with mobility, new ideas come up.

Gardner: Well, very good. I’m afraid we will have to leave it there. We've been talking about how Siemens Brazil has been moving to more mobile applications delivery for its workers, in this case workflow applications to managers. Our guest has been Alexandre Padeti, IT Consultant and Applications Integration Technician at Siemens Brazil in São Paulo. Thank you so much.

Padeti: Thank you, Dana.

Gardner: And I’d like to thank our audience as well for joining us for this special new style of IT discussion coming to you from the recent HP Discover 2013 Conference in Barcelona. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on how a major energy engineering company is delivering mobile capability to its managers in Brazil. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.

Wednesday, January 08, 2014

Nimble Storage Leverages Big Data and Cloud to Produce Data Performance Optimization on the Fly

Transcript of a BriefingsDirect podcast on how a hybrid storage provider can analyze operational data to bring about increased efficiency.

Listen to the podcast. Find it on iTunes. Download the transcript.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we’re focusing on how IT leaders are improving their business performance for better access, use and analysis of their data and information.

Our next innovation case study focuses on how optimized hybrid storage provider Nimble Storage has leveraged big data and cloud to produce significant storage performance and efficiency gains. Nimble is, of course, also notable for its recent successful IPO.

Learn here how Nimble Storage has leveraged the HP Vertica analytics platform to analyze operational data on mixed-storage environments to optimize workloads. High-performing, cost-effective big-data processing via cloud helps to make the best use of dynamic storage resources, it turns out. A fascinating story.

To learn more, join me in welcoming our guest, Larry Lancaster, Chief Data Scientist at Nimble Storage Inc. in San Jose, California. Welcome, Larry.

Larry Lancaster: Hi, Dana, it's great to talk to you today.

Gardner: I'm glad you could join us. As I said, it's a fascinating use-case. Tell us about the general scope of how you use data in the cloud to create this hybrid storage optimization service.

Lancaster: At a high level, Nimble Storage recognized early, near the inception of the product, that if we were able to collect enough operational data about how our products are performing in the field, get it back home and analyze it, we'd be able to dramatically reduce support costs. Also, we can create a feedback loop that allows engineering to improve the product very quickly, according to the demands that are being placed on the product in the field.

Lancaster
Looking at it from that perspective, to get it right, you need to do it from the inception of the product. If you take a look at how much data we get back for every array we sell in the field, we could be receiving anywhere from 10,000 to 100,000 data points per minute from each array. Then, we bring those back home, we put them into a database, and we run a lot of intensive analytics on those data.
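A simplified sketch of that kind of telemetry pipeline, purely illustrative and using SQLite from the standard library rather than the Postgres and Vertica back ends discussed later, might batch per-minute sensor points and load them in micro-batches:

```python
# Illustrative micro-batch ingest of array telemetry -- not Nimble's actual pipeline.
# SQLite stands in for the real back-end database purely to keep this runnable.
import sqlite3
import random
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor (array_id TEXT, metric TEXT, ts REAL, value REAL)")

def collect_minute(array_id, points=10_000):
    """Simulate one minute's worth of data points from a single array."""
    now = time.time()
    return [(array_id, "read_latency_us", now, random.uniform(100, 5000))
            for _ in range(points)]

def ingest(batch):
    """Load one micro-batch in a single transaction."""
    with conn:
        conn.executemany("INSERT INTO sensor VALUES (?, ?, ?, ?)", batch)

ingest(collect_minute("array-0042"))
count, avg_latency = conn.execute(
    "SELECT COUNT(*), AVG(value) FROM sensor WHERE metric = 'read_latency_us'"
).fetchone()
print(count, round(avg_latency, 1))  # 10000 points, mean latency in microseconds
```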

Once you're doing that, you realize that as soon as you do something, you have this data you're starting to leverage. You're making support recommendations and so on, but then you realize you could do a lot more with it. We can do dynamic cache sizing. We can figure out how much cache a customer needs based on an analysis of their real workloads.

We found that big data is really paying off for us. We want to continue to increase how much it's paying off for us, but to do that we need to be able to do bigger queries faster. We have a team of data scientists and we don't want them sitting here twiddling their thumbs. That’s what brought us to Vertica at Nimble.

Using big data

Gardner: It's an interesting juxtaposition that you're using big data in order to better manage data and storage. What better use for it? And what sort of efficiencies are we talking about here, when you're able to get that data at that massive scale, do these analytics, and then go back out into the field and adjust? What does that get for you?

Lancaster: We have a very tight feedback loop. In one release we put out, we may make some changes in the way certain things happen on the back end, for example, the way NVRAM is drained. There are some very particular details around that, and we can observe very quickly how that performs under different workloads. We can make tweaks and do a lot of tuning.

Without the kind of data we have, we might have to have multiple cases being opened on performance in the field and escalations, looking at cores, and then simulating things in the lab.

It's a very labor-intensive, slow process with very little data to base the decision on. When you bring home operational data from all your products in the field, you're now talking about being able to figure out in near real-time the distribution of workloads in the field and how people access their storage. I think we have a better understanding of the way storage works in the real world than any other storage vendor, simply because we have the data.

Gardner: So it's an interesting combination of a product lifecycle approach to getting data -- but also combining a service with a product in such a way that you're adjusting in real time.

Lancaster: That’s right. We do a lot of neat things. We do capacity forecasting. We do a lot of predictive analytics to try to figure out when the storage administrator is going to need to purchase something, rather than having them just stumble into the fact that they need to provision more equipment because they've run out of space.
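As a rough illustration of that kind of forecast, and not a description of Nimble's actual models, a naive version can be as simple as fitting a straight line to daily used-capacity samples and extrapolating to the array's size:

```python
# Illustrative only: naive linear capacity forecast from daily usage samples.
# Nimble's real predictive analytics are not described here.
def forecast_days_until_full(used_gb_per_day, capacity_gb):
    """Least-squares linear fit of used capacity; returns estimated days left."""
    n = len(used_gb_per_day)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(used_gb_per_day) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_gb_per_day)) \
        / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return float("inf")  # usage is flat or shrinking; no exhaustion predicted
    return (capacity_gb - intercept) / slope - (n - 1)

# Example: roughly 20 GB/day of growth against a 10 TB array
samples = [4000 + 20 * day for day in range(30)]
print(round(forecast_days_until_full(samples, 10_000)))  # ~271 days remaining
```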
That’s the kind of efficiency we gain that you can see, and the InfoSight service delivers that to our customers.

A lot of things that should have been done in storage from the very beginning, and that sound straightforward, were simply never done. We're the first company to take a comprehensive approach to it. We open and close 80 percent of our cases automatically, and 90 percent of them are opened automatically.

We have a suite of tools that run on this operational data, so we don't have to call people up and say, "Please gather this data for us. Please send us these log files. Please send us these statistics." Now, we take a case that could have taken two or three days and turn it into something that can be done in an hour.

That’s the kind of efficiency we gain that you can see, and the InfoSight service delivers that to our customers.

Gardner: Larry, just to be clear, you're supporting both flash and traditional disk storage, but you're able to exploit the hybrid relationship between them because of this data and analysis. Tell us a little bit about how the hybrid storage works.

Challenge for hard drives

Lancaster: At a high level, you have hard drives, which are inexpensive but slow for random I/O. For sequential I/O they're all right, but for random I/O they're slow, because it takes time to move the platter and the head. You're looking at 5 to 10 milliseconds of seek time for a random read.

That's been the challenge for hard drives. Flash drives have come out and they can dramatically improve on that. Now, you're talking about microsecond-order latencies, rather than milliseconds.

But the challenge there is that they're expensive. You could go buy all flash or you could go buy all hard drives and you can live with those downsides of each. Or, you can take the best of both worlds.

Then, there's a challenge. How do I keep the data that I need to access randomly in flash, keep the rest of the data, whose random-read performance I don't care so much about, on the hard drives only, and in that way optimize my use of flash? That's the way you can save money, but it's difficult to do.

It comes down to having some understanding of the workloads that the customer is running and being able to anticipate the best algorithms and parameters for those algorithms to make sure that the right data is in flash.
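The following toy sketch shows the shape of such a caching decision. It is not Nimble's CASL technology, just a frequency-based admission policy: a block earns a spot in flash only after it has been read randomly often enough, and eviction is omitted for brevity.

```python
# Toy flash-admission policy for a hybrid array -- illustrative only,
# not Nimble's actual caching algorithms.
from collections import Counter

class HybridCacheSketch:
    def __init__(self, flash_blocks, admit_threshold=3):
        self.flash_blocks = flash_blocks      # how many blocks fit in flash
        self.admit_threshold = admit_threshold
        self.random_read_counts = Counter()   # per-block random-read frequency
        self.flash = set()                    # block IDs currently cached in flash

    def read(self, block_id, is_random):
        """Serve a read and decide whether the block has earned a spot in flash."""
        if block_id in self.flash:
            return "flash hit (microseconds)"
        if is_random:
            self.random_read_counts[block_id] += 1
            if (self.random_read_counts[block_id] >= self.admit_threshold
                    and len(self.flash) < self.flash_blocks):
                self.flash.add(block_id)      # hot random-read data earns flash
        return "disk read (milliseconds)"

cache = HybridCacheSketch(flash_blocks=2)
for _ in range(4):
    print(cache.read(block_id=7, is_random=True))  # third random read promotes the block
```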
It would be hard to be the best hybrid storage solution without the kind of analytics that we're doing.

We've built up an enormous dataset covering thousands of system-years of real-world usage to tell us exactly which approaches to caching are going to deliver the most benefit. It would be hard to be the best hybrid storage solution without the kind of analytics that we're doing.

Gardner: Then, to extrapolate a little bit higher, or maybe wider, for how this benefits an organization, the analysis that you're gathering also pertains to the data lifecycle, things like disaster recovery (DR), business continuity, backups, scheduling, and so forth. Tell us how the data gathering analytics has been applied to that larger data lifecycle equation.

Lancaster: You're absolutely right. One of the things that we do is make sure that we audit all of the storage that our customers have deployed to understand how much of it is protected with local snapshots, how much of it is replicated for disaster recovery,  and how much incremental space is required to increase retention time and so on.

We have very efficient snapshots, but at the end of the day, if you're making changes, snapshots still take some amount of space. So it's about learning exactly what that overhead is and how we can help you achieve your disaster recovery goals.

We have a good understanding of that in the field. We go to customers with proactive service recommendations about what they could and should do. But we also take into account the fact that they may be doing DR when we forecast how much capacity they are going to need.

Larger lifecycle

You're right. It is part of a larger lifecycle that we address, but at the end of the day, for my team it's still all about analytics. It's about looking to the data as the source of truth and as the source of recommendation.

We can tell you roughly how much space you're going to need to do disaster recovery on a given type of application, because we can look across our field data and see the distribution of the extra space it would take and what kind of bandwidth you're going to need. We have all that information at our fingertips.

When you start to work this way, you realize that you can do things you couldn't do before. And the things you could do before, you can do orders of magnitude better. So we're a great case of actually applying data science to the product lifecycle, but also to front-line revenue and cost enhancement.

Gardner: I think this is a great example and I think you're a harbinger of what we're going to see more and more, which is bringing this high level of intelligence to bear on many other different services, for many different types of products. IT and storage is great and makes a lot of sense as an early adopter. But I can see this is pertaining to many other vertical industries. It illustrates where a lot of big-data value is going to go.

Now, let's dig into how you actually can get that analysis in the speed, at the scale, and at the cost that you require. Tell us about your journey in terms of different analytics platforms and data architectures that you've been using and where you're headed.
I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.

Lancaster: To give you a brief history of my awareness of HP Vertica and my involvement around the product, I don’t remember the exact year, but it may have been eight years ago roughly. At some point, there was an announcement that Mike Stonebraker was involved in a group that was going to productize the C-Store Database, which was sort of an academic experiment at UC Berkeley, to understand the benefits and capabilities of real column store.

[Learn more about column store architectures and how they benefit data speed and management for Infinity Insurance.]

I was immediately interested and contacted them. I was working at another storage company at the time. I had a 20 terabyte (TB) data warehouse, which at the time was one of the largest Oracle on Linux data warehouses in the world.

They didn't want to touch that opportunity just yet, because they were just starting out in alpha mode. I hooked up with them again a few years later, when I was CTO at a company called Glassbeam, where we developed what's substantially an extract, transform, and load (ETL) platform.

By then, they were well along the road. They had a great product and it was solid. So we tried it out, and I have to tell you, I fell in love with Vertica because of the performance benefits that it provided.

When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you’re going to end up with a couple choices on a row store. Either you're going to have very narrow tables and a lot of them or else you're going to be wasting a lot of I/O overhead, retrieving entire rows where you just need a couple fields.

Greater efficiency

That was what piqued my interest at first. But as I began to use it more and more at Glassbeam, I realized that the performance benefits you could gain by using HP Vertica properly were another order of magnitude beyond what you would expect just with the column-store efficiency.

That's because of certain features that Vertica allows, such as something called pre-join projections. We can drill into that sort of stuff more if you like, but, at a high level, it lets you maintain the normalized logical integrity of your schema while having, under the hood, an optimized, denormalized physical layout on disk for query performance.

Now you might ask how you can be efficient if you have a denormalized structure on disk. It's because Vertica allows you to do some very efficient types of encoding on your data. So all of the low-cardinality columns that would have been wasting space in a row store end up taking almost no space at all.
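A back-of-the-envelope sketch shows why that is. When a projection keeps a low-cardinality column sorted, run-length encoding (RLE) collapses it to a handful of entries. This is only an illustration of the principle, not Vertica's on-disk format:

```python
# Illustration of why sorted, low-cardinality columns compress so well under
# run-length encoding -- a sketch of the principle, not Vertica's storage format.
def run_length_encode(column):
    """Collapse consecutive repeats into (value, run_length) pairs."""
    encoded = []
    for value in column:
        if encoded and encoded[-1][0] == value:
            encoded[-1] = (value, encoded[-1][1] + 1)
        else:
            encoded.append((value, 1))
    return encoded

# A "region" column for a million rows, sorted the way a projection might order it.
region_column = ["east"] * 400_000 + ["west"] * 350_000 + ["emea"] * 250_000
print(run_length_encode(region_column))
# [('east', 400000), ('west', 350000), ('emea', 250000)] -- three entries instead
# of a million stored values, which is why low-cardinality columns take almost no space.
```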

What you find, at least it's been my impression, is that Vertica is the data warehouse that you would have wanted to have built 10 or 20 years ago, but nobody had done it yet.
Vertica is the data warehouse that you would have wanted to have built 10 or 20 years ago, but nobody had done it yet.

Nowadays, when I'm evaluating other big-data platforms, I always have to look at it from the perspective that it's great that we can get some parallelism here, and there are certain operations we can do that might be difficult on other platforms, but I always have to compare it to Vertica. Frankly, I always find that Vertica comes out on top in terms of features, performance, and usability.

Gardner: When you arrived there at Nimble Storage, what were they using, and where are you now on your journey into a transition to Vertica?

Lancaster: I built the environment here from the ground up. When I got here, there were roughly 30 people. It's a very small company. We started with Postgres. We started with something free. We didn’t want to have a large budget dedicated to the backing infrastructure just yet. We weren’t ready to monetize it yet.

So, we started on Postgres and we've scaled up now to the point where we have about 100 TBs on Postgres. We get decent performance out of the database for the things that we absolutely need to do, which are micro-batch updates and transactional activity. We get that performance because the database lives on Nimble Storage.

I don't know what the largest unsharded Postgres instance is in the world, but I feel like I have one of them. It's a challenge to manage and leverage. Now, we've gotten to the point where we're really enjoying doing larger queries. We really want to understand the entire installed base of how we want to do analyses that extend across the entire base.

Rich information

We want to understand the lifecycle of a volume. We want to understand how it grows, how it lives, what its performance characteristics are, and then how it gradually falls into senescence when people stop using it. It turns out there's a lot of really rich information that we now have access to for understanding storage lifecycles in a way I don't think was possible before.

But to do that, we need to take our infrastructure to the next level. So we've been doing that. We've loaded a large amount of our sensor data (the numerical data I talked about) into Vertica, started to compare the queries, and then started to use Vertica more and more for all the analysis we're doing.

Internally, we're using Vertica, just because of the performance benefits. I can give you an example. We had a particular query, a particularly large query. It was to look at certain aspects of latency over a month across the entire installed base to understand a little bit about the distribution, depending on different factors, and so on.
I'm really excited. We're getting exactly what we wanted and better.

We ran that query in Postgres, and depending on how busy the server was, it took  anywhere from 12 to 24 hours to run. On Vertica, to run the same query on the same data takes anywhere from three to seven seconds.

I anticipated that because we were aware upfront of the benefits we'd be getting. I've seen it before. We knew how to structure our projections to get that kind of performance. We knew what kind of infrastructure we'd need under it. I'm really excited. We're getting exactly what we wanted and better.

This is only a three node cluster. Look at the performance we're getting. On the smaller queries, we're getting sub-second latencies. On the big ones, we're getting sub-10 second latencies. It's absolutely amazing. It's game changing.

People can sit at their desktops now, manipulate data, come up with new ideas and iterate without having to run a batch and go home. It's a dramatic productivity increase. Data scientists tend to be fairly impatient. They're highly paid people, and you don’t want them sitting at their desk waiting to get an answer out of the database. It's not the best use of their time.

Gardner: Larry, is there another aspect to the HP Vertica value when it comes to the cloud model for deployment? It seems to me that if Nimble Storage continues to grow rapidly and scale, bringing all that data back to a single central point might become problematic. Having it distributed, or in different cloud deployment models, might make sense. Is there something about the way Vertica works within a cloud services deployment that is of interest to you as well?

No worries

Lancaster: There's the ease of adding nodes without downtime, and the fact that you can create a K-safe cluster. If my cluster is 16 nodes wide and I want two nodes of redundancy, it's very similar to RAID. You can specify that, and the database will take care of it for you. You don't have to worry about the database going down and losing data as a result of a node failure or two.

I love the fact that you don’t have to pay extra for that. If I want to put more cores or  nodes on it or I want to put more redundancy into my design, I can do that without paying more for it. Wow! That’s kind of revolutionary in itself.

It's great to see a database company incented to give you great performance. They're incented to help you work better with more nodes and more cores. They don't have to worry about people not being able to pay the additional license fees to deploy more resources. In that sense, it's great.

We have our own private cloud -- that’s how I like to think of it -- at an offsite colocation facility. We do DR through Nimble Storage. At the same time, we have a K-safe cluster. We had a hardware glitch on one of the nodes last week, and the other two nodes stayed up, served data, and everything was fine.
If you do your job right as a cloud provider, people just want more and more and more.

Those kinds of features are critical, and that ability to be flexible and expand is critical for someone who is trying to build a large cloud infrastructure, because you're never going to know in advance exactly how much you're going to need.

If you do your job right as a cloud provider, people just want more and more and more. You want to get them hooked and you want to get them enjoying the experience. Vertica lets you do that.

Gardner: I'm afraid we'll have to leave it there. We've been learning about how optimized hybrid storage provider Nimble Storage has leveraged big data and cloud to produce unique storage performance analytics and efficiencies. And we've seen how the HP Vertica Analytics platform has been used to analyze Nimble's operational data across mixed storage environments in near real-time, so that they can optimize their workloads and also extend the benefits to a data lifecycle.

So, a big thank you to our guest, Larry Lancaster, Chief Data Scientist at Nimble Storage. Thank you, Larry.

Lancaster: Thanks, Dana.

Gardner: Also, thank you to our audience for joining us for this special HP Discover Performance Podcast.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a hybrid storage provider can analyze operational data to bring about increased efficiency.  Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Tuesday, January 07, 2014

Learn How HP Implemented the TippingPoint Intrusion Prevention System Across its Security Infrastructure

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we’re focusing on how IT leaders are improving the security and availability of services to deliver better experiences and payoffs for businesses and end users alike.

We have a fascinating show today. We're going to be exploring the ins and outs of improving enterprise intrusion prevention systems (IPS), and we'll see how HP and its global cyber-security partners have made the HP global network more resilient and safe. We'll hear how a vision for security has been effectively translated into an actual implementation.

To learn more about how HP itself has created role-based and granular access control benefits amid real-time yet intelligent intrusion protection, please join me in welcoming our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Welcome to the show, Jim.

Jim O’Shea: Hello, Dana. Thank you.

Gardner: Before we get into the nitty-gritty, what do you think are some of the major trends that are driving the need for better intrusion prevention systems nowadays?

O’Shea: If you look at the past, it was about detection, and you had reaction technologies. We had firewalls that blocked and looked at things at the port level. Then we evolved to trying to detect things with malicious intent by using intrusion detection systems (IDS). But that was a reactionary approach. It was a nice approach, but we were reacting. Something happened and you reacted -- but if we knew it was bad, why did we let it in in the first place?

The evolution was the IPS, the prevention. If you know it's bad, why do you even want to see it? Why do you want to try to react to it? Just block it. That’s the trend that we’ve been following.

Gardner: But we can’t just have a black-and-white situation. It’s much more gray. There are some kinds of access, I suppose, that we do want. We want access control, rather than just a firewall. So is there a new thinking, a new vision, that’s been developed over the past several years about these networks and what should or shouldn't be allowed through them?

O’Shea: You’re talking about letting the good in. Those are the evolutions and trends we're all striving for: get the good traffic in, get who you are in, maybe look at what you have, and explore the health of your device. Those are all things we're striving for now.

Gardner: I recall, Jim, that there was a Ponemon Institute report about a year or so ago that really outlined some of the issues here. Do you recall that? Were there any findings in there that illustrate this trend toward a different type of network and a different approach to protection?

Number of attacks

O’Shea: The Ponemon study was illustrating the vast number of attacks and the trend in the costs of intrusions. It was highlighting those types of trends, all of which we’re trying to head off. Those types of reports are guiding factors in taking a more proactive, automated response. [Learn more about intrusion prevention systems.]

Gardner: I suppose what’s also different nowadays is that we’re not only concerned with outside issues in terms of risk, but also insider attacks. It’s about being able to detect behaviors and events that show up in the data. The analysis can then provide a heads-up across the network, regardless of whether the actors have access or not. What are the risk issues now when we think about insider attacks, rather than just outside penetration?

O’Shea: You’re exactly right. Are you hiring the right people? That’s a big issue. Are they being influenced? Those are all huge issues. Big data can handle some of that and pull it in. Our approach on intrusion prevention wasn’t just to look at what’s coming from the outside, but also to look at data traversing the network.

When we deployed the TippingPoint solution, we didn’t vary the policies or profiles we were deploying based on whether traffic starts on the inside or on the outside. It was an equal deployment.

An insider attack could also be somebody who walks into a facility, gains physical access, and connects to your network. You have a whole rogue-wireless type of exposure, in which people can gain access and then probe and poke around. And if it’s malware traffic from our perspective, with the IPS we took the approach that inside or outside doesn’t matter. If we can detect it and we can be in the path, it’s a block.

Gardner: For those of our listeners who might not be familiar with the term “intrusion prevention systems,” maybe you could illustrate and flesh that out a bit. What do we mean by IPS? What are we talking about? Are these technologies? Are these processes, methodologies, or all of the above?

O’Shea: TippingPoint is an appliance-based technology. It’s an inline device; we deploy it inline, so it sits in the network and the traffic flows through it. It’s looking at characteristics of the traffic and at reputation. Reputation is a more real-time change in the system -- this network, IP address, or URL is known for malware, and so on. That’s a dynamic update. The static updates are signature-based: the detection of a vulnerability or a specific exploit aimed at an operating system.

So intrusion prevention is detecting that traffic and blocking it, preventing it from completing its communication to the end node.
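
Conceptually -- and this is only a minimal sketch of the idea, not TippingPoint's actual implementation -- an inline filter combines the dynamic reputation feed with static signature matching to decide whether a flow ever reaches the end node:

    # Illustrative sketch of inline prevention logic; the addresses and patterns are made up.
    BAD_REPUTATION = {"203.0.113.7", "malware-host.example"}   # dynamic feed of known-bad addresses/URLs
    SIGNATURES = [b"\x90\x90\x90\x90", b"cmd.exe /c"]          # static exploit/vulnerability patterns

    def inline_decision(src: str, dst: str, payload: bytes) -> str:
        # Reputation check: block traffic to or from hosts known for malware.
        if src in BAD_REPUTATION or dst in BAD_REPUTATION:
            return "BLOCK (reputation)"
        # Signature check: block payloads that match a known exploit pattern.
        if any(sig in payload for sig in SIGNATURES):
            return "BLOCK (signature)"
        # Anything else is forwarded toward the end node.
        return "FORWARD"

    print(inline_decision("10.0.0.5", "203.0.113.7", b"GET / HTTP/1.1"))  # -> BLOCK (reputation)

The reputation set changes in near real time as the feed updates, while the signature list changes with profile deployments -- the dynamic versus static distinction O'Shea describes above.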

Gardner: And these work in conjunction with other approaches, such as security information and event management (SIEM) and network-based anomaly detection. Is that correct? How do they work together?

Bigger picture

O’Shea: All the events get logged into HP ArcSight to create the bigger picture. Are you seeing these types of events occurring in other places? So you have the bigger-picture correlation.

Network-based anomaly detection is the ability to detect something occurring in the network based on an IP address or a flow. Taking advantage of reputation, we can insert the IP addresses, detected based on flow, that are doing something anomalous.

It could be that they’re beaconing out or spreading a worm. If they look like they’re causing concern with a high degree of accuracy, then we can put that into the reputation feed and take advantage of it to block them.

So reputation is a self-deploying feature. You insert an IP address into it and it can self-update. We haven’t taken the automated step yet, although that’s in the plan. Today, it’s a manual process for us, but ideally, through application programming interfaces (APIs), we can automate all of that. It works in a lab, but we haven’t deployed it in production that way.
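
To make the automation idea concrete -- purely as a hedged sketch, since the management-console endpoint and payload below are hypothetical, not a documented TippingPoint API -- pushing an anomalous address into a reputation feed from a script might look like this:

    import requests  # widely used third-party HTTP client

    SMS_URL = "https://sms.example.internal/reputation/entries"  # hypothetical endpoint
    API_TOKEN = "placeholder-token"                               # placeholder credential

    def add_to_reputation(ip_address: str, reason: str) -> None:
        """Push an IP flagged by anomaly detection into the reputation feed so inline devices block it."""
        resp = requests.post(
            SMS_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"ip": ip_address, "tag": "anomalous", "reason": reason},
            timeout=10,
        )
        resp.raise_for_status()

    # Example: an address that network-based anomaly detection flagged for beaconing.
    add_to_reputation("198.51.100.23", "beaconing detected by anomaly detection")

Whatever the real interface looks like, the workflow O'Shea describes stays the same: a person (today) or a script (eventually) inserts the address, and the reputation feature self-deploys the block across the inline devices.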

Gardner: Clearly HP is a good example of a large enterprise, one of the largest in the world, with global presence, with a lot of technology, a lot of intellectual property, and therefore a lot to protect. Let’s look at how you actually approached protecting the HP network.

What’s the vision, if you will, for HP's Global Cyber Security, when it comes to these newer approaches? Do you have an overarching vision that then you can implement? How do we begin to think about chunking out the problem in order to then solve it effectively?

O’Shea: You want to be able to detect, block, and prevent as an overarching strategy. We also wanted to take advantage of inserting a giant filter inline on all data going into the data center. We wanted to prevent mal traffic, malformed traffic, malware -- any traffic with "mal" intent -- from reaching the data center.

So why make blocking an application decision and rely on host-level defenses, when we have the opportunity to do it at the network level? It made the network more hygienic, blocking traffic that you don’t want to see.

We wrapped it around the data center, so all traffic going into our data centers goes through that type of filter. [Learn more about intrusion prevention systems.]

Gardner: You’ve mentioned a few HP products: TippingPoint and ArcSight, for example, but this is a larger ecosystem approach and play. Tell us a little bit about partnerships, other technologies, and even the partnerships for implementation, not just the technology, but the process and methodologies as well.

Key to deployment

O’Shea: That was key to our deployment, because it is an inline technology and you are going inline in the network. You’re changing flows; it could be mal traffic, but maybe a researcher is trying to do something legitimate. So we needed that level of partnership with the network team. They have to see it. They have to understand what it is. It has to be manageable.

When we deployed it, we looked at what could go wrong, and we designed around that. What could go wrong? A device could fail. So we have an N+1 type of installation. If a single device fails, we’re not down and we’re not blocking legitimate traffic. We have to handle the capacity of our network, which is growing -- and we are growing -- so it has to be built for now and for the future. And it has to be manageable.

It has to be understandable by the "first responders," the people who get called first. Everybody blames the network first, and then it's the application afterward. So the network team gets pulled in on many calls, at all hours, and they have to be able to get that view.

So it was key to give them broad-based training, so that the technology understanding was there. We also had to integrate a process for how we handle updates and how we add things beyond what TippingPoint recommends. TippingPoint makes recommendations on profiles and new settings. If we take those, do we want to add other things? So we had to have a global cyber-security view and global cyber-security input, and have that all vetted.

The application team had to be on board and aware, so that everybody understood. Finally, because we were going into a very large installed network handling a lot of different types of traffic, we brought in TippingPoint Professional Services and had everything looked at, re-looked at, and signed off on, so that what we were doing was best practice. We looked at it from multiple angles and took a lot of things into consideration.

Gardner: Now, we have different groups of people that need to work in concert to a larger degree than in the past. We have application folks, network folks, outside service providers, and network providers. It seems that we are asking for a complete view of security, which means people need to be coordinated and cooperative in ways that they hadn’t had to be before.

Is there something about TippingPoint and ArcSight that provides data, views, and analytics in such a way that it's easier for these groups to work together in ways that they hadn’t before? We know that they have to work together, but is there something about the technology that helps them work together, or gives them common views or inputs that grease the skids to collaboration?

O’Shea: One of the nice things about the way the TippingPoint events work is that you have a choice. You can send them from the individual IPS units themselves or you can proxy them from the management console. Again, the ability to manage was critical to us, so we chose to do it from the console.

We proxy the events. That gives us the ability to have multiple ArcSight instances and also to evolve. ArcSight evolves. When they’re changing, evolving, and growing, and they want to bring up a new collector, we’re able to send very rapidly to the new collector.

ArcSight pulls in firewall logs. You can get proxy events and events from antivirus. You can pull in that whole view and get a bigger picture at the ArcSight console. The TippingPoint view is of what’s happening from the inline TippingPoint and what's traversing it. Then, the ArcSight view adds a lot of depth to that.

Very flexible

So it gives a very broad picture, and from the TippingPoint side we’re very flexible and able to add collectors and stay in step with ArcSight’s growth quickly. It works in concert. That includes sending events on different ports. You’re not restricted to one port. If you want to create a secure port or a unique port for your events to go to ArcSight on, you have that ability.
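
For context on what those events look like in flight -- this is a fabricated example, not HP's actual output -- ArcSight collectors commonly ingest events in Common Event Format (CEF) over syslog, and sending a blocked-traffic record to a collector on a non-default port is straightforward:

    import socket

    # Fabricated CEF record: CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extensions
    event = ("CEF:0|ExampleVendor|ExampleIPS|1.0|4721|Reputation block|8|"
             "src=198.51.100.23 dst=10.1.2.3 act=Block msg=Known malware host")

    # "<134>" is a syslog priority header (facility local0, severity informational).
    # The collector hostname and port 5140 are placeholders for a unique, agreed-upon port.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(f"<134>{event}".encode(), ("arcsight-collector.example.internal", 5140))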

Gardner: We’ve heard, of course, how important real-time reaction is, and even gaining insights to be able to anticipate and be proactive. What did you learn through this process that allowed you to reduce or eliminate that latency, so that the window in which an attack can go on is cut? I’ve heard that a lot of times you can't prevent intrusion, but you can prevent the damage of intrusion. So how does it work in terms of this low-latency time element?

O’Shea: With TippingPoint, you get to see when an exploit is triggered. TippingPoint has a concept of Zero Day filters and a concept of Reputation. Reputation is an ongoing change, and Zero Day is a deployment of a profile. Think of Reputation as a constant updating of signatures as sites change and as the industry recognizes them. That gives you the ability to see that a site people have frequented may now be compromised, because the Reputation of the site changed.

With TippingPoint being a blocking technology, you have the low latency: the traffic is detected and blocked inline. But when you pull it back into ArcSight, you have the ability to see a holistic view. We’re seeing these events, or something that looks similar; the network-based anomaly detection is reporting some strange things happening, or some antivirus systems are reporting.

That’s a different type of reaction. You can react and deploy and say that you want to take action against whatever it is you are seeing. Maybe you need to put up a new firewall block to alleviate something.

Or, on the other hand, if TippingPoint is not seeing it, maybe you have the opportunity to activate a new signature more rapidly and deploy a new profile. This is something new, and you can take action right away.

Gardner: Jim, let's talk a bit about what you get when you do this correctly. So using HP’s example, what were some of the paybacks, both in technical terms, maybe metrics of success technically, but then also business results? What happens when you can deploy these systems, develop those partnerships, and get cooperation? How can we measure what we have done here?

O’Shea: One of the things that we did wrong in our deployment is that we didn’t have a baseline of what is mal or what is bad. So, as it was a moving deployment, we don’t have hard-and-fast metrics for a before-and-after view. But again, you don’t know what's bad until you start trying to detect it. It might not have been possible for us to even take that type of view.

We deployed TippingPoint. Since the deployment, we’ve had some denial-of-service (DoS) attacks against us, and they have been blocked and deflected. We’ve had some other events that we’ve been able to block and defend against rapidly. [Learn more about intrusion prevention systems.]

If you think back historically to how we dealt with them, those were kind of Whack-a-Mole-type defenses. Something happened, and you reacted. So I guess the metric would be that we’re not as reactionary -- but do we have hard metrics to prove that? I don’t have those.

How much volume?

Gardner: We can appreciate the scale of what the systems are capable of. Do we have a number of events detected or that sort of thing, blocks per month, any sense of how much volume we can handle?

O’Shea: We took a month’s sample. I’m trying to recall the exact number, but it was 100 million events in one month that were detected as mal events. That includes Internet-facing events -- that’s why the volume is high -- but it was 100 million events that were automatically blocked and flagged as mal events.

Gardner: How do you now take this out to the market? Is there a cyber-security platform? Do you have a services component? You’ve done this internally, but how do you take this out to the market, combining the products, the services, and the methodologies?

O’Shea: I’m not on the product marketing side, but TippingPoint has learned from us and we’ve partnered with them. We’re constantly sharing back with them. So the give-back to TippingPoint, as a product division, is that they can see real traffic, in a real high-volume network, and they can pretest their signatures.

There are active lighthouse-type installs -- lighthouse meaning that they’re not actively blocking. They’re just observing, and they’re testing their next iteration of software and the next group of profiles. They’re able to do that for themselves, and it's a give-back that has worked. What we receive is a better product, and what everybody else receives is a better product.

The Professional Services teams have been able to deploy in a very large network and have worked with the requirements that a large enterprise has. That includes standard deployment -- how things are connected, what the drawings are going to look like, and how you’re going to cable it up.

A large enterprise has different standards than a small business would have, and that was a give-back to Professional Services -- being able to deploy it in a large enterprise. It has been a good relationship, and there is always opportunity for improvement, but it certainly has helped.

Current trends

Gardner: Jim, looking to the future a little bit, we know that there’s going to be more and more cloud and hybrid-cloud types of activities. We’re certainly seeing already a huge uptick in mobile device and tablet use on corporate networks. This is also part of the bring-your-own-device (BYOD) trend that we’re seeing.

So should we expect a higher degree of risk, more variables, and more complication, and what does that portend for the use of these types of technologies going forward? How much gain do you get by getting on the IPS bandwagon sooner rather than later?

O’Shea: BYOD is a new twist on things, and it means something different to everybody because it's an acronym that people interpret differently. But let's take the view of you bringing in a product you buy.

Somebody is always going to get a new device, bring it in, try it out, and connect it to the corporate network if they can. And because they’re coming from a different environment and aren’t necessarily up to corporate standards, they may bring unwanted guests into the network in the form of malware.

Now, we have the opportunity, because we are inline, to detect and block that right away. Because we are an integrated ecosystem, they will show up as anomalous events. ArcSight and our Cyber Defense Center will be able to see those events. So you get a bigger picture.

Those events can then be translated into removing that node from the network. We have the opportunity to do that. BYOD not only means bring your own device; it also brings things you don’t know are going to happen, and the only way to block that is prevention and anomaly detection, and then trying to bring it all together into a bigger picture.

Gardner: Well, great. I’m afraid we will have to leave it there. We’ve been learning about the modern ins and outs of improving enterprise intrusion prevention systems, and we’ve heard how HP itself has created more granular access-control benefits amid real-time, yet intelligent, intrusion detection and protection.

I’d like to thank the supporter for this series, HP Software, and remind our audience to carry on the dialogue through the Discover Group on LinkedIn. And of course, a big thank you to our guest, Jim O'Shea, Network Security Architect for HP Cyber Security Strategy and Infrastructure Engagement. Thanks so much, Jim.

O’Shea: Thank you.

Gardner: And lastly, our appreciation goes out to our global audience for joining us once again for this HP Discover Podcast discussion.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored business success stories. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Learn more about intrusion prevention systems.

Transcript of a BriefingsDirect podcast on how the strategy of dealing with malware is shifting from reaction to prevention. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
