Wednesday, October 02, 2013

Panel of Business Experts Explores Role and Value of Big Data in Customer Analytics

Transcript of a BriefingsDirect podcast on how firms are using HP Vertica to gain more and faster insight from customer actions and interactions.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving their business performance for better access, use and analysis of their data and information. This time we’re coming to you directly from the HP Vertica Big Data Conference in Boston.

Our next innovation case study panel discussion highlights how various organizations are developing the means to gain far better analytics about their customers. To learn more about how high-performance, cost-effective big data processing enables rapid learning about customers' wants and preferences, please join me now in welcoming our guests, Rob Winters, the Director of Reporting and Analytics at Spil Games, based in Amsterdam. Welcome, Rob.

Rob Winters: How is it going?

Gardner: It’s going great. We're also here with Davide Conforti, Business Intelligence Director at Jobrapido, based in Milan. Welcome, Davide.

Davide Conforti: Thank you, guys. Welcome.

Gardner: And we are also here with Pete Fishman, Director of Analytics at Yammer, based in San Francisco. Welcome. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Pete Fishman: Thanks, Dana.

Gardner: Businesses have been analyzing customers for a long time. This isn’t something new -- needing to know a lot about your customer. What’s different now about truly getting to know your customer? Let’s start with you, Pete.

Fishman: I work in the software industry, and our data on customers now all lives in a central place. We're a cloud software service, and the data is big. By aggregating across companies that are using your software, you get really significant sample sizes and real inference, both in an economic sense, in terms of measuring the lift, and, because the sample sizes are big, in a statistical sense as well.

That’s the starting point for making analytics valuable and learning about your customers.
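
To make that concrete, here is a minimal sketch, with hypothetical traffic numbers rather than Yammer's, of the kind of lift measurement Fishman describes: a two-proportion z-test, where a 0.1-point lift that would vanish in a small sample becomes decisive at web scale.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical: 2.0% vs. 2.1% engagement, one million users per group.
lift, z, p = two_proportion_ztest(20_000, 1_000_000, 21_000, 1_000_000)
print(f"lift={lift:.4%}  z={z:.2f}  p={p:.6f}")  # z is near 5: real inference
```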

Gardner: Rob, what’s different now, in terms of being able to get information, than 10 years ago?

Different problems

Winters: For me, the problem space is extremely different from what I was dealing with a couple of years back.

I was in telecom before this. There, you're dealing with 25 million people, and if you rescore them once a month, that’s fast enough. On a web scale problem, I'm dealing with 200 million customers and I have to rescore them within 10 or 15 minutes. So you're capturing significantly more data. We're looking at billions of records per day coming into our systems. We have to use it as fast as possible, because with the customer experience online, minutes matter.

Gardner: Is this a familiar story to you, Davide? How are things different for you in terms of getting to know your customers?

Conforti: It's absolutely the same story. We have about 40 million unique visitors per month now. We've grown by double digits since our start as a startup in 2006. Now, everything is about user interaction: how our users behave on-site, how we can engage them more, and how we can provide them a tremendous ad-hoc user experience.

Gardner: So it's not just getting to know your customers. It's following your customers. It’s their actions that you can capture. I suppose that's pretty interesting and new, but let’s start with Spil Games. Tell us about your organization. How did you get such a big audience?

Winters: We've been around for about nine years. We started out as just a Dutch company and then we've acquired other local domain names in a variety of languages. At this point, we have about 50 different platforms, running in about 20 different languages. So we support customers from all over the world. In a given month, we have over 200 countries with traffic onto our sites.

For us, growth was initially about just getting that organic traffic. Up until a few years ago, if you had a good domain name, you were competing based off of where you ranked in search. Now, the entire business is changing, and you're competing based off that customer experience that you can deliver.

Gardner: Tell us what kinds of games they are, and who they're targeted at.

Winters: We have a couple of target audiences: young girls, ages 8-14; boys; and then women. We're primarily a platform. We do some game development and publishing, but our core business is just being the platform where people can come and find content that's interesting to them.

Gardner: Let's hear more about Yammer. Tell me, Pete, what Yammer is and does, and how you got to such huge numbers and big data.

Fishman: Yammer is a startup in San Francisco. We were acquired about a year ago by Microsoft and we're part of the larger Office organization. We view ourselves as enterprise social, taking this many-to-many communication model and making communication at your company much more efficient.

It's about surfacing relevant knowledge and experts and making work lives better. I run an analytics team there, and we essentially look at the aggregate customer behaviors and what parts of our tool people are using.

Social networks

Gardner: So, this was interesting for you as a social network within the confines of an enterprise or a business. What goes on in that network is important data. You can learn tribal knowledge, capture it, and apply it to other problems, which perhaps you can't do on some of the more public or free and open social networks.

Fishman: Exactly. This was a really revolutionary idea that our founders David Sacks and Adam Pisoni had, way back when Facebook wasn't nearly as relevant as it is today. We've leveraged a lot of the ways people have learned to interact in their social lives and brought some of that efficiency of communication to the workplace.

For example, telling you that I've gotten engaged or I'm having a baby, all these pictures go on Facebook. It's an efficient way of getting many-to-many communication. They saw that these social networks would grow and be relevant in a private, secured context of your business.

Gardner: Let's learn more about Jobrapido. Tell me about your organization and some of the reasons there's so much data to analyze.

Conforti: Jobrapido started in 2006 as an entrepreneurial challenge that Vito Lomele, an Italian guy, started in Milan. It's quite a challenge to live in the online market in Italy, because the talent pool isn't as wide as in the U.S. or in other countries in Europe. What we do is provide job-seekers the opportunity to find their new job.

We're an online job-search engine and we currently operate in 58 different countries with more than 20 languages. We're all in this big headquarters in Milan with a lot of different nationalities, because of course, we provide the service in local languages for most of our customers.

Recently, we were purchased by the Daily Mail Group, a big media group based in London. For us, everything from job-seeker acquisition to retention and engagement depends on consistent quality and user experience on-site. We use our big data warehouse to understand how to better attract and retain customers on the basis of their preferences. And we also use it to tweak our matching algorithm, which works more or less like a Google algorithm.

We crawl a lot of content from different sources, both job boards and other job sites, or directly from the career pages of individual companies. We put it all together in a big database and, using statistical tools, we infer which rankings our job-seekers want to see.

So it's a pretty heavy data-crunching exercise that we do every day on millions and millions of different sponsored or organic postings.
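
Jobrapido has not published its matching algorithm, but a toy sketch of one common ingredient, ranking postings by a smoothed click-through rate, gives the flavor of that daily crunching. The postings, counts, and prior values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Posting:
    title: str
    impressions: int  # times the posting was shown in search results
    clicks: int       # times a job-seeker clicked through

def smoothed_ctr(p: Posting, prior_clicks=5.0, prior_impressions=500.0):
    # The prior keeps a brand-new posting from ranking on a few lucky clicks.
    return (p.clicks + prior_clicks) / (p.impressions + prior_impressions)

postings = [
    Posting("Software Engineer, Milan", impressions=12_000, clicks=480),
    Posting("Data Analyst, London", impressions=300, clicks=30),
    Posting("QA Tester, Berlin", impressions=9_000, clicks=90),
]

for p in sorted(postings, key=smoothed_ctr, reverse=True):
    print(f"{smoothed_ctr(p):.3f}  {p.title}")
```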

Gardner: And just to be clear, this is a site not only for those who are looking for jobs, but for those who are looking to hire as well.

Moving to B2B

Conforti: True. Most of our business deals with B2C, but we're developing tools and a B2B platform to address players such as job boards. We crawl and get sponsored ads from job boards as well, but we're going more and more toward our end customers.

For example, if the Yammer guys or the Spil Games guys want to hire a software engineer, they can promote their sponsored ads directly on Jobrapido without having to sponsor them on a job board. So we're trying to aggregate and simplify the chain of job search.

Gardner: Now that we know more about you, let's learn more about the problem you had when it comes to managing big data, getting at those all-important customer insights and analyses, and making them available to your workers and strategists.

Rob, let's start with you. What was the problem you had to solve when it comes to getting at this data and analysis?

Winters: For me, my problem was that no one had ever tried to do it in my company before. We walked in with effectively a clean slate. But as you start to bring in different data sources, you start with all the stuff that you know you're going to need right away.

You start seeing needed links for other data sources. At this point, we're pulling data from thousands of databases, merging with dozens of application programming interfaces (APIs). You're pulling in your web log data, so that you can personalize for those folks who aren’t giving you registration information.

For me the challenge was multifold. How do you deal with this data problem, with this variety and volume of information? How do you present it in a meaningful fashion for employees who've never looked at data before, so that they can make good decisions on it? And how do you run models against it and feed that back into a production environment as quickly as possible, so that you can give those customers a better experience than they were ever getting before on your platform?

Gardner: How did you solve it?

Winters: We're still trying to solve it, to be honest. If you look at it, we've built a technology stack that is a mixture of open source, commercial, and proprietary software that we've developed to solve these different problems. It's an ongoing journey for us -- how we do these things, and we're moving forward two steps, falling back one, and continuing along this path.

Gardner: What was it about an HP Vertica architecture that helped mitigate some of these issues? Was there a comparison to the way you had done it before, or did you go directly to a Vertica solution when you encountered these issues?

Large data

Winters: When we first started looking for a data warehouse appliance or application, we were running Postgres with no indices, just copies of production data. For data guys, that means a query against a table of a couple of million rows could take eight hours to execute.

We knew that a typical row-based solution was out. So we started looking at some of the other applications out there. The big ones are Teradata, Exadata, and Greenplum, but you're going to have to mortgage the house of every employee in the company to be able to afford a license for those applications, and we're a pretty small company. So those were out.

Then, we started looking at some of the other boutique vendors like Infobright, and basically we saw that with Vertica, we can have relatively low load on our database administrator (DBA), so we can develop quickly without a lot of maintenance.

The pricing model fits what we need to achieve, and the performance is so good that we don't have to spend a ton of time on optimization now. We can basically move very rapidly along this path of becoming a data-driven organization without having to get held up on index optimization or trying to optimize our queries and rewrite paths.

We can just throw a lot of stuff into the system, smash it together, take the results, and get big wins for the company quickly.
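
As an illustration of the kind of ad-hoc query Winters says runs without index tuning, here is a sketch using the open-source vertica-python client. The cluster address, credentials, and schema are hypothetical, not Spil Games'; in a column store, a query like this scans only the columns it names.

```python
import vertica_python  # pip install vertica-python

conn_info = {
    "host": "vertica.example.internal",  # hypothetical cluster
    "port": 5433,
    "user": "analyst",
    "password": "...",
    "database": "games",
}

# Hypothetical wide fact table of play events.
SQL = """
    SELECT platform,
           COUNT(DISTINCT user_id) AS players,
           AVG(session_seconds)    AS avg_session
    FROM play_events
    WHERE event_date >= CURRENT_DATE - 30
    GROUP BY platform
    ORDER BY players DESC
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(SQL)
    for platform, players, avg_session in cur.fetchall():
        print(platform, players, round(avg_session, 1))
```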

Gardner: And how important is it for you to be able to deploy this on appliances only, or do you have other directions that you would like to go with that?

Winters: No, we're doing everything on our own premises. We have a data center, and we do everything on our own private servers. For us, the next step is probably going to be moving more into a private-cloud model, and hopefully, Vertica will work in that environment as well.

Gardner: At Yammer, let's look at your problem set and how you went about solving it.

Fishman: I think of the problem set more broadly than just data. Our problem was that a lot of people were trying to get into the enterprise social space. A lot of social networks were popping up, and essentially competing for attention at work is a challenge.

We felt that data was necessary to have a competitive advantage. David Sacks and Adam Pisoni had a vision of developing a consumer software company with rapid iteration. With that rapid iteration, you get an extra advantage if you're able to reorient yourself based on what part of the product is working. Our data problems were largely about making data a competitive advantage in our development methodology.

Gardner: What was it about Vertica that was instrumental to the point where you've adopted it? Is it a concurrency issue, a volume issue, speed, or all the above?

It's about speed

Fishman: It's all of the above, but the real highlight is always going to be speed, especially given the incredible competition for talent, not just in the Bay Area but all over, and especially in the data field.

Anybody who has data in their title is someone who's highly sought after. The ability to minimize cycle times for those folks, who are such a challenge to recruit and to keep excited about the projects they're working on, and to give them a solution that lets them maximize their own abilities, is really critical. It's the same in our space, and in software development in general.

Since we're in Boston, I feel like I can use a baseball analogy. Hall of Fame product managers are like Hall of Fame baseball players, meaning they get it right about a third of the time. When we take on these big risks and challenges, the ability to very quickly identify whether we're going in the right direction, and then reorient where we're going, has been really critical to Yammer being successful.

Gardner: I guess we could say it's better to give your data scientists a Ferrari than a go-kart?

Fishman: That seems like a good investment these days.

Gardner: Davide, what's the Ferrari in your organization? How did you get to one and what were you using before?

Conforti: When I joined Jobrapido, we already ran tons of A/B tests, which are the lifeblood of our product innovation. We want to test everything, from changing the color or the font of one button to a different layout, because these can have a tremendous impact on user engagement.

Before, we used the Google Analytics tools, but we didn't like them much, because they sample the data, so you can hardly reach statistically meaningful results. We decided to build a data warehouse to assure flexibility, performance, and also a higher level of control and data consistency: end-to-end control from the source through to the visualization, in order to make the results more actionable for product development.

With Vertica, we did exactly this. We poured all the different data sources into one bucket, organized it, and now we have full control over the data model. With my team, I manage these data models. It's fascinating how fast you can add pieces to the puzzle or remove others that are no longer interesting, because our business model, of course, is a living creature.

We really appreciate this flexibility and the high level of control that Vertica allows. This has improved our innovation throughput a lot, and it's going to improve it even more in the future.
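
Conforti's objection to sampled data can be made concrete with a standard sample-size estimate: detecting a small lift needs more observations than a sampled feed leaves you. A rough sketch, with hypothetical numbers:

```python
import math

def n_per_arm(p_base, lift, z_alpha=1.96, z_power=0.84):
    """Approximate users per arm to detect an absolute lift in a
    conversion rate at 95% confidence and 80% power."""
    p1, p2 = p_base, p_base + lift
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / lift ** 2)

# A 0.2-point lift on a 5% baseline needs roughly 190,000 users per arm,
# so a tool that keeps only a sample of traffic rarely gets there.
print(n_per_arm(0.05, 0.002))
```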

Gardner: Do you have any metrics of success for comparison, either in time, concurrency, or volume? Most of our listeners and audience are interested in some hard facts. Do you have any feeds and speeds you can share?

Conforti: Currently, we crunch about 30 GB of data on Vertica every day; that is, we load about 30 GB a day into Vertica. But we're going to double that in a few months, because we're adding more. We want to know more about the click patterns of our job-seekers on the site, and this is massive data flowing into Vertica. Also, our licensing in terabytes will likely double in the future.

Increased performance

Another hard fact I can share with you is that anyone using Vertica shouldn't settle for the first implementation of a query. If you're able to optimize it, you can often increase the query's performance by more than 100 percent. That's my personal experience working with consultants and advisers. Vertica is happy to provide the support, and this is really value-adding.

Gardner: Given that you're seeing such a large increase in your data volume so rapidly, do you have a sense of cost prediction, or at least visibility into the relationship between the task and the total cost?

Conforti: What we're trying to understand is whether we have to pour this big amount of data all into Vertica, or whether we should flank it with Hadoop or some sort of cheaper storage solution in order to better control costs. Currently, I don't have the figures or a model to estimate how the cost moves with the numbers. It's a pretty good point. I will build one and share the results with you in the future.

Gardner: Rob Winters, any metrics of success and/or how do you feel about visibility into controlling costs?

Winters: As far as metrics of success, when we were doing our proof of concept (POC), we looked primarily at query performance. At that point, we weren't looking at using it for prediction and personalization, but just for analytics and reporting.

We benchmarked against an indexed Postgres database, where we had done some optimization on the data. Our queries were running more than 1,000 percent faster, and Vertica was scaling pretty linearly, whereas with Postgres, when we put more data into the tables, they just started choking and died completely.

For me, it allowed me to actually do my job and have my team do their jobs, which is a pretty big metric of success.

The other thing is that with a relatively small cluster, we can support hundreds of people and reports directly accessing the database, a dozen analysts or people who directly query information out of the database, and all of our personalization activities simultaneously with minimal performance hiccups. That’s a big metric of success.

Gardner: Pete, how do you judge this? What are the important metrics? Maybe you could wow us with some of your speeds and feeds, too.

Fishman: I have similar feedback to Rob's, comparing against a Postgres database. The speeds are at least one, and probably closer to two or more, orders of magnitude faster. Certainly on the cost side, it's important with data to consider the whole cost. So this is sort of a theme.

End-to-end costs

There are costs in managing the data and teasing out the useful insights that aren't necessarily in the sticker price. When considering a data solution, people should consider the end-to-end costs. What's really the cost per insight, as opposed to the cost per terabyte or the cost per whatever?

We certainly feel that Vertica has been our best solution. We've been customers for over three years. So it's quite a long relationship. I couldn’t imagine going back to a multi-day query, or something like that.

Gardner: So on that important new metric of cost-per-insight, do you see a trend for that?

Fishman: One thing that Davide mentioned is that he's forecasting how much data he will be putting into Vertica. I'm a forecaster myself by trade. Back in 2010, we were doing some estimates of where we would be by the end of 2011 in terms of our data volumes. This is a pretty simple extrapolation, and I got it wrong by at least an order of magnitude.

What we found is that when you start to get real insights from data, you want to get a little bit more, collect it maybe here or there. Also, as our product was growing, we faced some real exponential growth in the data and adopted clever solutions for the metric we care about: minimizing the cost per insight.
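
Fishman's forecasting miss is easy to reproduce: extrapolate early growth linearly when the underlying process is exponential. A toy illustration with made-up volumes:

```python
# Six months of data volume in TB (hypothetical, roughly exponential).
observed = [1.0, 1.3, 1.7, 2.2, 2.9, 3.8]

# Linear extrapolation from the average monthly increment.
step = (observed[-1] - observed[0]) / (len(observed) - 1)
linear_12mo = observed[-1] + step * 12

# Exponential extrapolation from the average monthly growth ratio.
ratio = (observed[-1] / observed[0]) ** (1 / (len(observed) - 1))
exp_12mo = observed[-1] * ratio ** 12

print(f"linear forecast, 12 months out:      {linear_12mo:6.1f} TB")
print(f"exponential forecast, 12 months out: {exp_12mo:6.1f} TB")
# The two forecasts differ by roughly an order of magnitude -- the same
# kind of miss described above.
```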

Gardner: But you're not willing to predict if that's going to go up or down based on your efficiency and the use of the technology?

Fishman: There are many things going on simultaneously. Tripping over really valuable insights can happen a lot more easily when you're naïve about the data; essentially, you face headwinds over time, in that finding new insights becomes harder. At the same time, you have larger data volumes and some economies of scale there. So there are a lot of things interacting simultaneously, but clearly one way to drive down that metric is best-in-breed tools.

Gardner: Of course, it's better to get the information to the people who can use it than simply to look to cut costs.

Fishman: Of course. If you view analytics as a cost center, that's the wrong view. It should be aimed at optimizing revenue streams. We micro-optimize the product, we micro-optimize sales and marketing, the business. Analytics is about making everybody better at their job, making data available to allow people to be more effective.

Gardner: Well, great. I'm afraid we will have to leave it there. We've been learning about how various organizations are developing the means to far better analyze their customers, and these are some impressive organizations with very large sets of customers and data that go along with that.

We've seen how they deployed the HP Vertica Analytics Platform to provide better analytics to their internal users, and then, in some cases, back out to the very customers that they are gathering data from. So a big thank you to our guests, Rob Winters, Director of Reporting and Analytics at Spil Games, based in Amsterdam. Thanks so much.

Winters: Thank you.

Gardner: And we've also been joined by Davide Conforti, Business Intelligence Director at Jobrapido in Milan. Thank you, Davide.

Conforti: Thank you, guys. It's been a pleasure.

Gardner: And also Pete Fishman, Director of Analytics at Yammer in San Francisco. Thanks, Pete.

Fishman: My pleasure. Thank you very much.

Gardner: And thanks to you all for joining us for this special HP Discover Performance Podcast coming to you from the HP Vertica Big Data Conference in Boston.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP Sponsored Discussions. Thanks again for joining us, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how firms are using HP Vertica to gain more and faster insight from customer actions and interactions. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, September 26, 2013

Application Development Efficiencies Drive Agile Payoffs for Healthcare Tech Provider TriZetto

Transcript of a BriefingsDirect podcast on how a major healthcare software provider is using HP tools to move from waterfall to agile.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how IT leaders are improving their services' performance to deliver better experiences and payoffs for businesses and end users alike, and this time we're coming to you directly from the HP Discover 2013 Conference in Las Vegas.

We're here the week of June 10 to explore some award-winning case studies from leading enterprises. And we'll see how a series of innovative solutions and an IT transformation approach to better development, test, and deployment of applications is benefiting these companies.

Our next innovation case study interview highlights how TriZetto has been improving its development processes and modernizing its ability to speed the application development process, bringing better tools to its internal developers as well as supporting a lifecycle approach to software.

To learn more about how TriZetto is modernizing its development and deployment capabilities, please join me in welcoming Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Rubina Ansari: Thank you, Dana.

Gardner: We hear a lot about improving software capabilities, and Agile of course is an important part of that. Tell me where you are in terms of moving to Agile processes, and we'll get more into how you're enabling that through tools and products.

Ansari: TriZetto is currently going through an evolution, moving from a structured waterfall methodology to scaled Agile. As you mentioned, that's one of the innovative ways we're looking at getting our releases out faster, with better quality, and being able to respond to our customers. We realize that Agile, as a methodology, is the way to go when it comes to all three of those things.

We're currently in the midst of evolving how we work. We’re going through a major transformation within our development centers throughout the country.

Gardner: And software is very important to your company. Tell us why, and then a little bit about what TriZetto does.

Ansari: TriZetto is a healthcare software provider. We have the software for all areas of healthcare. Our mission is to integrate different healthcare systems to make sure our customers have seamless information. Over 50 percent of the American insured population goes through our software for their claims processing. So, we have a big market and we want to stay there.

Leaner and faster

Our software is very important to us, just as it is to our customers. We're always looking for ways to be leaner and faster while maintaining our quality, in order to keep up with all the healthcare changes that are happening.

Gardner: You've been working with HP Software and Application Lifecycle Management (ALM) products for some time. Tell us a little bit about what you have in place, and then let's learn a bit more about the Agile Manager capabilities that you're pioneering.

Ansari: We've been using HP tools in our testing area, such as the QTP products, Performance Center, and Quality Center. We recently went ahead with ALM 11.5, which has a lot of cross-project abilities. As for agile, we're now using HP Agile Manager.

This has helped us move fairly quickly into scaled agile using HP Agile Manager, while integrating with our current HP tools. We wanted to make sure our tools were integrated, so that we didn't lose traceability and the effectiveness of having a single vendor for all our data.

HP Agile Manager is very important to us. It's a software-as-a-service (SaaS) model, and it was very easy to implement within our company. There was no concept of installing, and the response we get from HP has been very fast. This is our first experience with a SaaS deliverable from HP.

They're following agile themselves, so we get releases every three months. Actually, every few weeks we get fixes for defects we may find within their product. It's worked out very well. It's very lightweight, it's web-based SaaS, and it integrates with our current tool suite, which was vital to us.

Gardner: And how large of an organization are you in terms of developers, and how many of them are actively using these products?

Ansari: We have between 500 and 1,000 individuals on development teams throughout the United States. For Agile Manager, the last time we checked, it was approximately 400. We're hoping to get up to 1,000 by the end of this year, so that everyone is using Agile Manager for their agile/scrum teams, backlogs, and development.

Gardner: Tell us a bit also about how paybacks are manifesting themselves. Do you have any sense of how much faster you're able to develop? What are the paybacks in terms of quality, traceability, and tracking defects? What's the payback from doing this in the way you have?

Working together

Ansari: We've seen some, but I think the most is yet to come as we roll this out. One of the things Agile Manager promotes is collaboration and working together in a scrum team. Because the software is built around agile processes, Agile Manager makes it very easy for us to roll out an agile methodology.

This has helped us collaborate better between testers and developers, and we're finding those defects earlier, sometimes before they even become problems. We'll have more hard metrics around this as we roll it out further. One of the major reasons we went with HP Agile Manager is that it has very good integration with the development tools we use.

They integrate with several development tools, allowing our testers to see what changed, and which piece of code changed, for each defect or enhancement they're testing. That tight integration with other development tools was a pivotal factor in our decision to go forward with HP Agile Manager.

Gardner: So Rubina, not only are you progressing from waterfall to agile and adopting more up-to-date tools, but you've made the leap to SaaS-based delivery for this. If that's working out as well as you've said, do you think this is going to lead to doing more with other SaaS tools and capabilities, and maybe even looking at cloud platform-as-a-service opportunities?

Ansari: Absolutely. This was our first experience, and it's going very well. Of course, there were some learning curves and some learning pains. Being able to get these changes so quickly, without having to make them ourselves, was a bit of a mind-shift for us. We're reaping the benefits from it, obviously, but we did need a few more scheduled conversations, release notes, and documentation about changes from HP.

We're not new to SaaS. We're also looking at offering some of our products in a SaaS model, so we realize what's involved in it. It was great to be on the receiving end of a SaaS product, knowing that TriZetto is playing in that space as well.

Gardner: Tell us what the future holds. Are you going to be adding any additional lifecycle elements moving on this journey, as you've described it? What's next?

Ansari: There's always so much more to improve. What we’re looking for is how to quickly respond to our customers. That means also integrating HP Service Manager and any other tools that may be part of this software testing lifecycle or part of our ability to release or offer something to our clients.

We'll continue doing this until there is no more space for efficiency. But, there are always places where we can be even more effective.

Mobile development

Gardner: How about mobile development? Is that something that’s on your radar and that you’ll be doing more of, given that devices are becoming more popular? I imagine that’s true of your customers too?

Ansari: We've talked about it, but it's really not on our roadmap right now. It hasn't been one of our main priorities.

Gardner: I suppose that you're in a good position to be able to move in that direction should you decide to.

Ansari: Absolutely. There's no doubt. The technologies we're advancing toward will allow us to easily go into the mobile space once we plan to do that.

Gardner: Well great. I'm afraid we’ll leave it there. We’ve been learning about how TriZetto has been moving to a more agile methodology for its development and using a variety of HP software products for Application Lifecycle Management.

So please join me in thanking our guest. We’ve been here with Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto. Thank you.

Ansari: Thank you. The pleasure was all mine.

Gardner: And I'd also like to thank our audience as well for joining us for this special HP Discover Performance Podcast coming to you from the HP Discover 2013 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a major healthcare software provider is using HP tools to move from waterfall to agile. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Monday, September 23, 2013

Navicure Gains IT Capacity Optimization and Performance Monitoring Using VMware vCenter Operations Manager

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed advanced virtualization to meet the demands of an ever-growing business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2013 VMworld Conference in San Francisco. We're here the week of August 26 to explore the latest in cloud-computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout the series of VMware-sponsored BriefingsDirect discussions.

Our next innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. We're going to hear how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're using dashboards and other analysis to tame IT complexity, and gain better return on their IT investments, please join me in welcoming Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia. Welcome, Donald. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Donald Wilkins: Glad to be here.

Gardner: Tell us a little bit about why your organization is focused on taming complexity. Is this focus a result of cost, of complexity itself, or both?

Wilkins: At Navicure, we've been focused on scaling a fast-growing business, and if you incorporate very complex infrastructure, it becomes more difficult to scale. So we're focused on technologies that are simple to implement yet have a lot of headroom for growth in the storage, the infrastructure, and the software we use. We do that in order to scale to the growth we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 timeframe. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system. We're growing at a rate of 8,000 to 10,000 physicians a year, and it's healthy growth. We don't want to grow too fast, so as not to water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: As I understand it, you're an Internet-based medical claims clearinghouse. Tell us what that boils down to. What is that you do?

Revenue cycle management

Wilkins: Claims clearinghouses have been around for a couple of decades now. We've evolved from that claims-clearinghouse model to what we refer to as revenue cycle management. We pioneered that term early on, as we started the company.

We take the transactions from physicians and send them to the insurance companies. That's what the clearinghouse model is. But on top of that, we added a lot of value-added services, a lot of analytics around those transactions, to help the provider generate more revenue: they get paid faster, and they get paid the first time through the system.

It was very costly for transactions to be delayed for weeks because they were poorly submitted to the insurance company, or denied because something was coded wrong.

We try to catch all of that, so that they get paid the first time through. That's the return on investment (ROI) our customers are looking for when they look at our products: lower accounts-receivable (AR) days and increased bottom-line revenue.

Gardner: Tell us a little bit about your IT environment. What do you have in your data center? Then, we'll get to how you've been able to better manage it.

Wilkins: The first thing we did at Navicure, when we started the company, was decide that we didn't want to be in the data-center business. We wanted to use a colocation provider that does that work at a much higher level than we could ever do. We wanted to focus on our product and let the colo focus on what they do.

They serve us from an infrastructure standpoint, and we can focus on our products and build a good product. With that, we adopted the grid approach, or the rack approach, very early on. We wanted to build a foundational structure that we could just build on as the business and the transaction volume grew.

That terminology has changed over the years; today it would be called software-defined infrastructure, but back then the idea was to build infrastructure with a grid approach, so we could plug in more modules and components to scale out as we scaled up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.

Gardner: And how did you begin your path to virtualization, and how did that progress into this more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we had enough headroom in our infrastructure that it wasn't a big issue, but about four years in, we started realizing that we were going to hit a point where we would have to ramp up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we had to start adding infrastructure. Virtualization came along at such a time that we could add capacity virtually, faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, and it allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. And getting better management means further optimization, which further reduces costs. That also, of course, maintains their performance requirements. How did you then focus in on managing and optimizing this over time?

Wilkins: Well, one of the things we tried to look at, when we look at products and services, was to keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching and/or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate. We look at all these different tools to make that happen. This has led us to products like VMware's new offerings, as they drive toward the same goal of simplifying their products.

Gardner: Which products are you using? Maybe you could be more specific about what's working best for you.

Wilkins: For years, we've been doing monitoring with network-based monitoring tools. Those drive only so much value. They give us things like uptime alerting and responsiveness, but only after issues happen. We want to evolve to be more proactive in our approach to monitoring.

It’s not so much about how we can fix a problem when there is one. It’s more of, let’s keep the problem from happening to start with. That's where we've looked at some products for that. Recently we've actually implemented vCenter Operations Manager.

That product gives us a different twist than other SNMP monitoring tools do. It keeps a history of what's going on, but it also offers a forward-looking analysis of how that history will change, based on our historical trends.
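
The trend projection Wilkins describes can be approximated with a least-squares fit over utilization history. This simplified sketch uses hypothetical numbers; vCenter Operations Manager does considerably more than this.

```python
# Six months of storage utilization (%) for a hypothetical cluster.
usage = [52.0, 55.5, 58.0, 62.5, 65.0, 69.5]

n = len(usage)
x_mean = (n - 1) / 2
y_mean = sum(usage) / n
# Least-squares slope and intercept, computed by hand.
slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(usage)) / \
        sum((x - x_mean) ** 2 for x in range(n))
intercept = y_mean - slope * x_mean

# Months until the trend line crosses 100% utilization.
months_left = (100.0 - intercept) / slope - (n - 1)
print(f"growing {slope:.1f} points/month; about {months_left:.0f} months of headroom")
```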

New line-up

Gardner: Of course, here at VMworld we're hearing about vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is of particular interest to you, and have you had a chance to look it over?

Wilkins: I haven't had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanism inside the product that drives the most value, making sure we can budget a year in advance for the expanding infrastructure we need to meet demand.

Gardner: What sort of paybacks are there? Do you have any sense, on a metrics or ROI basis, of what you have been able to gain through virtualization generally, and then through the improved operations of those workloads over time?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it’s hard to pin something on there. We're growing so fast, we're trying to keep up with the demand, and we've been meeting that and exceeding that.

Really, the ROI is that our customers aren't experiencing major troubles from our infrastructure not expanding fast enough. That's our goal, to drive high availability and low downtime for our infrastructure, and we can do that with VMware and their products and services.

Gardner: How about looking to the future, Donald? Do you have any sense of whether things like disaster recovery or mobile support, perhaps even hybrid cloud services, will be something you would be interested in as you grow further?

Wilkins: We're a current customer of Site Recovery Manager. That's a staple in our virtual infrastructure and has been since 2008. We've been using that product for many years. It drives all of the planning and the testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product and services for years, and we couldn’t do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

As a small business, the value of that becomes a little harder to prove from a dollar standpoint. Some of those features like remote working come into play as office space continues to be expensive. It's something we will be looking at to expand our operations, especially as we have more remote employees working. Desktop virtualization is going to be a critical component for that.

Gardner: How about some 20/20 hindsight? For other folks who are ramping up on virtualization, or getting to the point where complexity is becoming an issue for them, do you have any thoughts on getting started, or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may come to light.

I had a very trusted partner when I started this in 2005-2006. They sat with me and worked with me, with no compensation whatsoever, to help work through virtualization. They made the value so clear that it just became, "I've got to do this, because there's no way I can sustain this level of operational expense and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner, one that has your best interest in mind and not so much a new product to sell. It has to be somebody who brings a lot to the table but, at the same time, helps you help yourself and lets you learn these products, so that you can implement and research them on your own to see what value you can bring to the company.

It's easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, and you realize you have to do this and will do whatever it takes to get it in there, because it will make your life easier.

Gardner: How about specific advice for mid-market organizations? Is there something about dashboards, a single pane of glass, or the ease of getting a sense of all the systems as the head of IT, that helps with that visibility and that you would recommend others consider?

Wilkins: Well, vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

That product allows you to do a lot of research into problems and services, drilling down from the cluster level into the virtual machine level to find your problems and pain points, and it actually allows you to isolate issues much more quickly. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute, or network resources.

That's where we're seeing value out of the product, because during budget cycles it allows me to say that, looking at our infrastructure and our current growth, we will be out of resources by a certain time and need to add this much, barring additional new products and services we may come up with. We're growing at this pace, and here are the numbers to prove it.

When you have that information in front of you, you can build a business case around it that helps the CFO and the finance people understand what you're dealing with day to day to operate the business.

Gardner: It must feel good to have some sense of being future-proof: no matter what comes down the road, you're going to be prepared for it.

Wilkins: Most definitely.

Gardner: Well, great. We'll have to leave it there. We've been talking about how an organization gains better control and optimization over its IT infrastructure, and we've heard how Navicure has deployed a comprehensive monitoring and operational management approach.

So a big thank you to our guest. We've been here with Donald Wilkins, the Director of IT at Navicure. Thanks, Donald.

Wilkins: My pleasure. Thank you.

Gardner: And thanks to our audience for joining this special podcast coming to you from the recent 2013 VMworld Conference in San Francisco.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed virtualization to meet the demands of an ever-growing business. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, September 19, 2013

MZI HealthCare Identifies Big Data Patient Productivity Gems Using HP Vertica

Transcript of a BriefingsDirect podcast on how a healthcare services provider has harnessed data analytics to help its users better understand complex trends and outcomes.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving their business performance for better access, use and analysis of their data and information. This time we’re coming to you directly from the recent HP Vertica Big Data Conference in Boston.

Our next innovation case study highlights how a healthcare solutions provider leverages big-data capabilities. We'll see how they've deployed the HP Vertica Analytics Platform to help their customers better understand population healthcare trends and identify how well healthcare processes are working.

To learn more about how high-performance, cost-effective big data processing forms a foundational element in improving overall healthcare quality and efficiency, please join me now in welcoming our guest, Greg Gootee, Product Manager at MZI Healthcare, based in Orlando. Welcome, Greg. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Greg Gootee: Hi. Thank you, Dana.

Gardner: Tell me a little bit about how important big data is turning out to be for how healthcare is administered. It seems like there's a lot of change going on in terms of how compensation will take place, and information analysis seems more important than ever.

Gootee: Absolutely. When you talk about change, change in healthcare is dramatic, maybe more dramatic than in any other industry. Other industries have been able to spread that change over time; in healthcare, it's being rapidly accelerated.

In the past, data had been stored in multiple systems and multiple areas on given patients. It's been difficult for providers and organizations to make informed decisions about that patient and their healthcare. So we see a lot of change in being able to bring that data together and understand it better.

Gardner: Tell us about MZI, what you do, who your customers are, and where you're going to be taking this big data ability in the future.

Gootee: MZI Healthcare has predominantly worked on the payer side. We have a product that's been on the market for over 25 years, helping with benefits administration for payers, independent physician associations (IPAs), and third-party administrators (TPAs).

Our customers have always had a very tough time bringing in data from different sources. A little over two years ago, MZI decided to look at how we could leverage that data to help our customers better understand their risk and their patients, and ultimately change the outcomes for those patients.

Predictive analysis

Gardner: I think that's how the newer regulatory environment is lining up in terms of compensation. This is about outcomes, rather than procedures. Tell us about your requirements for big data in order to start doing more of that predictive analysis.

Gootee: Think about how data has been stored in the past for patients across their continuum of care: as they went from facility to facility, and physician to physician, the data ended up spread far apart. It's been difficult even to understand how treatments are affecting a given patient.

I've talked a lot about my aunt in previous interviews. Last year, she went into a coma, not because the doctors weren't doing the right thing, but because they were unable to understand what the other doctors were doing.

She went to many specialists and took medication from each one of them to help with her given problem, but there was an interaction between the medications. They didn't even know if she'd come out of the coma.

These things happen every day. Doctors make informed decisions from their experience and the data that they have. So it's critical that they can actually see all the information that's available to them.

When we look at healthcare and how it's changing, for example with the Affordable Care Act, one of the main focuses is obviously cost. We all know that healthcare costs are growing at a rate that's just unsustainable, and while cost is the main focus, it's different this time.

We've done that before. In the Clinton Administration, we had a kind of HMO model, and it made a dramatic difference on cost. It was working, but it didn't give people a choice. There was no basis in outcomes, and the quality of care wasn't there.

This time around, that's probably the major difference. Not only are we trying to reduce cost, but we are trying to increase the care that's given to those patients. That's really vital to making the healthcare system a better system throughout the United States.

Gardner: Given the size of the data, its disparate nature, and the fact that more and more human data will be brought to bear, what were your technical requirements, and what was the journey you took in finding the right infrastructure?

Gootee: We had a couple of requirements that were critical. When we work with small and medium-size organizations, they really don't have the funds to put in a large system themselves. So our goal was to do something similar to what Apple did with the iPhone: take multiple things, put them into one place, and reduce the price point for our customers.

One of the critical things that we wanted to look at was overall price point. That included how we manage those systems and, when we looked at Vertica, one of the things that we found very appealing was that the management of that system is minimal.

High-end analytics

The other critical thing was speed: being able to deliver high-end analytics at the point of care, instead of two or three months later, and Vertica really produced. In fact, we did a proof of concept with them, and the speed at which some of those queries ran and returned data to us was almost unbelievable.

You hear things like that throughout the conference, no matter what volume people are working with. It's very good. Those were some of our requirements, and we were able to put it all in the cloud. We run in the Amazon cloud, and we're able to deliver that content to the people who need it, at the right time, at a really low price point.

Gardner: Let me also understand the requirement for concurrency. If you have this hosted on Amazon Web Services, you're opening it up to many different organizations and many different queriers. Is there an issue with the volume of queries happening simultaneously, with concurrency? Has that been something you've been able to work through?

Gootee: Absolutely. That's another value-add we get. The ability to expand and scale the Vertica system, along with the scalability we get from Amazon's services, allows us to deliver that information. No matter what type of queries we're getting, we can expand automatically. We can grow with the need, and it makes a large difference in how we can be competitive in the marketplace.

Gardner: I suppose another dynamic to this on the economic side is the predictability of your cost: given x data volume and x queries, I can predict, perhaps even linearly, what my cost will be. Is that the case for you? I know that in the past, many organizations didn't know what the costs were going to be until they got in, and it was too late.

Gootee: If you look at the traditional ways we've delivered software or content before, you always over-buy, because you don't know what demand is going to be. Then, at some point, you don't have enough resources to deliver. Cloud services take some of that unknown away. They let you scale as you need it and scale back when you don't.

So it's the flexibility for us. We're not a large company, and what's exciting about this is that these technologies help us do the same thing that the big guys do. It really lets our small company compete in a larger marketplace.
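
Gootee's over-buying point is simple arithmetic. A sketch with made-up prices and demand shows why elastic capacity beats provisioning for the peak:

```python
# Hypothetical monthly demand in TB and a made-up unit price.
monthly_demand_tb = [10, 11, 12, 14, 15, 16, 18, 20, 22, 25, 28, 32]
cost_per_tb_month = 40.0

# Fixed provisioning must cover the December peak for all twelve months.
fixed = max(monthly_demand_tb) * cost_per_tb_month * 12
# Elastic capacity pays only for each month's actual demand.
elastic = sum(monthly_demand_tb) * cost_per_tb_month

print(f"provisioned for peak: ${fixed:,.0f}")
print(f"elastic:              ${elastic:,.0f}")  # ~42% cheaper in this example
```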

Gardner: Going back to the population health equation and the types of data and information: we heard a presentation this morning and saw some examples of HP HAVEn, bringing together Hadoop, Autonomy, Vertica, and Enterprise Security, and creating applications on top of that. Is this something that's of interest to you? How important is the ability to get at all the information, in all the different formats, as you move forward?

Gootee: That's very critical for us. The way we interact in America and around the world has changed a lot. The HAVEn platform provides us with opportunities to improve on what we have, given healthcare's big security concerns and the issue of data mobility. Getting data anywhere is critical to us, as is better understanding how that data is changing.

We've heard from a lot of companies here that are really driving the user experience. More and more companies are going to compete on how they can deliver things to a user in the way that they like. That's critical to us, and the platform really gives us the ability to do that.

Gardner: Well great. I'm afraid we'll have to leave it there. We've been learning how a healthcare solutions provider has been leveraging big-data capabilities, and we've seen how they've deployed the HP Vertica Analytics platform to help customers better understand population healthcare trends, and also to identify how well healthcare processes are working.

So a big thank you to our guest, Greg Gootee, Product Manager at MZI Healthcare. Thanks, Greg.

Gootee: Thank you, Dana.

Gardner: And thanks also to our audience for joining us for this special HP Discover Performance Podcast coming to you directly from the recent HP Vertica Big Data Conference in Boston.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP Sponsored Discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Transcript of a BriefingsDirect podcast on how a healthcare services provider has harnessed data analytics to help its users better understand complex trends and outcomes. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
