
Thursday, February 26, 2015

RealTime Medicare Data Delivers Caregiver Trends Insights By Taming its Huge Healthcare Data Trove

Transcript of a BriefingsDirect podcast on how a healthcare data collection site met the challenge of increasing volumes by using HP tools.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

This time, we're coming to you directly from the recent HP Discover 2014 Conference in Las Vegas. We're here to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.
   
Our next innovation case study interview highlights how RealTime Medicare Data analyzes huge volumes of Medicare data and provides analysis to their many customers on the caregiver side of the healthcare sector.
Here to explain how they manage such large data requirements for quality, speed, and volume, we're joined by Scott Hannon, CIO of RealTime Medicare Data and he's based in Birmingham, Alabama. Welcome, Scott.

Scott Hannon: Thank you.

Gardner:  First, tell us a bit about your organization and some of the major requirements you have from an IT perspective.

Hannon: RealTime Medicare Data has full census Medicare, which includes Part A and Part B, and we do analysis on this data. We provide reports that are in a web-based tool to our customers who are typically acute care organizations, such as hospitals. We also do have a product that provides analysis specific to physicians and their billing practices.

Gardner:  And, of course, Medicare is a very large US government program to provide health insurance to the elderly and other qualifying individuals.

Hannon: Yes, that’s true.

Gardner: So what sorts of data requirements have you had? Is this a volume, a velocity, a variety type of the problem, all the above?

Volume problem

Hannon: It’s been mostly a volume problem, because we're actually a very small company. There are only three of us in the IT department, but it was just me as the IT department, back when I started in 2007.

At that time, we had one state, Alabama, and then we began to grow. We grew to seven states, which make up the South region: Florida, Georgia, Tennessee, Alabama, Louisiana, Arkansas, and Mississippi. We found that Microsoft SQL Server was not really going to handle the type of queries we ran at that volume of data.

Currently we have 18 states. We're loading about a terabyte of data per year, which is about 630 million claims, and our database currently houses about 3.7 billion claims.

Gardner: That is some serious volume of data. From the analytics side, what sort of reporting do you do on that data, who gets it, and what are some of their requirements in terms of how they get strategic benefit from this analysis?

Hannon: Currently, most of our customers are general acute-care hospitals. We have a web-based tool that has reports in it. We provide reports that start at the physician level. We have reports that start at the provider level. We have reports that you can look at by state.

The other great thing about our product is that typically providers have data on themselves, but they can't really compare themselves to the providers in their market or state or region. So this allows them to look not only at themselves, but to compare themselves to other places, like their market, the region, and the state.

Gardner: I should think that’s hugely important, given that Medicare is a very large portion of funding for many of these organizations in terms of their revenue. Knowing what the market does and how they compare to it is essential.

Hannon: Typically, for a hospital, about 40 to 45 percent of their revenue depends on Medicare. The other thing that we've found is that most physicians don't change how they practice medicine based on whether it’s a Medicare patient, a Blue Cross patient, or whoever their private insurance is.

So the insights that they gain by looking at our reports are pretty much 90 to 95 percent of how their business is going to be running.

Gardner: It's definitely mission-critical data then. So you started with a relational database, using standard off-the-shelf products. You grew rapidly, and your volume issues grew. Tell us what the problems were and what requirements you had that led you to seek an alternative.

Exponential increase

Hannon: There were a couple of problems. One, obviously, was the volume. We found that we had to increase the indexes exponentially, because we're talking about 95 percent reads here on the database. As I said, the Microsoft SQL Server really was not able to handle that volume as we expanded.

The first thing we tried was to move to an analysis services back end. For that project, we got an outside party to help us because we would need to redesign our front end completely to be able to query analysis services.

It just so happened that that project was taking way too long to implement. I started looking at other alternatives and, just by pure research, I happened to find Vertica. I was reading about it and thought "I'm not sure how this is even possible." It didn’t even seem possible to be able to do this with this amount of data.

So we got a trial of it. I started using it and was impressed that it actually could do what it said it could do.
Gardner: As I understand it, Vertica has the column store architecture. Was that something understood? What is it about the difference of the Vertica approach to data -- one that perhaps caught your attention at first, and how has that worked out for you?

Hannon: To me, the biggest advantage was the fact that it uses the standard SQL query language, so I wouldn't have to learn MDX, which is required with Analysis Services. I don't understand the complete technical details of column storage, but I understand that it's much faster and that it doesn't have to look at every single row. It can build the actual dataset much faster, which gives you much better performance on the front end.
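The intuition behind that columnar speedup can be sketched in a few lines. This is a toy model, not Vertica's actual storage engine; it only shows that a query touching one column out of many reads far fewer values in a columnar layout than in a row layout (the claims-table column names are illustrative):

```python
# Toy model of row storage vs. column storage (not Vertica internals).
# A claims table with six columns, and a query that sums only 'paid'.
rows = [
    {"claim_id": i, "state": "AL", "provider": "P1",
     "drg": 470, "paid": 100.0, "year": 2014}
    for i in range(100_000)
]

# Row store: scanning one column still drags every cell of every row off disk.
row_store_cells_read = len(rows) * len(rows[0])

# Column store: only the 'paid' column is read.
paid_column = [r["paid"] for r in rows]
column_store_cells_read = len(paid_column)

total_paid = sum(paid_column)
print(row_store_cells_read // column_store_cells_read)  # prints 6
```

With six columns, the columnar scan touches one sixth of the data; real tables with dozens of columns, plus per-column compression, widen the gap further.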

Gardner: And what sort of performance have you had?

Hannon: Typically, we've seen about a tenfold reduction in query times. Before, when we would run reports, it would take about 20 minutes. Now, they take roughly two minutes. We're very happy about that.

Gardner: How long has it been since you implemented HP Vertica and what are some of supporting infrastructures that you've relied on?

Hannon: We implemented Vertica back in 2010. We ended up still utilizing Microsoft SQL Server as a querying agent, because it was much easier to keep interfacing with SQL Server Reporting Services, which is what our web-based product uses, and to keep using the stored procedure functionality and the OPENQUERY feature.

So we just pull the data directly from Vertica and then send it through Microsoft SQL Server to the reporting services engine.

New tools

Gardner: I've heard from many organizations that not only has this been a speed and volume issue, but there's been an ability to bring new tools to the process. Have you changed any of the tooling that you've used for analysis? How have you gone about creating your custom reports?

Hannon: We really haven't changed the reports themselves. It's just that I know when I design a query to pull a specific set of data that I don’t have to worry that it's going to take me 20 minutes to get some data back. I'm not saying that in Vertica every query is 30 seconds, but the majority of the queries that I do use don’t take that long to bring the data back. It’s much improved over the previous solution that we were using.

Gardner: Are there any other quality issues, other than just raw speeds and feeds issues, that you've encountered? What are some of the paybacks you've gotten as a result of this architecture?

Hannon: First of all, I want to say that I didn’t have a lot of experience with Unix or Linux on the back end and I was a little bit rusty on what experience I did have. But I will tell people to not be afraid of Linux, because Vertica runs on Linux and it’s easy. Most of the time, I don’t even have to mess with it.

So now that that's out of the way, one of the biggest advantages of Vertica is the fact that you can expand to multiple nodes to handle the load if you've got a larger client base. It's very simple. You basically just install it on commodity hardware, running whatever flavor of Unix or Linux you prefer, as long as it's compatible, and the installation does all the rest for you, as long as you tell it you're doing multiple nodes.

The other thing is the fact that you have multiple nodes that allow for fault tolerance. That was something that we really didn't have with our previous solution. Now we have fault tolerance and load balancing.
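Vertica's actual mechanism here (K-safety with buddy projections) is more involved, but the core idea can be illustrated with a toy sketch: if each node also holds a replica of a neighbor's data segment, every segment stays reachable after any single node failure.

```python
# Toy sketch of single-node fault tolerance via neighbor replicas
# (loosely inspired by Vertica's K-safety; not its real implementation).
nodes = ["node1", "node2", "node3"]
segments = {"node1": ["A"], "node2": ["B"], "node3": ["C"]}

# Each node stores its own segment plus a copy of the previous node's segment.
storage = {}
for i, node in enumerate(nodes):
    buddy = nodes[i - 1]  # wraps around: node1's buddy is node3
    storage[node] = segments[node] + segments[buddy]

def reachable_segments(failed_node):
    """Segments still readable when one node is down."""
    live = [n for n in nodes if n != failed_node]
    return {s for n in live for s in storage[n]}

# Losing any single node leaves the full data set reachable.
for node in nodes:
    assert reachable_segments(node) == {"A", "B", "C"}
print("all segments survive any single-node failure")
```

The same replica placement also enables the load balancing Hannon mentions, since two nodes can serve reads for any given segment.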

Gardner: Any lessons learned, as you made this transition from a SQL database to a Vertica columnar store database? You even moved the platform from Windows to Linux. What might you tell others who are pursuing a shift in their data strategy because they're heading somewhere else?

Jump right in

Hannon: As I said before, don’t be afraid of Linux. If you're a Microsoft or a Mac shop, just don’t be afraid to jump in. Go get the free community edition or talk to a salesperson and try it out. You won't be disappointed. Since the time we started using it, they have made multiple improvements to the product.

The other thing that I learned was that with OPENQUERY, there are specific ways that you have to write the stored procedures. I like to call it "single-quote hell," because when you write OPENQUERY and you have to quote something, there are a lot of additional single quotes that you have to put in there. I learned that there was a second way of doing it that lessened that impact.
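The quoting problem comes from OPENQUERY taking the remote query as a single T-SQL string literal, so every single quote inside the Vertica query must be doubled. This hypothetical sketch (the linked-server name VERTICA and the claims table are made up) builds such a statement in Python just to show the escaping:

```python
# Hypothetical illustration of "single-quote hell" with OPENQUERY.
# The linked server VERTICA, table claims, and column names are made up.
def to_tsql_literal(query: str) -> str:
    """Wrap a query as a T-SQL string literal, doubling inner single quotes."""
    return "'" + query.replace("'", "''") + "'"

vertica_query = "SELECT provider FROM claims WHERE state = 'AL'"
openquery = "SELECT * FROM OPENQUERY(VERTICA, " + to_tsql_literal(vertica_query) + ")"
print(openquery)
# prints: SELECT * FROM OPENQUERY(VERTICA, 'SELECT provider FROM claims WHERE state = ''AL''')
```

Each level of nesting doubles the quotes again, which is why hand-writing these stored procedures gets painful fast; building the query string in a variable first, as Hannon hints, keeps only one level of escaping in view at a time.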

Gardner: Okay, good. And we're here at HP Discover. What's interesting for you to learn here at the show and how does that align with what your next steps are in your evolution?

Hannon:  I'm definitely interested in seeing all the other capabilities that Vertica has and seeing how other people are using it in their industry and for their customers.
I'm definitely interested in seeing all the other capabilities that Vertica has and seeing how other people are using it in their industry and for their customers.

Gardner: In terms of your deployment, are you strictly on-premises for the foreseeable future? Do you have any interest in pursuing hybrid or cloud-based deployments for any of your data services?

Hannon: We actually use a private cloud, which is hosted at TekLinks in Birmingham. We've been that way ever since we started, and that seems to work well for us, because we basically just rent rack space and provide our own equipment. They have the battery backup, power backup generators, and cooling.

Gardner: How about backup and recovery? How were those issues managed for you?

Hannon: We have multiple copies of it on multiple server systems and we also do cloud backup.

Gardner: I see. So you've got a separate location in the cloud that you use, should something unfortunate happen.

Hannon: Correct.

Gardner: So a good insurance for a Medicare insurance database.

Hannon: Absolutely.

Gardner: Okay. We’ll leave it there. Please join me in thanking our guest. We've been talking about how RealTime Medicare Data is managing a huge volume of data and providing analysis to care providers in 18 states in the US.
So a big thank you to Scott Hannon, CIO at RealTime Medicare Data in Birmingham, Alabama. Thanks.

Hannon: Thank you, Dana.

Gardner: And thanks also to our audience for joining us for this special new style of IT discussion coming to you directly from the recent HP Discover 2014 Conference in Las Vegas.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a healthcare data collection site met the challenge of increasing volumes by using HP tools. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.



      Tuesday, February 03, 2015

      HP Vertica Enables Rapid Matching of Consumer Inferences to Ads at Huge Scale for adMarketplace

      Transcript of a BriefingsDirect podcast on how big data and data analytics combine to instantly match search users with ads appropriate to their interests.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

      Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.
      This time, we're coming to you directly from the recent HP Big Data 2014 Conference in Boston. We're here to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

      Our next innovation case study interview highlights how adMarketplace is using big data to improve its search advertising capabilities for its customers. [See part one of this two-part series.] We'll learn more about search for advertising and how big data plays a role in that in our discussion with our guest, Raj Yakkali, Director of Data Infrastructure at adMarketplace in New York. Welcome, Raj.

      Raj Yakkali: Hello, Dana. Very nice to be here, and thank you.

      Gardner: Good to have you here. Tell us a little bit about adMarketplace. What do you do, and why is big data such a big part of that?

      Yakkali: adMarketplace is a leading advertising platform for search intent. We provide advertisers with the consumer space where they can project their ads. The benefit of adMarketplace comes into play where we provide a data platform that can match those ads with the right user intent.

      When a user searches for a certain keyword, they're directly telling us what they want to see, and we match it with our ads. The relationship that we have with our advertisers is that we match them well with exactly what the user is thinking. We do some predictive analytics on top of what the user is saying. We add that dimension to the user's search and provide ads aptly.

      Gardner: I'm all for getting better ads based on a lot of things I already get. Do you have more than just keywords in terms of how you can draw inference, and what sort of scale of data are we talking about when it comes to all that inference information about intent on behalf of the consumer?

      15 dimensions

      Yakkali: Keyword search is one side or one dimension of the user search. There are also category campaigns that the advertisers are running. At the same time, there's a geospatial analysis to it as well. There are 15 dimensions that we go through to provide an ad that is perfectly fit for the advertiser and for the consumer to see and take advantage of to meet their needs. With some of the ads, we are trying to serve the user’s requirements and needs.

      Gardner: With all these variables, this sounds like you're going to be gathering an awful lot of information. You also need to reply back with your results very fast, or you lose the opportunity for that consumer to get the ad and then even click through and make a decision. Tell me about scale and speed.

      Yakkali: You're right on with that question. In this business, latency is your enemy. If you look at the metrics, there are almost half a billion requests that we're receiving every day, and we have to match all of those ads with sub-second performance. We have internal proprietary datasets, which we take care of before matching these ads. And there are two platforms that we've built internally.

      One is called Bid Smart. That performs the analysis between the user intent and the traffic sources that the user search is coming from. At the same time, the price of that ad goes to the publisher. There are the pricing strategies, the traffic sources, and the user intent of the search. All of these things are put together. That predictive analytics system gathers all this information and emits the right ad towards the consumer.

      On top of that, if you look at the amount of data, those half a billion requests coming into our system generate around two terabytes per hour. At certain times, we can't store all of it for analytics; there is a lot of data that's not inside the database. Now, with the partnership with Vertica, we're able to take the dataset, derive analytics from it, and provide our marketers with all that information. Bid Smart is the one that does the pricing and matching.

      The other thing is Advertiser 3D, which provides detailed analytics into all these dimensions and metrics. That provides very good insight. Now, when it comes to the competition, or the opportunity to deliver the right ad at the right time, that's where data workflows make a difference.

      We utilize Vertica to directly stream all this click data into it, rather than going into certain other locations and then doing it in a batch format. We directly live-stream that data into Vertica, so that it is readily available for analytics. Our Bid Smart System makes use of that dataset. That's where we get the opportunity to deliver much better ads, with price tags, and the right user intent matched.
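A toy contrast (not adMarketplace's actual pipeline) makes the payoff of live-streaming concrete: a click streamed directly into the analytics store is queryable immediately, while a click staged for a periodic batch load waits until the next load runs.

```python
# Toy contrast of streaming vs. periodic batch loading of click data
# (illustrative only; not the real adMarketplace or Vertica pipeline).
BATCH_INTERVAL = 60  # seconds between batch loads

def availability_delay(arrival_time: int, streaming: bool) -> int:
    """Seconds until a click arriving at arrival_time becomes queryable."""
    if streaming:
        return 0  # live-streamed rows are available immediately
    # Batch: wait until the next multiple of BATCH_INTERVAL.
    return (BATCH_INTERVAL - arrival_time % BATCH_INTERVAL) % BATCH_INTERVAL

clicks = [5, 17, 59, 61, 119]
stream_delays = [availability_delay(t, streaming=True) for t in clicks]
batch_delays = [availability_delay(t, streaming=False) for t in clicks]
print(stream_delays)  # prints [0, 0, 0, 0, 0]
print(batch_delays)   # prints [55, 43, 1, 59, 1]
```

When bids must be priced in sub-second time, those tens of seconds of staleness in the batch path are exactly the latency the Bid Smart system cannot afford.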

      Gardner: It sounds very complex. There's an awful lot going on just to serve up an ad. I suppose people don't appreciate that, but the economics here are very compelling: the more refined and appropriate an ad can be, the more likely the consumer is to buy, and a lot of resources don't get wasted in the meantime. Do you have any sense of what the payoff is, either in business, financial, or technical terms, when you can really accomplish this goal of targeted advertising?

      Conversion rate

      Yakkali: Our conversion rate is a major key performance indicator (KPI) when it comes to understanding how well we're doing. When we see higher conversion rates, that gives us the sense that we've done the best job and the user is happy with what they're searching for and what they're getting.

      At the same time, the publishers, as well as the advertisers, are happy, because the user is coming to us again and again to get that same great experience. The advertisers are able to sell more products that meet the needs of the user. And the users are able to get the product that really caters to their needs. We're in the middle of all these things, trying to facilitate for the advertisers, as well as the users and the publishers.

      Gardner: I daresay this is the very future of advertising. Now for you to accomplish these goals and create those positive KPIs, are you housing Vertica in your own data center, do you use cloud, hybrid cloud? Given that you have different platforms, different datasets, how do you manage this technically?

      Yakkali: On that end, we started with testing cloud two or three years ago, but again, it turned out that because of so many unknowns and troubleshooting, we had to go with our data centers. Now, we host all our systems in our own data centers and we manage it.

      We have our own hardware to deal with. Our system runs 24/7, and we have to be able to deliver sub-second latency. Having your own infrastructure, you have a controlled environment where you can tweak and tune your system to get the best performance out of it.
      Considering that it's 24/7, there are fewer excuses you can get away with for not delivering. For that, we innovate in terms of the data flows and the process of how we ingest the data, how we process the data, how we emit the data, and how we clean up the data when we don't really need it.

      All these things have to come together, and it really helps us having that control on all of our infrastructure and all elements in the data pipeline, starting from the user intent and user search, until we provide the data and the results.

      Gardner: How long have you been using Vertica in this regard and how did you go about making that decision?

      Yakkali: We've been using Vertica for four to five years. Our data pipeline was not on Vertica to start with, but as Vertica came into the picture, we saw the powerful features that it brings.

      That really helped us. With Vertica in place, we have been migrating our mechanics slowly to use it for real-time analysis and real-time bidding and all those beautiful features that let us do what we do better. So it's been a great partnership with Vertica, and we see many more features coming with the new version. Our Bid Smart mechanism is also improving, and with that, its algorithmic capabilities are increasing. So it's progressing.

      Feedback loop

      Gardner: Tell us a little bit about where your business is heading. In addition to speed, complexity, and scale, where do you see the ability to create this feedback loop? It's a very rapid feedback loop between a lot of incoming data and an action like serving up an ad. It seems like this could be applied to other marketing or advertising chores, or perhaps even have an ancillary business-development direction. You've got this platform and these data centers. Is there something else that you're gearing up for?

      Yakkali: At this point, we're in the business of connecting the advertisers, the publishers, and the users. But that business is untapped relative to what it can accomplish. The market has only started down the path to that point. If we take a step back and try to understand it: initially, when search started, there was no Google or anything. It was more about curated search.

      So the publishers put all this content together and then projected it out to the user. They didn't know what the user wanted. At the same time, when the user looked at this content, they didn't know whether they wanted it or whether it catered to their needs.

      Then, Google came along and user search started. What that directly told was "I want this piece of information. I want to use this piece of information. And I want to see this ad that is relevant to my needs." That’s a very powerful thing. When you hear that part, you're able to analyze that piece and match it properly with the advertisers. But then again, it started to fragment.

      Now, it's not only Google. There are Yahoo and Bing, there is mobile, and there are certain apps. There are many apps in the mobile space, and each one has its own search. So not all searches are going to Google, Yahoo, or Bing. Search is already fragmented.

      We tap all those pieces: the market beyond Google, Yahoo, and Bing. That market is strong and it is growing, so there is a lot of market that needs to be tapped into. We come in by connecting the advertisers to that untapped marketplace.

      We've been improving our internal Bid Smart algorithm that came out in the last year. Then, we also launched Advertiser 3D last year as well. Those two products have been providing tremendous growth in our revenue, and the retention rates have been stellar.

      The top 60 percent of Google's top spenders are working with us to complement their business. At the same time, we're also able to provide a 50 percent increase in year-over-year revenues. It's additional revenue for them, and even our revenues are increasing based on that fact.

      Gardner: It seems like you have an awful lot of runway ahead of you in terms of where search could be applied, and analytics can be drawn from that to augment these services and explode that market.

      Is Vertica being used just for the intercept between the incoming data and the outgoing ad, or are you also analyzing what goes on within these marketplaces so that you better appreciate whether you can offer reports, audit trails, and that sort of thing? Is this an inclusive platform, or do you use different analytics platforms for different aspects of what you're doing?

      End to end

      Yakkali: We do almost everything. It is an end-to-end platform. As part of the business, we look into the operational metrics of the whole thing, starting from the user search until the ad is delivered. Then, from that end, there is always that analytics piece that comes into play, which provides insights to the marketers.

      Our market base is filled with very data-savvy marketers, and they look into each and every data dimension to understand their return on investment (ROI). We give them transparency through our Advertiser 3D system, and utilizing that, they're able to navigate through the space and aptly tune their campaigns to get the best out of them and to deliver the best to the customer.

      Gardner: Any thoughts about other organizations that are also facing significant challenges around speed and scale, perhaps also with a big runway, in terms of knowing that more and more business, and therefore more data, could be coming their way? What would you advise them in terms of the data architecture or the planning needed to accomplish those goals?

      Yakkali: When we look at the industries and the market, the ad industry is still untapped. The healthcare industry is just getting into the business of doing much more with analytics. It's all about the speed and the latency, and the insights as well: one at the operational level, and the other at the insight level, to do more innovation on top of it.

      The ability to listen to the customer depends on how fast you can capture all that feedback, and you tighten that loop of feedback so that you're able to do something with it and make a better product out of it.

      So it’s all about taking a look at the datasets very closely as to what they mean, what the user is asking us, what do they want to see, and how you are listening to the customer. Those two aspects really make the difference.

      You want to listen to the customer, to what they really want. Are you providing it, and are you able to guess what they'll want tomorrow? That is the predictive phase, before moving into the prescriptive analytics phase later on, where you're telling them what they need to do even before they tell you.

      That's the stage that the market is moving towards. We're not even scratching the surface of prescriptive analytics; the wave has not yet started in that direction. We're still at the predictive analytics phase, and there is still a lot more to go within that space. Getting the foundation stronger, driving towards prescriptive analytics, and listening to your customer are the three aspects that would carry any industry. Those three would be the key foundational pieces for innovation.
      Gardner: Thanks so much. We've been learning about how adMarketplace is using big data to perform some very complex marketing activities for their advertisers, matching intent from a customer with an ad that suits their needs, based on an ever-growing amount of data and inference. [See part one of this two-part series.] I'd like to thank our guest. We've been joined by Raj Yakkali, Director of Data Infrastructure at adMarketplace in New York. Thanks so much.

      Yakkali: Thank you very much, Dana. It was a pleasure talking to you.

      Gardner: And I'd like to thank our audience as well, for joining us for the special new style of IT discussion, coming to you directly from the recent HP Big Data 2014 Conference in Boston.

      I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP sponsored discussions. Thanks again for listening, and come back next time.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Transcript of a BriefingsDirect podcast on how big data and data analytics combine to instantly match search users with ads appropriate to their interests. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


      Thursday, January 22, 2015

      adMarketplace Solves Search Intent Challenge with HP Vertica Big Data Warehouse Solution

      Transcript of a BriefingsDirect podcast on how a consumer intent search company is able to handle massive amounts of data and analyze it quickly with HP Vertica.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

      Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

      This time, we're coming to you directly from the recent HP Discover 2014 Conference in Las Vegas. We're here to learn directly from IT and business leaders alike how big data, cloud, and converged infrastructure implementations are supporting their goals.

      Our next innovation case study interview explores how New York-based adMarketplace, a search syndication advertising network, has met its daunting data-warehouse requirements.
      We'll learn how adMarketplace captures and analyzes massive data to allow for efficient real-time bidding for traffic sources for online advertising. And we'll hear how the data-analysis infrastructure also delivers rapid cost-per-click insights to advertisers.

      To learn more about how adMarketplace manages its big-data challenges, please join me in welcoming Michael Yudin, the Chief Technology Officer at adMarketplace.

      Michael Yudin: Hello. Thank you, Dana.

      Gardner: Tell us first about what adMarketplace does. It sounds very interesting, but I'm not sure I fully understand it.

      Yudin: Well, adMarketplace is the leading marketplace for search intent advertising, and let me explain what that means. Search advertising is the best form of advertising ever invented. For the first time, a consumer actually tells a computer what they're interested in. That’s why Google became so successful as a search engine.

      Yudin
      Some things are changing in the marketplace these days. Consumer search intent is fracturing. You probably wonder what this means. It’s very simple. What this means is Google is no longer the only place you go to search for stuff.

      I'll give you an example. Last night, I was looking for a Brazilian steakhouse here in Las Vegas. I didn't go on google.com. I opened my iPhone and I fired up a yellow pages (YP) app and I entered "Brazilian steakhouse" in the search box.

      There are a variety of apps in my phone like that for travel, sports, news, and various other things I'm interested in. Anytime I search there, I don’t go to google.com. Consumer search has really fractured and adMarketplace has solved the monetization problem for that.

      Providing value

      Gardner: So when people are searching in areas other than say Google or Yahoo, how does your organization intercept with that and how does that provide value to both the consumer that’s searching and advertisers that want to provide them information?

      Yudin: It benefits both the consumer and the advertiser. In the search world, an ad is really nothing more than a search result in response to a user’s query. That’s why it’s so great.

      Our clients are the Internet's largest marketers and brands. They use adMarketplace to acquire additional customers in addition to the other marketing channels like Google, where they are pretty much already maxed out.

      There are only so many searches that happen in Google and they're declining. So advertisers are looking for new ways to capture consumer intent and to convert this into sales and measurable return on investment (ROI), and that's what we do for them.

      Gardner: Of course, a really important thing here is to match properly, and that requires data and analysis -- and it requires speed. Tell us a little about the requirements. How do you do this technically?

      Yudin: You just nailed it. This is a very, very big data problem and it has to be solved at scale and fast. And it’s also a 24x7 problem. We can never take our system down. We have a global business, and anytime you go and you search for something as a consumer, you expect to see the result right away.
      Our network handles about half a billion search queries per day and this results in about two terabytes of data per hour constantly generated by our platform, across multiple data centers. We needed a very scalable and robust analytical data warehouse solution that could handle this. Two years ago, we evaluated a number of vendors and settled on HP Vertica, which was best able to satisfy our tough requirements.
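To make that scale concrete, here is a quick back-of-envelope check of the figures just quoted. The arithmetic, and the assumption of binary terabytes, is illustrative and not from the transcript:

```python
# Figures quoted above: ~500 million search queries/day, ~2 TB of data/hour.
queries_per_day = 500_000_000
qps = queries_per_day / 86_400                 # average queries per second
bytes_per_hour = 2 * 1024**4                   # 2 TB/hour, binary units assumed
queries_per_hour = queries_per_day / 24
bytes_per_query = bytes_per_hour / queries_per_hour
print(f"~{qps:,.0f} queries/sec, ~{bytes_per_query / 1024:.0f} KiB generated per query")
# prints: ~5,787 queries/sec, ~103 KiB generated per query
```

In other words, the platform averages nearly six thousand queries a second around the clock, each leaving roughly a hundred kilobytes of data behind.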

      Gardner: And are these requirements primarily about the scale and volume, or are we talking also about a need for rapid query, or all the above? Give us a bit more insight into the actual requirements for your network?

      Yudin: That's a great question, and I think this is what makes Vertica unique. There are products out there that can store a lot of data, but you can't get this data out of these solutions quickly and at high concurrency. We require a system that can ingest large amounts of data constantly. I am talking about terabytes and terabytes of data. This data has to be queryable right away, with very low latency requirements.

      Some of our queries for Advertiser 3D and analytical dashboard are preplanned queries obviously, but they are very big data queries and the service-level agreement (SLA) on these queries is two seconds. Very few products can do that. Some queries are obviously more complex, but we're still talking about seconds and not hours.
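Two-second SLAs on big analytical queries are typically met by doing the heavy work ahead of time. Vertica handles this internally with projections; as a rough, stdlib-only illustration of the same pre-aggregation idea (the event data and advertiser names below are invented):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw click events: (timestamp, advertiser_id, cost_usd).
events = [
    (datetime(2014, 6, 10, 9, 15), "adv-1", 0.42),
    (datetime(2014, 6, 10, 9, 40), "adv-1", 0.30),
    (datetime(2014, 6, 10, 10, 5), "adv-2", 1.10),
]

# Roll raw events up into (hour, advertiser) buckets ahead of time, so the
# dashboard's "preplanned query" scans a few buckets instead of raw rows.
hourly = defaultdict(lambda: {"clicks": 0, "spend": 0.0})
for ts, advertiser, cost in events:
    bucket = (ts.replace(minute=0, second=0, microsecond=0), advertiser)
    hourly[bucket]["clicks"] += 1
    hourly[bucket]["spend"] += cost

# The dashboard query is now a cheap scan over pre-aggregated buckets.
adv1_clicks = sum(v["clicks"] for k, v in hourly.items() if k[1] == "adv-1")
print(adv1_clicks)  # prints 2
```

The aggregate table grows with hours and advertisers rather than with clicks, which is what keeps the preplanned dashboard queries inside a fixed latency budget.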

      Concurrency requirement

      On top of this, there's a concurrency requirement and that’s a very big weak spot of a lot of products. Vertica is actually able to provide sufficient concurrency, and it’s never enough.

      I do know that there's an upcoming release of Vertica 7, where this is going to be improved even further, but it’s quite acceptable right now. And it has to be fault tolerant, which means that it should be able to sustain a hardware failure on any of its nodes -- and it can do that.
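The fault tolerance described here is what Vertica calls K-safety: each segment of data is stored on K+1 nodes, so the cluster survives K simultaneous node failures. A toy placement sketch, with invented node names and a simplified round-robin layout:

```python
# Toy placement sketch; node names and the round-robin layout are invented.
def place_segments(nodes, n_segments, k=1):
    """Assign each data segment to k+1 nodes (the K-safety idea)."""
    placement = {}
    for seg in range(n_segments):
        primary = seg % len(nodes)
        placement[seg] = [nodes[(primary + r) % len(nodes)] for r in range(k + 1)]
    return placement

nodes = ["node-a", "node-b", "node-c"]
placement = place_segments(nodes, n_segments=6, k=1)
# With k=1 every segment lives on two nodes, so losing any single node
# still leaves a complete copy of the data somewhere in the cluster.
print(placement[0])  # prints ['node-a', 'node-b']
```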

      Gardner: Tell us a bit about where you've built Vertica in terms of data centers. Are they your own? Do you have managed service providers? How are you managing your infrastructure that supports Vertica and then therefore your data processes?

      Yudin: We own our own infrastructure. So these are not managed services. We actually once used managed services, but we've outgrown them. And Vertica runs on dedicated hardware.

      We also have several other Vertica clusters that run on virtualized hardware, and even though it’s dedicated infrastructure, it’s really dedicated at the cloud level now. So call it private cloud. It's a buzzword. It's a mix of dedicated and virtualized. It's elastic scaling.

      Gardner: And the transition. You mentioned that two years ago, you were searching for a product. How were you able to bring this on board and what sort of growth have you had as a result -- in terms of data volume, but also in your business, in terms of customers and overall business metrics of growth?

      Yudin: This was driven by business requirements. We didn’t just decide that we needed this. So we started to undertake a very, very ambitious project -- Advertiser 3D. If you go to our website, www.admarketplace.com, you can read more about it.

      This is a very elegant, simple, and yet powerful, system to match and price traffic across a multitude of traffic sources. To deliver this product, we didn’t have a choice. We had to have a powerful analytical back-end data warehouse. That's when we started to evaluate products and chose Vertica.

      Gardner: And have there been any other benefits of going to Vertica in terms of being able to increase the number of features, or have you been able to leverage the technology in new business opportunities in terms of what you can offer your customers, not just to have met the requirements, but perhaps whole new types of benefits?
      Heavy lifting

      Yudin: Definitely. Our customers don’t know and don’t even care that we use Vertica on the back end. That’s probably why we won an HP award, because we integrated it into our overall solution very elegantly and seamlessly, but it obviously does a lot of heavy lifting on the back end.

      And the project was successful and transformed our business. Our growth rates have accelerated over 50 percent on our core revenue and performance. Data-savvy marketers among our clients started to see significant double-digit improvements in ROI.

      Gardner: As Chief Technology Officer there, you've gone through a fairly significant change in your infrastructure and adoption, as you've just described. Looking back, are there any lessons learned that you could offer to others who are also running into a wall with their data infrastructure or looking for alternatives? Any thoughts on how you would advise them to make the transition?

      Yudin: Definitely. The number one piece of advice I would give anybody is don’t believe anything until you do two things: try it yourself, and get references from people who actually use it and whom you trust. That's very important.

      Gardner: Well, great. We've been talking about how adMarketplace captures and analyzes massive data to allow for efficient real-time bidding for traffic sources for online advertising.

      I would like to thank our guest, Michael Yudin, the Chief Technology Officer at adMarketplace. Thanks so much.

      Yudin: Thank you, Dana. My pleasure.

      Gardner: And I also want to thank our audience as well for joining us for this special new style of IT discussion coming to you directly from the recent HP Discover 2014 Conference.

      I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Transcript of a BriefingsDirect podcast on how a consumer intent search company is able to handle massive amounts of data and analyze it quickly with HP Vertica. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

      You may also be interested in:

      Monday, December 01, 2014

      Hortonworks Accelerates the Big Data Mashup between Hadoop and HP Haven

      Transcript of a BriefingsDirect podcast on how companies are beginning to capture large volumes of data for past, present and future analysis capabilities.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

      Gardner
      Once again, we're focusing on how companies are adapting to the New Style of IT to improve IT performance, gain new insights, and deliver better user experiences — as well as better overall business results.

      This time, we're coming to you directly from the recent HP Big Data 2014 Conference in Boston to learn directly from IT and business leaders alike how big data changes everything … for IT, for businesses and governments, as well as for you and me.

      Our next innovation interview highlights how Hortonworks is now working with HP on the management of very large datasets. We'll hear how these two will integrate into more of the HP Haven family, but also perhaps into the cloud, and to make it easier for developers to access business intelligence (BI) as a service.
      To learn more about these ongoing big data trends, we are joined by Mitch Ferguson, Vice President of Business Development at Hortonworks. Welcome, Mitch.

      Mitch Ferguson: Thank you, Dana. Pleasure to be here.

      Gardner: We’ve heard the news earlier this year about HP taking a $50-million stake in Hortonworks, and Hortonworks' IPO plans. Please fill us in little bit about why Hortonworks and HP are coming together.

      Ferguson: There are two core parts to that answer. One is that the majority of Hadoop came out of Yahoo. Hortonworks was formed when the major Hadoop engineers at Yahoo moved over, all in complete cooperation with Yahoo, to help evolve the technology faster. We believe the ecosystem around Hadoop is critical to the success of Hadoop and critical to the success of how enterprises will take advantage of big data.

      Ferguson
      If you look at HP, a major provider of technology to enterprises at the compute and storage level, as well as at the data management, analytics, and systems management levels, the complementary nature of Hadoop in a modern data architecture, combined with the HP hardware and software assets, provides a very strong foundation for enterprises to create the next-generation modern data architecture.

      Gardner: I'm hearing a lot about the challenges of getting big data into a single set or managing the large datasets.

      Users are also trying to figure out how to migrate from SQL or other data stores into Hadoop and into HP Vertica. It’s a challenge for them to understand a roadmap. How do you see these datasets as they grow larger, and we know they will, in terms of movement and integration? How is that path likely to unfold?

      Machine data

      Ferguson: Look at the enterprises that have been adapting Hadoop. Very early adopters like eBay, LinkedIn, Facebook, and Twitter are generating significant amounts of machine data. Then we started seeing large enterprises, aggressive users of technology adopt it.

      One of the core things is that the majority of data being created every day in an enterprise is not coming from traditional enterprise resource planning (ERP), customer relationship management (CRM), or financial management systems. It's coming from sources like clickstream data, log data, or sensor data. The reason there is so much interest in Hadoop is that it allows companies to cost-effectively capture very large amounts of data.

      Then you begin to understand patterns across semi-structured, structured, and unstructured data and to glean value from it. Enterprises then leverage that data in other technologies like Vertica and analytics tools, or in applications, or they move the data back into the enterprise data warehouse.
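As a small illustration of that point, turning semi-structured machine data into something queryable is often just a parsing step. The log format below is invented for the example:

```python
import re

# Invented web-server log lines standing in for the machine data described
# above (clickstream, logs, sensors); the format is hypothetical.
raw = [
    "2014-11-20T10:01:02 GET /product/17 200",
    "2014-11-20T10:01:05 GET /product/17 200",
    "2014-11-20T10:01:09 POST /cart 500",
]

# A parsing step turns semi-structured lines into structured, queryable records.
pattern = re.compile(r"(\S+) (\S+) (\S+) (\d+)")
records = [
    {"ts": ts, "method": method, "path": path, "status": int(status)}
    for ts, method, path, status in (pattern.match(line).groups() for line in raw)
]

# Once structured, "gleaning value" becomes ordinary querying, e.g. error counts.
errors = [r for r in records if r["status"] >= 500]
print(len(errors))  # prints 1
```

The same pattern scales up: capture the raw lines cheaply in Hadoop first, decide later which fields matter, then feed the structured result to analytics engines such as Vertica.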

      As a major player in this Hadoop market, one of the core tenets of the company was that the ecosystem is critical to the success of Hadoop. So, from day one, we’ve worked very closely with vendors like Microsoft, HP, and others to optimize how their technologies work with Hadoop.

      SQL has been around for a long time. Many people and enterprises understand SQL. That's a critical access mechanism to get data out of Hadoop. We’ve worked with both HP and Microsoft. Who knows SQL better than anyone? Microsoft. We're trying to optimize how SQL access to Hadoop can be leveraged by existing tools that enterprises know about, analytics tools, data management tools, whatever.

      That's just one way that we're looking at leveraging existing integration points or access mechanisms that enterprises are used to, to help them more quickly adopt Hadoop.

      Gardner: But isn’t it clear that what happens in many cases is that they run out of gas with a certain type of database and that they seek alternatives? Is that not what's driving the market for Hadoop?

      Ferguson: It's not that they're running out of gas with an enterprise data warehouse (EDW) or relational database. As I said earlier, it's the sheer amount of data. By far, the majority of data is not coming from those traditional ERP, CRM, or transactional systems. As a result, a technology like Hadoop is optimized to allow an enterprise to capture very, very large amounts of that data.

      Some of that data may be relevant today. Some of that data may be relevant three months or six months from now, but if I don't start capturing it, I won't know. That's why companies are looking at leveraging Hadoop.

      Many of the earlier adopters are looking at leveraging Hadoop to drive a competitive advantage, whether they're providing a high level of customer service, doing things more cost-effectively than their competitors, or selling more to their existing customers.

      The reason they're able to do that is because they're now being able to leverage more data that their businesses are creating on a daily basis, understanding that data, and then using it for their business value.

      More than size

      Gardner: So this is an alternative for an entirely new class of data problem for them in many cases, but there's more than just the size. We also heard that there's interest in moving from a batch approach to a streaming approach, something that HP Vertica is very popular around.

      What's the path that you see for Hortonworks and for Hadoop in terms of allowing it to be used in more than a batch sense, perhaps more toward this streaming and real-time analytics approach?

      Ferguson: That movement is under way. Hadoop 1.0 was very batch-oriented. We're now in 2.0, which is not only batch, but also interactive and real-time, and there's a common layer within Hadoop that Hortonworks was very influential in evolving. It's called YARN. Think of it as a data operating system that is part of Hadoop and sits on top of the file system.

      Via YARN, applications reach Hadoop through access mechanisms suited to their workloads, whether batch-oriented, interactive, or real-time, such as streaming or Spark. Those payloads or applications, when they leverage Hadoop, go through these batch, interactive, or real-time integration points.

      They don't need to worry about where the data resides within Hadoop. They'll get the data via their batch real-time interactive access point, based on what they need. YARN will take advantage of moving that data in and out of those applications. Streaming is just one way of moving data into Hadoop. That's very common for sensor data. It’s also a way to move it out. SQL is a way, among others, to move data.
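A toy sketch of the routing idea described here: applications declare their workload type, and YARN directs them to a matching engine. The engine names follow the transcript; the dispatch code itself is purely illustrative and not a real YARN API:

```python
# Engine names follow the transcript; this dispatcher is purely illustrative
# of YARN's role, not a real YARN interface.
ACCESS_POINTS = {
    "batch": "MapReduce",
    "interactive": "Hive/Tez",
    "real-time": "Storm/Spark",
}

def submit(workload_type, job_name):
    """Route a job to the engine matching its workload type."""
    engine = ACCESS_POINTS.get(workload_type)
    if engine is None:
        raise ValueError(f"unknown workload type: {workload_type}")
    # A real YARN ResourceManager would negotiate containers for the job;
    # this sketch only reports where the job would run.
    return f"{job_name} -> {engine}"

print(submit("real-time", "sensor-ingest"))  # prints sensor-ingest -> Storm/Spark
```

The point of the indirection is the one made above: the submitting application never needs to know where in the cluster its data lives, only which access mechanism it wants.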
      Gardner: So this is giving us choice about how to manage larger scales of data. We're seeing choice about the way in which we access that data. There's also choice around the type of the underlying infrastructure to reduce costs and increase performance. I am thinking about in-memory or columnar.

      What is there about the Hadoop community and Hortonworks, in particular, that allows you to throw the right horsepower at the problem?

      Ferguson: It was very important, from the Hortonworks perspective from day one, to evolve the Hadoop technology as fast as possible. We decided to do everything in open source to move the technology very quickly and to leverage the community effect of open source, meaning lots of different individuals helping to evolve this technology fast.

      The ability for the ecosystem to easily and optimally integrate with Hadoop is important. So there are very common integration points. For example, for systems management, there is the Ambari Hadoop services integration point.

      Whether it's HP OpenView or System Center in the Microsoft world, that integration point allows such tools to manage and monitor Hadoop along with the other IT assets they already integrate with.

      Access points

      Then there's SQL's access via Hive, an access point to allow any technology that integrates or understands SQL to access Hadoop.

      Storm and Spark are other access points. So, common, open integration points that are well understood by the ecosystem are designed to help technologies at the virtualization, operating system, data movement, data management, and access layers leverage Hadoop optimally.

      Gardner: One of the things that I hear a lot from folks who don't understand yet how things will unfold, is where data and analytics applications align with the creation of other applications or services, perhaps in a cloud setting like a platform as a service (PaaS).

      It seems to me that, at some point, more and more application development will be done through PaaS with an associated or integrated cloud. We're also seeing a parallel trajectory here with the data, along the same lines of moving from traditional systems of record into relational, and now into big data and analytics in a cloud setting. It makes a lot of sense.

      I talked to lot of people about that. So the question, Mitch, is how do we see a commingling and even an intersection between the paths of PaaS in general application development and PaaS in BI services, or BI as a service, somehow relating?

      Ferguson: I'll answer that question in two ways. One is about the companies that are using Hadoop today, and using it very aggressively. Their goal is to provide Hadoop as a service, irrespective of whether it's on premises or in the cloud.

      Then we'll talk about what we see with HP, for example, with their whole cloud strategy, and how that will evolve into a very interesting hybrid opportunity and maybe pure cloud play.

      When you think about PaaS in the cloud, the majority of enterprise data today is on premises. So there's a physics issue of trying to run all of my big data in the cloud. As a result, what a number of people are doing with this concept is called the data lake. They're provisioning large Hadoop clusters on premises, moving large amounts of data into this data lake.

      That's providing data as a service to those business units that need data in Hadoop -- structured, semi-structured, unstructured for new applications, for existing analytics processes, for new analytics processes -- but they're providing effectively data as a service, capturing it all in this data lake that continues to evolve.

      Think about how companies may want to leverage then a PaaS. It's the same thing on premises. If my data is on premises, because that's where the physics requires that, I can leverage various development tools or application frameworks on top of that data to create new business apps. About 60 percent of our initial sales at Hortonworks are new business applications by an enterprise. It’s business and IT being involved.

      Leveraging datasets

      Within the first five months, 20 percent of those customers begin to migrate to the data-lake concept, where now they are capturing more data and allowing other business entities within the company to leverage these datasets for additional applications or additional analytics processes. We're seeing Hadoop as a service on premises already. When we move to the cloud, we'll begin to see more of a hybrid model.

      We're already starting to see this with one of Hortonworks' large partners, where archive data moves from on premises into low-cost cloud storage. I think HP will have that same opportunity with Hadoop and its cloud strategy.

      Already, through an initiative at HP, they're providing Hadoop as a service in the cloud for those entities that would like to run Hadoop in a managed service environment.

      That’s the first step of HP beginning to provide Hadoop in a managed-service environment off premises. I believe you'll begin to see that migrate to on-prem/off-prem integration in a hybrid model. In some companies, as their data moves off premises, they'll just want to run all of their big-data services, or have Hadoop as a service running completely in the HP cloud, for example.

      Gardner: So, we're entering in an era now where we're going to be rationalizing how we take our applications as workloads, and continue to use them either on premises, in the cloud, or hybrid. At the same time, over on the side, we're thinking along the same lines architecturally with our data, but they're interdependent.

      You can’t necessarily do a lot with the data without applications, and the applications aren’t as valuable without access to the analytics and the data. So how do these start to come together? Do you have a vision on that yet? Does HP have a vision? How do you see it?

      Ferguson: The Hadoop market is very young. The vision today is that companies are implementing Hadoop to capture data that they're just letting fall on the floor. Now, they're capturing it. The majority of that data is on premises. They're capturing that data and they're beginning to use it in new a business applications or existing analytics processes.
      As they begin to capture that data, as they begin to develop new applications, and as vendors like HP working in combination with Hortonworks provide the ability to effectively move data from on premises to off premises and provide the ability to govern where that data resides in a secure and organized fashion, you'll begin to see much tighter integration of new business or big-data applications being developed on prem, off prem, or an integration of the two. It won't matter.

      Gardner: Great. We've been learning quite a bit about how Hortonworks and Hadoop are changing the game for organizations as they seek to use all of their data and very massive datasets. We’ve heard that that aligns with HP Vertica and HP Haven's strategy around enabling more business applications for more types of data.

      With that, I'd like to thank our guest, Mitch Ferguson, Vice President of Business Development at Hortonworks. Thank you, Mitch.

      Ferguson: Thank you very much, Dana.

      Gardner: This is Dana Gardner. I'd like to thank our audience for joining us for a new style of IT discussion coming to you from the recent HP Big Data 2014 Conference in Boston. Thanks to HP for sponsoring our discussion, and don't forget to come back next time.

      Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

      Transcript of a BriefingsDirect podcast on how companies are beginning to capture large volumes of data for past, present and future analysis capabilities. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.

      You may also be interested in: