
Tuesday, August 23, 2016

How HudsonAlpha Innovates on IT for Research-Driven Education, Genomic Medicine and Entrepreneurship

Transcript of a discussion on how HudsonAlpha leverages modern IT infrastructure and big data analytics to power research projects as well as pioneering genomic medicine findings.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it's making an impact on people's lives.

Our next IT infrastructure thought leadership case study explores how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and healthcare paybacks.

We'll now hear how HudsonAlpha leverages modern IT infrastructure and big-data analytics to power a pioneering research project incubator and genomic medicine innovator.

To describe new possibilities for exploiting cutting-edge IT infrastructure and big data analytics for healthcare innovation, we're joined by Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Welcome, Liz.

Dr. Liz Worthey: Thanks for inviting me.

Gardner: It seems to me that genomics research and IT have a lot in common. There's not much daylight between them -- two different types of technology, but highly interdependent. Have I got that right?
Worthey: Absolutely. It used to be that the IT infrastructure was fairly far away from the clinic or the research, but now they're so deeply intertwined that it necessitates many meetings a week between the leadership of both in order to make sure we get it right.

Gardner: And you have background in both. Maybe you can tell us a little bit about that.

Worthey: My background is primarily on the biology side, although I'm Director of Informatics and I've spent about 20 years working in the software-development and informatics side. I'm not IT Director, but I'm pretty IT savvy, because I've had to develop that skill set over the years. My undergraduate degree was in immunology, and since then, my focus has really been on genetics informatics and bioinformatics.

Gardner: Please describe what genetic informatics or genomic informatics is for our audience.

Worthey: Since 2003, when we received the first version of a human reference genome, there's been a large field involved in the task of extracting knowledge that can be used for society and health from genomic data.

A [human] genome is 3.2 billion nucleotides in length, and in there, there's a lot of really useful information. There's information about which diseases that individual may be more likely to get and which diseases they will get.

There's also information about which drugs they should and shouldn't take, and about which types of procedures -- surveillance procedures, such as colonoscopies -- they should have. And so, the clinical aspects of genomics are really about developing the analytical capabilities to extract that data in real time so that we can use it to help an individual patient.

On top of that, there's also a lot of research. A lot of that is in large-scale studies across hundreds of thousands of individuals to look for signals that are more difficult to extract from a single genome. Genomics, clinical genomics, is all of that together.

Parallel trajectory

Gardner: Where is the societal change potential in terms of what we can do with this information and these technologies?

Worthey: Genomics has existed for maybe 20 years, but the vast majority of that was the first step. Over the last six years, we've taken maybe the second or third step in a journey that’s thousands of steps long.

We're right on the edge. We didn’t used to be able to do this, because we didn't have any data. We didn't have the capability to sequence a genome cheaply enough to sequence lots. We also didn't have the storage capabilities to store that data, even if we could produce it, and we certainly didn't have enough compute to do the analysis, infrastructure-wise. On top of that, we didn’t actually have the analytical know-how or capabilities either. All of that is really coalescing at the same time.

As we've been doing genomics, and that technology and the sequencing side have come up, the compute and the computing technologies have come up at the same time. They're feeding each other, and genomics is now driving IT to think about things in a very different way.

Gardner: Let's dive into that a little bit. What are the hurdles technologically for getting to where you want to be, and how do you customize that or need to customize that, for your particular requirements?

Worthey: There are a number of hurdles. Certainly, there are simpler hurdles that we have to get past, like storage, tied closely to compression: How do you compress that data so that you can store millions of genomes at a price that's affordable?

A bigger hurdle is the ability to query information at a lot of disparate sites. When we think about genomic medicine, one of the things that we really want to do is share data between institutions that are geographically diverse. And the data that we want to share is millions of data points, each of which has hundreds or thousands of annotations or curations.

Those are fairly complex queries, even when you're doing it in one site, but in order to really change the practice of medicine, we have to be able to do that regionally, nationally, and globally. So, the analytics questions there are large.

We have 3.2 billion data points for each individual. The data is quite broad, but it’s also pretty deep. One of the big problems is that we don’t have all the data that we need to do genomic medicine. There's going to be data mining -- generate the data, form a hypothesis, look at the data, see what you get, come back with a new hypothesis, and so on.

Finally, one of the problems that we have is that a lot of the algorithms that you might use exist only in the brains of MDs, other clinical folks, or researchers. There is really a lot of human-computer interaction work to be done, so that we can extract that knowledge.

There are lots of problems. Another big problem is that we really want to put this knowledge in the hands of the doctor while they have seven minutes to see the patient. So, it’s also delivery of answers at that point in time, and the ability to query the data by the person who is doing the analysis, which ideally will be an MD.

Cloud technology

Gardner: Interestingly, the emergence of cloud methods and technology over the past five or 10 years would address some of those issues about distributing the data effectively -- and also perhaps getting actionable intelligence to a physician in an actual critical-care environment. How important is cloud to this process and what sort of infrastructure would be optimal for the types of tasks that you have in mind?

Worthey: If you had asked me that question two years ago, on the genomic medicine side, I would have said that cloud isn't really part of the picture. It wasn't part of the picture for anything other than business reasons. There were a lot of questions around privacy and sharing of healthcare information, and hospitals didn’t like the idea.

They were very reluctant to move to the cloud. Over the last two years, that has started to change; enough of them had to decide to do it before everybody would view it as something permissible.

Cloud is absolutely necessary in many ways, because we have periods where lots of data has to be computed and analytics have to be run. Then, we have periods where new information is coming off the sequencer. So, it's that perfect crest and trough.

If you don't have the ability to deal with that sort of fluctuation, if you buy a certain amount of hardware and you only have it available in-house, your pipeline becomes impacted by the crests and then often sits idle for a long time.
But it’s also important to have stuff in-house, because sometimes, you want to do things in a different way. Sometimes, you want to do things in a more secure manner.

It's kind of our poster child for many of the new technologies that are coming out that look at both of those, that allow you to run things in-house and then also allow you to run the same jobs on the same data in the cloud as well. So, it’s key.

Gardner: That brings me to the next question about this concept of genomics as a service or a platform to support genomics as a service. How do you envision that and how might that come about?

Worthey: When we think about the infrastructure to support that, it has to be something flexible and it has to be provided by organizations that are able to move rapidly, because the field is moving really quickly.

It has to be infrastructure that supports this hypothesis-driven research, and it has to be infrastructure that can deal with these huge datasets. Much of the data is ordered, organized, and well-structured, but because it's healthcare, a lot of the information that we use as part of the interpretation phase of genomic medicine is completely unstructured. There needs to be support for extraction of data from silos.

My dream is that the people who provide these technologies will also help us deal with some of these boundaries, the policy boundaries, to sharing data, because that’s what we need to do for this to become routine.

Data and policy

Gardner: We've seen some of that when it comes to other forms of data, perhaps in the financial sector. More and more, we're seeing tokenization, authentication, and encryption, where data can exist for a period of time with a certain policy attached to it, and then something will happen to the data as a result of that policy. Is that what you're referring to?

Worthey: Absolutely. It's really interesting to come to a meeting like HPE Discover because you get to see what everybody else is doing in different fields. Much of the things that people in my field have regarded as very difficult are actually not that hard at all; they happen all the time in other industries.

A lot of this -- the encryption, the encrypted data sharing, the ability to set those access controls in a particular way that only lasts for a certain amount of time for a particular set of users -- seems complex, but it happens all the time in other fields. A big part of this is talking to people who have a lot of experience in a regulated environment. It’s just not this regulated environment and learning the language that they use to talk to the people that set policy there and transferring that to our policy makers and ideally getting them together to talk to one another.

Gardner: Liz, you mentioned the interest in getting your requirements to the technology vendors, cloud providers, and network providers. Is that under way? Is that something that's yet to happen? Where is the synergy between the genomic research community and the technology-vendor and platform-provider community?

Worthey: This is happening fast. For genomics, there's been a shift in the volume of genomic data that we can produce, with some new sequencing technology that's coming. If you're a provider of hardware or software solutions for dealing with big data, you should be looking at genomics, because the people here are probably going to overtake many of those other industries in terms of the volume and complexity of the data that we have.

The reason that's really interesting is that you then get invited to come and talk at forums where there are lots of technology companies, and you can make them aware of the work that has to be done in the field of medicine and in genomic research, and then you can start having those discussions.

A lot of the things that those companies are already doing, the use cases, are similar and maybe need some refinement, but a lot of that capability is already there.

Gardner: It's interesting that you’ve become sort of the “New York” of use cases. If you can make it there, you can make it anywhere. In other words, if we can solve this genomic data issue and use the cloud fruitfully to distribute and gather -- and then control and monitor the data as to where it should be under what circumstances -- we can do just about anything.

Correct me if I am wrong, though. We're using data in the genomic sense for population groups. We're winnowing those groups down into particular diseases. How farfetched is it to think about individuals having their own genomic database that would follow them like an authenticated human design? Is that completely out of the bounds? How far would that possibly be?

Technology is there

Worthey: I’ve had my genome sequenced, and it’s accessible. I could pick it up and look at it on the tools that I developed through my phone sitting here on the table. In terms of the ability to do that, a lot of that technology is already here.

The number of people being sequenced is increasing rapidly. We're already using genomics to make diagnoses in patients and to understand their drug interactions. So, we are here.

One of the things that we're talking about just now is, at what point in a person's life should you sequence their genome? I and a number of other people in the field believe that it should be earlier, rather than later -- before they get sick. Then, we have that information to use when they get those first symptoms. You're not waiting until they're really ill before you do that.

I can’t imagine a future where that's not what's going to happen, and I don’t think that future is too far away. We're going to see it in our lifetimes, and our children are definitely going to see it in theirs.

Gardner: The inhibitors, though, would be more of an ethical nature, not a technological nature.

Worthey: And policy, and society; the societal impact of this is huge.

The data that we already have, clinical information, is really for that one person, but your genome is shared among your family, even distant relatives that you’ve never met. So, when we think about this, there are many very hard ethical questions that we have to think about. There are lots of experts that are working on that, but we can’t let that get in the way of progress. We have to do it. We just have to make sure we do it right.

Gardner: To come back down a little bit toward the technology side of things, seeing as so much progress has been made and that there is the tight relationship between information technology and some of the fantastic things that can happen with the proper knowledge around genomic information, can you describe the infrastructure you have in place? What’s working? What do you use for big-data infrastructure, and cloud or hybrid cloud as well?

Worthey: I'm not on the IT side, but I can tell you about the other side and I can talk a little bit about the IT side as well. In terms of the technologies that we use to store all of that varying information, we're currently using Hadoop and MongoDB. We finished our proof of concept with HPE, looking at their Vertica solution.

We have to work out what the next steps might be for our proof of concept. Certainly, we're very interested in looking at the solutions that they have here. They fit with our needs. The issue being addressed on that side is lots of variants and complex queries that you need to answer really fast.
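As a rough illustration of the kind of workload Worthey describes -- filtering millions of annotated variants with complex predicates and getting an answer back quickly -- here is a minimal sketch using the vertica_python client. The schema, column names, and connection settings are hypothetical, not HudsonAlpha's actual design.

import vertica_python

# Hypothetical connection settings -- replace with real cluster details.
conn_info = {'host': 'vertica.example.org', 'port': 5433,
             'user': 'analyst', 'password': 'secret', 'database': 'genomics'}

# Hypothetical schema: one row per variant per sample, with annotation columns.
query = """
    SELECT v.gene, v.chrom, v.pos, v.ref, v.alt,
           COUNT(DISTINCT v.sample_id) AS carriers
    FROM variants v
    WHERE v.consequence = 'missense'
      AND v.population_af < 0.001
      AND v.phenotype_term = 'cardiomyopathy'
    GROUP BY v.gene, v.chrom, v.pos, v.ref, v.alt
    ORDER BY carriers DESC
    LIMIT 50;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(query)
    for row in cur.fetchall():
        print(row)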

On the other side, one of the technological hurdles that we have to meet is the unstructured data. We have electronic health record (EHR) information that’s coming in. We want to hook up to those EHRs and we want to use systems to process that data to make it organized, so that we can use it for the interpretation part.

In-house solution

We developed in-house solutions that we're using right now that allow humans to come in, look at that data, and select the terms from it. So, you'd select disease terms. And then, we have in-house solutions to map them to the genomic side. We're looking at things like HPE's IDOL as a proof of concept (POC) on that side. We're talking to some EHR companies about how to hook up the EHR to those solutions and to our software, to make it a seamless product that would give us all of that.

In terms of hardware, we do have HPE hardware in-house; I think we have 12 petabytes of their storage. We also have DataDirect Networks hardware, a General Parallel File System (GPFS) solution. We even have things down to graphics processing units (GPUs) for some of the analysis that we do. We have a large bank of GPUs, because in some cases they're much faster for certain other types of problems that we have to solve. So we're pretty IT-rich, with a lot of heavy investment on the IT side.

Gardner: And cloud -- any preference to the topology that works for you architecturally for cloud, or is that still something you are toying with?

Worthey: We're currently looking at three different solutions that are all cloud solutions. We not only do the research and the clinical, but we also have a lab that produces lots of data for other customers, a lab that produces genomic data as a service.

They have a challenge of getting that amount of data returned to customers in a timely fashion, so there are solutions that we're looking at there. There are also, as we talked about at the start, solutions to help us with the inflow of data coming off the sequencers and the compute -- and so we're looking at a number of different cloud-based solutions to solve some of those challenges.

Gardner: Before we close, we've talked about healthcare and population impacts, but I should think there's also a commercial aspect to this. That kind of information will lend itself to entrepreneurial activities -- products and services in great demand in the marketplace. Is that something you're involved with as well, and wouldn't that help foot the bill for some of these many costly IT infrastructure investments?

Worthey: One of the ways that HudsonAlpha Institute was set up was just that model. We have a research, not-for-profit side, but we also have a number of affiliate companies that are for-profit, where intellectual property and ideas can go across to that side and be used to generate revenue that funds the research and keeps us moving and on the cutting edge.

We do have a services lab that does genomic sequencing and analytics. You can order that from them. We also serve a lot of people who have government contracts for this type of work. And then, we have an entity called Envision Genomics. For disclosure, I'm one of the founders of that entity. It's focused on empowering people to do genomic medicine and working with lots of different solution providers to get genomic medicine being done everywhere it's applicable.

Gardner: Well, it's been a fascinating discussion. Thank you for sharing that, and I look forward to tracking the close relationship between IT and genomics, because as you say, they are self-supporting, reinforcing, and both very powerful in their impacts on society.
We've been learning how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and for healthcare paybacks. We’ve heard how HudsonAlpha leverages modern IT and cloud infrastructures and big data analytics to power research projects as well as pioneering medicine uses and companies.

So please join me in thanking our guest, Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Thank you so much, Liz.

Worthey: Thank you very much. Thanks for inviting me.

Gardner: And I will also thank our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer Podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how HudsonAlpha leverages modern IT infrastructure and big data analytics to power research projects as well as pioneering genomic medicine findings. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Friday, June 24, 2016

Here's How Two Part-Time DBAs Maintain Mobile App Ad Platform Tapjoy’s Massive Data Needs

Transcript of a discussion on how mobile app advertising platform Tapjoy handles fast and massive data flows with just two part-time database administrators.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of the Customer innovation case study discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT transformation, and how it’s making an impact on people’s lives.

Our next big-data case study discussion examines how mobile app advertising platform Tapjoy handles fast and massive data -- some two dozen terabytes per day -- with just two part-time database administrators (DBAs).

We'll examine how Tapjoy's data-driven business of serving 500 million global mobile users -- and more than 1.5 million ad engagements per day, a data volume of 120 terabytes -- runs on HPE Vertica.

We'll learn more about how high scale and complexity meets minimal labor for building user and advertiser loyalty from David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco.
Welcome, David.

David Abercrombie: Thank you so much for having me.

Gardner: Mobile advertising has really been a major growth area, perhaps more than any other type of advertising. We hear a lot about advertising waning, but not mobile app advertising. How does Tapjoy and its platform help contribute to the success of what we're seeing in the mobile app ad space?

Abercrombie: The key to Tapjoy's success is engaging the users and rewarding them for engaging with an ad. Our advertising model is that you engage with an ad and then you typically get some sort of reward: a virtual currency in the game you're playing or some sort of discount.

We actually have the kind of ads that lead users to seek us out to engage with the ads and get their rewards.

Gardner: So this is quite a bit different than a static presented ad. This is something that has a two-way street, maybe multiple directions of information coming and going. Why the analysis? Why is that so important? And why the speed of analysis?

Abercrombie: We have basically three types of customers. We have the app publishers who want to monetize and get money from displaying ads. We have the advertisers who need to get their message out and pay for that. Then, of course, we have the users who want to engage with the ads and get their rewards.

The key to Tapjoy’s success is being able to balance the needs of all of these disparate uses. We can’t charge the advertisers too much for their ads, even though the monetizers would like that. It’s a delicate balancing act, and that can only be done through big-data analysis, careful optimization, and careful monitoring of the ad network assets and operation.

Gardner: Before we learn more about the analytics, tell us a bit more about what role Tapjoy plays specifically in what looks like an ecosystem play for placing, evaluating, and monetizing app ads? What is it specifically that you do in this bigger app ad function?

Ad engagement model

Abercrombie: Specifically what Tapjoy does is enable this rewarded ad engagement model, so that the advertisers know that people are going to be paying attention to their ads and so that the publishers know that the ads we're displaying are compatible with their app and are not going to produce a jarring experience. We want everybody to be happy -- the publishers, the advertisers, and the users. That’s a delicate compromise that’s Tapjoy’s strength.

Gardner: And when you get an end user to do something, to take an action, that’s very powerful, not only because you're getting them to do what you wanted, but you can evaluate what they did under what circumstances and so forth. Tell us about the model of the end user specifically. What is it about engaging with them that leads to the data -- which we will get to in a moment?

Abercrombie: In our model of the user, we talk about long-term value. So even though it may be a new user who has just started with us, maybe their first engagement, we like to look at them in terms of their long-term value, both to the publishers and the advertiser.

We don’t want people who are just engaging with the ad and going away, getting what they want and not really caring about it. Rather, we want good users who will continue their engagement and continue this process. Once again, that takes some fairly sophisticated machine-learning algorithms and very powerful inferences to be able to assess the long-term value.

As an example, we have our publishers who are also advertisers. They're advertising their app within our platform and for them the conversion event, what they are looking for, is a download. What we're trying to do is to offer them users who will not only download the game once to get that initial payoff reward, but will value the download and continue to use it again and again.

So all of our models are designed with that end in mind -- to look at the long-term value of the user, not just the immediate conversion at this instant in time.

Gardner: So perhaps it’s a bit of a misnomer to talk about ads in apps. We're really talking about a value-add function in the app itself.

Abercrombie: Right. The people who are advertising don’t want people to just see their ads. They want people to follow up with whatever it is they're advertising. If it’s another app, they want good users for whom that app is relevant and useful.

That’s really the way we look at it. That’s the way to enhance the overall experience in the long-term. We're not just in it for the short-term. We're looking at developing a good solid user base, a good set of users who engage thoroughly.

Gardner: And as I said in my set-up, there's nothing hotter in all of advertising than mobile apps and how to do this right. It’s early innings, but clearly the stakes are very high.

A tough business

Abercrombie: And it’s a tough business. People are saturated. Many people don’t want ads. Some of the business models are difficult to master.

For instance, there may be a sequence of multiple ad units. There may be a video followed by another ad to download something. It becomes a very tricky thing to balance the financing here. If it was just a simple pass-through and we take a cut, that would be trivial, but that doesn't work in today's market. There are more sophisticated approaches, which do involve business risk.

If we reward the user, based on the fact that they're watching the video, but then they don't download the app, then we don't get money. So we have to look very carefully at the complexity of the whole interaction to make it as smooth and rewarding as possible, so that the thing works. That's difficult to do.

Gardner: So we're in a dynamic, fast-growing, fairly fresh, new industry. Knowing what's going to happen before it happens is always fun in almost any industry, but in this case, it seems with those high stakes and to make that monetization happen, it’s particularly important.
Tell me now about gathering such large amounts of data, being able to work with it, and then allowing analysis to happen very swiftly. How do you go about making that possible?

Abercrombie: Our data architecture is relatively standard for this type of clickstream operation. There is some data that can be put directly into a transactional database in real time, but typically that's only when you get to the very bottom of the funnel, the conversion stuff. All that clickstream stuff gets written as JSON-formatted log files, gets swept up by a queuing system, and then put into our data systems.

Our legacy system involved a homegrown queuing system, dumping data into HDFS. From there, we would extract and load CSVs into Vertica. As with so many other organizations, we're moving to more real-time operations. Our queuing system has evolved from a couple of different homegrown applications, and now we're implementing Apache Kafka.

We use Spark as part of our infrastructure, as sort of a hub, if you will, where data is farmed out to other systems, including a real-time, in-memory SQL database, which is fairly new to us this year. Then, we're still putting data in HDFS, and that's where the machine learning occurs. From there, we're bringing it into Vertica.
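As a rough sketch of the pattern Abercrombie describes -- JSON clickstream events arriving on a Kafka queue and landing in Spark, from which they are fanned out to other systems -- the following uses PySpark Structured Streaming. The broker, topic, schema, and paths are hypothetical; this illustrates the shape of the pipeline, not Tapjoy's actual code.

from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

# Hypothetical schema for one JSON-formatted clickstream event.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("ad_id", StringType()),
    StructField("event_type", StringType()),   # e.g. impression, click, conversion
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka and parse the JSON payload.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical brokers
       .option("subscribe", "clickstream")                   # hypothetical topic
       .load())

events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# One of several sinks: write Parquet to HDFS, where the machine learning runs;
# other sinks could feed the in-memory SQL store and, eventually, Vertica.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/clickstream")                # hypothetical HDFS path
         .option("checkpointLocation", "/checkpoints/clickstream")
         .start())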

In Vertica -- and our Vertica cluster has two main purposes -- there is the operational data store, which has the raw, flat tables that are one row for every event, with the millisecond timestamps and the IDs of all the different entities involved.

From that operational data store, we do a pure SQL ETL extract into kind of an old-school star schema within Vertica, the same database.

Pure SQL

So our business intelligence (BI) ETL is pure SQL and goes into a full-fledged snowflake schema, moderately denormalized with all the old-school bells and whistles, the type 1, type 2, slowly changing dimensions. With Vertica, we're able to denormalize that data warehouse to a large degree.
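A minimal sketch of what one step of that pure-SQL ETL might look like -- rolling raw events from the operational data store up into a fact table -- run here through the vertica_python client. The ods and dw schemas, tables, and columns are hypothetical, and a real job would also maintain the dimension tables and the type 1 and type 2 slowly changing dimensions just mentioned.

import vertica_python

# Hypothetical connection settings for the Vertica cluster.
conn_info = {'host': 'vertica.example.org', 'port': 5433,
             'user': 'etl', 'password': 'secret', 'database': 'analytics'}

# Roll up yesterday's raw events into a star-schema fact table (hypothetical schema).
etl_sql = """
    INSERT INTO dw.fact_ad_engagement
        (date_key, app_key, advertiser_key, engagements, conversions, revenue)
    SELECT d.date_key, a.app_key, adv.advertiser_key,
           COUNT(*) AS engagements,
           SUM(CASE WHEN e.event_type = 'conversion' THEN 1 ELSE 0 END) AS conversions,
           SUM(e.revenue) AS revenue
    FROM ods.ad_events e
    JOIN dw.dim_date d         ON d.calendar_date = e.event_time::DATE
    JOIN dw.dim_app a          ON a.app_id = e.app_id
    JOIN dw.dim_advertiser adv ON adv.advertiser_id = e.advertiser_id
    WHERE e.event_time >= CURRENT_DATE - 1
      AND e.event_time <  CURRENT_DATE
    GROUP BY d.date_key, a.app_key, adv.advertiser_key;
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(etl_sql)
    conn.commit()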

Sitting on top of that we have a BI tool. We use MicroStrategy, for which we have defined our various metrics and our various attributes, and it’s very adept at knowing exactly which fact table and which dimensions to join.

So we have sort of a hybrid architecture. I'd say that we have all the way from real-time, in-memory SQL, Hadoop and all of its machine learning and our algorithmic pipelines, and then we have kind of the old-school data warehouse with the operational data store and the star schema.

Gardner: So a complex, innovative, custom architectural approach to this and yet I'm astonished that you are running and using Vertica in multiple ways with two part-time DBAs. How is it possible that you have minimal labor, given this topology that you just described?

Abercrombie: Well, we found Vertica very easy to manage. It has been very well-behaved, very stable.

For instance, we don’t even really use the Management Console, because there is not enough to manage. Our cluster is about 120 terabytes. It’s only on eight nodes and it’s pretty much trouble free.

One of the part-time DBAs deals with more operating-system-level stuff -- patches, cluster recovery, those sorts of issues. The other part-time DBA is me; I deal more with data-structure design, SQL tuning, and Vertica training for our staff.

In terms of ad-hoc users of our Vertica database, we have well over 100 people who have the ability to run any query they want at any time into the Vertica database.

When we first started out, we tried running Vertica in Amazon EC2. Mind you, this was four or five years ago; Amazon EC2 was not where it is today. It failed. It was very difficult to manage, and there were perplexing problems that we couldn't solve. So we moved our Vertica and essentially all of our big-data systems out of the cloud onto dedicated hardware, where they are much easier to manage and where it's much easier to bring the proper resources to bear.

Then, at one time in our history, when we built a dedicated hardware cluster for Vertica, we failed to heed properly the hardware planning guide and did not provision enough disk I/O bandwidth. In those situations, Vertica is unstable, and we had a lot of problems.

But once we got the proper disk I/O, it has been smooth sailing. I can’t even remember the last time we even had a node drop out. It has been rock solid. I was able to go on a vacation for three weeks recently and know that there would be no problem, and there was no problem.

Gardner: The ultimate key performance indicator (KPI), "I was able to go on vacation."

Fairly resilient

Abercrombie: Exactly. And with the proper hardware design, HPE Vertica is fairly resilient against out-of-control queries. There was a time when half my time was spent monitoring for slow queries, but again, with the proper hardware, it's smooth sailing. I don’t even bother with that stuff anymore.

Our MicroStrategy BI tool writes very good SQL. Part of the key to our success with this BI portion is designing the Vertica schema and the MicroStrategy metadata layer to take advantage of each other’s strengths and avoid each other’s weaknesses. So that really was key to the stable, exceptional performance we get. I basically get no complaints of slow queries from my BI tool. No problem.

Gardner: The right kind of problem to have.

Abercrombie: Yes.

Gardner: Okay, now that we have heard quite a bit about how you are doing this, I'd like to learn, if I could, about some of the paybacks when you do this properly, when it is running well, in terms of SQL queries, ETL load times reduction, the ability for you to monetize and help your customers create better advertising programs that are acceptable and popular. What are the paybacks technically and then in business terms?

Abercrombie: In order to get those paybacks, a key element was confidence in the data, the results that we were shipping out. The only way to get that confidence was by having highly accurate data and extensive quality control (QC) in the ETL.

What that also means is that when a product is under development and isn't ready yet -- the instrumentation isn't ready -- that stuff doesn't make it into our BI tool. You can only get that stuff from ad hoc queries.

So the benefit has been a very clear understanding of the day-to-day operations of our ad network, both for our internal monitoring to know when things are behaving properly, when the instrumentation is working as expected, and when the queues are running, but also for our customers.

Because of the flexibility we get from a traditional BI system, with 500 metrics over a couple of dozen dimensions, our customers -- the publishers and the advertisers -- get incredible detail, customized exactly the way they need for ingestion into their systems or to help them understand how Tapjoy is serving them. Again, that comes from confidence in the data.

Gardner: When you have more data and better analytics, you can create better products. Where might we look next to where you take this? I don’t expect you to pre-announce anything, but where can you now take these capabilities as a business and maybe even expand into other activities on a mobile endpoint?

Flexibility in algorithms

Abercrombie: As we expand our business and move into new areas, what we really need is flexibility in our algorithms and the way we deal with some of our real-time decision making.

So one area that's new to us this year is the in-memory SQL database, like MemSQL. Some of our old real-time ad optimization was based on pre-calculating data and serving it up through HBase key-value lookups, but now we can do real-time aggregation queries using SQL, which is easy to understand, easy to modify, very expressive, and very transparent. That gives us more flexibility in terms of fine-tuning our real-time decision-making algorithms, which is absolutely necessary.
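To make the contrast concrete: the old approach pre-computed counters and served them by key, while an in-memory SQL store can answer the aggregation at query time. A minimal sketch, assuming a MySQL-wire-compatible database such as MemSQL and a hypothetical impressions table; the host, credentials, and columns are illustrative only.

import pymysql

# MemSQL speaks the MySQL wire protocol, so a standard client works.
conn = pymysql.connect(host='memsql.example.org', user='rt',
                       password='secret', database='ads')

# Conversion rate per ad over the last 15 minutes, aggregated on the fly.
sql = """
    SELECT ad_id,
           COUNT(*) AS impressions,
           SUM(converted) AS conversions,
           SUM(converted) / COUNT(*) AS conv_rate
    FROM impressions
    WHERE event_time >= NOW() - INTERVAL 15 MINUTE
    GROUP BY ad_id
    ORDER BY conv_rate DESC
    LIMIT 20;
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for ad_id, imps, convs, rate in cur.fetchall():
        print(ad_id, imps, convs, rate)
conn.close()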

As an example, we acquired a company in Korea called 5Rocks that does app tech and that tracks the users within the app, like what level they're on, or what activities they're doing and what they enjoy, with an eye towards in-app purchase optimization.

And so we're blending the in-app purchase optimization along with traditional ad network optimization, and the two have different rules and different constraints. So we really need the flexibility and expressiveness of our real-time decision making systems.

Gardner: One last question. You mentioned machine learning earlier. Do you see that becoming more prominent in what you do and how you're working with data scientists, and how might that expand in terms of where you employ it?

Abercrombie: Tapjoy started with machine learning. Our data scientists are machine learning. Our predictive algorithm team is about six times larger than our traditional Vertica BI team. Mostly what we do at Tapjoy is predictive analytics and various machine-learning things. So we wouldn't be alive without it. And we've expanded. We're not shifting in one direction or another. It's apples and oranges, and there's a place for both.
Gardner: I'm afraid we will have to leave it there. We've been examining how mobile app advertising platform Tapjoy handles fast and massive data flows with just two part-time database administrators. And we've learned how Tapjoy’s data-driven business of serving 500 million global mobile users, more than 1.5 million ad engagements per day, runs on HPE Vertica.

Please join me in thanking our guest. We've been here with David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco. Thank you so much, David.

Abercrombie: Thank you.

Gardner: And I'd like to thank our audience as well for joining us for this big data innovation case study discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how mobile app advertising platform Tapjoy handles fast and massive data flows with just two part-time database administrators. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Thursday, May 12, 2016

Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences

Transcript of a discussion on how Playtika uses data science and a unique architectural approach to conquer big data hurdles around volume, velocity, and variety of data.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big-data case study discussion explores how social gaming company Playtika uses big-data analytics to deliver captivating user experiences and engagement.

We'll learn how feedback from user action streams can be analyzed in bulk rapidly to improve the features and attractions of online games and so help Playtika react well in an agile market.

To learn more about leveraging big data in the social casino industry, we're pleased to welcome Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. Welcome, Jack.
Jack Gudenkauf: Thank you. It’s great to be here.

Gardner: Tell us about Playtika. I understand that you're part of Caesars Interactive Entertainment and that you have a number of online games. What are you all about?

Gudenkauf: We have a few free-to-play social casino games. In fact, we're the industry leader. We have maybe 10 games at this point -- World Series of Poker, which you've probably heard about, Slotomania, House of Fun, Bingo Blitz -- from a number of studios combined.

Worldwide, we're about 1,000 employees. As I say, we're the industry leader in this space at this moment. And it's a very challenging space, as you might imagine, just within gaming itself. The amount of data is huge, especially across all of these games. Collecting information about how the users play the game and what they like about the game is really a completely data-driven experience.

If we release a new feature, we get feedback. Of course, it’s social gaming as well. If we find out that they don't like the feature, we have to rev the game pretty quickly. It's not like the old days, where you go away for a year or so, and come out with something that you hope people like -- Halo, or something like that. It's more about the users driving the experience and what they enjoy.

So we'll try something with some content or something else and see if they like this feature or functionality. If the data comes back immediately that -- as they do the slot spin in a new version of the game -- they're clearly not playing, we literally change the game.

In fact, in the Bingo Blitz game, we will revise the game as often as once a week, if you can imagine that. So we have to be pretty agile. The data completely drives the user experience as well. Do they like this, do they not like this, shall we make this game change?

Data-driven environment

It's a completely data-driven environment. That's what brought me there. I came from Twitter, where we used very big data, as you might imagine, with Hewlett Packard Enterprise (HPE) Vertica and Hadoop and such, but it was more about volume there. Here it's about variety, velocity, and changing game events across all of our studios.

You can imagine the amount of data that we have to crunch through, do analytics on, and then get user feedback. The whole intention is to get feedback sooner so that we can change the game as rapidly as possible, so that users are happy with the game.

So it’s completely user-driven as far as kind of the experience and what they enjoy, which is fun and makes it challenging as well.

Gardner: So being a data scientist in this particular organization gives you a pretty important place at a major table. It's not something to think about at the end of the month when we run some reports. This is essential and integral to the success of the company?

Gudenkauf: Of course, we do analyze the data for daily, monthly, and general key performance indicators (KPIs), daily active users or monthly active users, those types of things. But you're absolutely right. With the game events themselves, we need to process the data as quickly as possible and do the analysis. So analytics is a huge part of our processing.

We actually have a game economy as well, which is kind of fascinating. If you think of it in terms of the US economy, you can only have so much money in the economy without having inflation and deflation. Imagine if I won all the money and nobody else could have money to play with. It’s kind of game over for us, because they can’t play the game anymore. So we have to manage that quite well.

Of course, with the user experience and what they enjoy and the free to play, in particular, the demand is pretty high. It’s like with apps that you pay for. The 99-cent apps are the ones that people think the most about.

When somebody is spending a dollar, it's very important to them. You want the experience to be a great experience for them. So the data-driven aspects of that and doing the analysis and analytics of it, and feeding that back to the game is extremely important to us. The velocity and the variety of games and different features that we have and processing that as fast as possible is quite a challenge.

Gardner: Now, games like poker, slots, or bingo, these are games that have been around for decades, if not hundreds of years, and they've had a new life online in the past 15 years, which is the Dark Ages of online gaming. What's new and different about games now, even though the game is essentially quite familiar to people? What's new and different about a social casino game?

Social aspect

Gudenkauf: I've thought about that quite a bit. A lot of it has to do with the social aspect. Now, you can play bingo, not just with your friends at the local club, but you can play with people around the world.

You can share items and gifts, and if you are running low on money, maybe you can borrow some from your friends. And you can chat with them. The social aspect just opened up all kinds of avenues.

In our case, with our games in the studios, because they're familiar, they stand the test of time. Take something like a bingo or slots, as opposed to some new game that people don't really understand. They may like it. They may only like it for a while. It’s like playing Scrabble or Monopoly with your family. It's a game that's just very familiar and something you enjoy playing.

But, with the online and the social aspect of it, I explain it to other people as imagine Carmen Sandiego meets bingo. You can have experiences where you're playing bingo, you go on this journey to Egypt, and you're collecting items and exploring Egypt, trying to get to another thing. We can take it to places that you wouldn't normally take a traditional kind of board game and in a more social aspect.

Gardner: So this really appeals to what's conceived of as entertainment in multiple ways for an individual. Again, as you established, the analysis and feedback loops are really important.

I understand why doing great data analysis is so important to this particular use case. Tell us a little bit about how you pull that off. What sort of data architecture do you have? What sort of requirements do you have? What are the biggest problems you have to overcome to achieve your goals?

Gudenkauf: If you think about the traditional way of consuming data and getting it into a reporting system, you have an extract. You're going to bring in data from somewhere -- and of course, in our case it's from mobile devices, the web, and from playing on Facebook. You have information about how much money was spent, and about user behavior: Did they like it?

So you extract that data as usual, and then you transform it. You reshape it and change it around a little bit to put it in a format to get it into a data warehouse like Vertica.

So you have the extract, the transform, and the load (ETL) -- the traditional model. You load it into HPE Vertica and then you do your analysis there, where you can do SQL, JOINs, and analytics over it.

A new industry term that I'm coining is what we call Parallelized Streaming Transformation Loader (PSTL) instead of ETL. This is about ingesting data as fast as possible, processing it, and making analytics available through the entire data pipeline, instead of just in the data warehouse.

Real-time streaming

Imagine, instead of the extract, we're taking real-time streaming data. We're reading, in our case, off a Kafka queue. Kafka is very robust and has been used by LinkedIn and Twitter. So it’s pretty substantial and scalable.

We read the messages in parallel as they're streaming in from all the game studios, certain amounts of data here and there, depending on how much we do with the particular studio. With Bingo Blitz, in our case, we consume a lot more user behavior than say some of the other studios.

But we ingest all the data, and we need to get it in as real-time streaming. So we read it in in parallel -- that's the parallel part and the streaming part. Then, instead of extracting, the data is being fed into us from the stream.

Then we do parallel transformations in Spark and our Hadoop cluster. Think of it as bringing in a bunch of JSON event data and putting it into an in-memory table that's distributed in Spark.
Then, we do parallel transformations, meaning we can restructure the data, do transforms from uppercase to lowercase, whatever we need to do. But it's done in parallel across the cluster as well. Where, traditionally, there was a single monolithic app running, we can now run the transformations independent of the extract and the load.

We have so much data that we need to also do the transformations in parallel. We do that in what are called Resilient Distributed Datasets (RDDs). It's kind of a mouthful, but think of it as just a bunch of slices of data across a bunch of computers, or nodes, and then doing transforms on that in parallel. Then, something that has been a dream of mine is how to get all that data, in parallel, at the same time into HPE Vertica.

HPE Vertica does a great job of massively parallel processing (MPP), and all that means is running the query and pulling data off of different nodes in the cluster. Then, maybe you're grouping by this, summing that, and doing an average.

But, to date, they hadn't had something that I tried to do when I was at Twitter and have managed to pull off now, which is to load the data in parallel. While the data is in memory in Spark's distributed datasets, we use the Vertica hash function, which tells us exactly where the data will land when we write it to a Vertica node.

We can say, User A, if I were to write this to Vertica, I know that it’s going to go on this machine. User B will go to the next machine. It just distributes the load, but we, a priori, hash the data into buckets, so that we know, when we actually write the data, that it goes to this node. Then, Vertica doesn’t have to move it. Usually you write it to one node and it says, "No, you really belong over here," and so it asks you to move it and shuffle, like a traditional MapReduce.
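A rough sketch of that pre-hashing idea in PySpark. Vertica exposes a HASH() SQL function, and its segmented projections use a hash of the segmentation columns to place rows on nodes; the placeholder hash and the modulo bucketing below are a simplification of that mechanism (Gudenkauf's team wrote their own faithful reimplementation), and the cluster size, paths, and columns are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("pre-hash-for-vertica").getOrCreate()

NUM_NODES = 8  # hypothetical size of the Vertica cluster

def vertica_hash(user_id):
    # Placeholder: a real implementation must reproduce Vertica's HASH()
    # bit-for-bit so that each row lands on the node that owns its segment.
    return hash(user_id) & 0x7FFFFFFF

bucket_udf = udf(lambda uid: vertica_hash(uid) % NUM_NODES, IntegerType())

events = spark.read.parquet("/data/game_events")        # hypothetical input
bucketed = events.withColumn("node_bucket", bucket_udf(col("user_id")))

# Repartition so each Spark partition holds only rows destined for a single
# Vertica node; each partition can then be streamed straight to that node
# without Vertica having to reshuffle the data on load.
by_node = bucketed.repartition(NUM_NODES, col("node_bucket"))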

Working with Vertica

So we created something in conjunction with the Vertica developers, and we announced it. That part of it is kind of a TCP-server aspect that extends the COPY command that exists in Vertica itself. We literally go from streaming in parallel and reading into in-memory data structures, to doing the transformations, and then writing the data directly from memory into our Vertica data warehouse.

That allows us to get the data in as fast as possible, straight from the stream to the write. We don't have to hit a disk along the way, and we can do analytics in Vertica sooner. We can also do analytics in the Hadoop cluster on older data and do machine learning on that. We can do all kinds of things based on historical user behavior.
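And a minimal sketch of the final hop -- pushing rows that are already in memory into Vertica with the standard COPY command, here via the vertica_python client's copy() helper. The table, columns, and rows are hypothetical; Playtika's actual implementation extends COPY with its own TCP-server component inside the Spark pipeline.

import vertica_python

conn_info = {'host': 'vertica-node1.example.org', 'port': 5433,
             'user': 'loader', 'password': 'secret', 'database': 'games'}

# Hypothetical rows already transformed in memory, e.g. collected from one
# Spark partition whose bucket maps to this Vertica node.
rows = [("user_a", "bingo_blitz", "spin", "2016-05-01 12:00:00"),
        ("user_b", "slotomania", "purchase", "2016-05-01 12:00:01")]
payload = "\n".join("|".join(r) for r in rows) + "\n"

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.copy("COPY events.game_events (user_id, game, event_type, event_time) "
             "FROM STDIN DELIMITER '|' ABORT ON ERROR", payload)
    conn.commit()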

If we're doing a sale or something like that, how well is it resonating compared to the past? What we're doing is pushing the envelope to bring the analytics as close as we can to the actual game itself.

As I said, traditionally, you do the analytics, get the feedback, change the game, release it in a week, etc. We're going to try to push that all the way up to be as near real time as we can. Basically, the PSTL pipeline allows us to do that, do analytics, and tighten that loop down so that we can get the user behavior to the user as fast as possible.
Once you have it in as fast as you can, reshaping it while it’s in memory, which of course is faster, and taking advantage of doing the parallel transformations at the same time, and in the parallel loading as well, it’s just a way more optimized solution.

Gardner: It’s intriguing. It sounds as if you're able, with a common architecture, to do multiple types of analysis readily but without having to reshuffle the deck chairs each time. Is that fair?

Gudenkauf: That's exactly right. That’s the beauty of this model and why I'm putting up more prescriptive guidance around it. It changes the paradigm of the traditional way of processing data.

We announced some benchmarking. Last year at the HPE Big Data Conference, Facebook stole the show with 36 terabytes an hour on 270 machines. With our model, you could do it with about 80 machines. So it scales very well. Some people say, "We're not Twitter or Facebook scale, but the speed at which we want to consume the data and make it available for analytics is extremely important to us."

The less busy the machines are, the more you can do with them. So does it need to scale like that? No, we are not processing as much data, but the volume, velocity, and variety is a big deal for us. We do need to process the volume, and we do have a lot of events. The volume is not insignificant. We're talking about billions of events, mind you. We're not on the sheer scale of say Twitter or Facebook, but the solution will work for both, in both scenarios.

Gardner: So, Jack, with this capability for analysis close to real time, at the volume and the variety that you're able to accomplish -- while this is a great opportunity for you to react in a gaming environment, you're also pushing the envelope on what analysis and reaction can happen to almost any human behavior at scale. In this case, it happens to be gaming, but there are probably other applications for this. Have you thought about that, or are there other places you can take it within an interactive entertainment environment?

All kinds of solutions

Gudenkauf: I can imagine all kinds of solutions for it. In fact, I've had a number of people come up to me and say, "We're doing this Chicago Stock Exchange, and we have a massive amount of streaming-in data. This is a perfect solution for that."

I've had other people come in to talk to me about other aspects and other games as well that are not social casino genre, but they have the same problem. So it's the traditional problem of how to ingest data, massage it, load it, and then have analytics through that entire process. It’s applicable really in any scenario. That’s one of the reasons I'm so excited about the PSTL model, because it just scales extremely well along the way.

Gardner: Let's relate this back to this particular application, which is highly entertaining games that react, and maybe even start pushing the envelope into anticipating what people will want in a game. What's the next step for making these types of games engaging? I'm even starting to toy with the concept of artificial intelligence (AI), where people wouldn't know that it's a game. They might not even know the difference between the game and other social participants. Are we getting anywhere close to that?

Gudenkauf: You're thinking extremely clearly about the spectrum of analytics in general. Before, it was just general reporting in the feedback loop, but you're absolutely right. As you can see, it's enabled through our model of prescriptive analytics. Looking at historical data and doing machine learning, we can make better determinations of games and game behavior that will drive the game based on historical knowledge or incoming data -- that's more predictive analytics.

Then, as you say, maybe even into the future, beyond predictive and prescriptive analytics, we can almost change as rapidly as possible. We know the user behavior before the user knows the behavior. That will be a great world, and I'm sure we would be extremely successful to get to that final spectrum. But just doing the prescriptive analytics alone, so that the user is happy with the game, and we can get that back to them as quickly as possible, that’s big in and of itself.

Gardner: So maybe a new game some day will be pass the Turing Test, you against our analysis capabilities?

Gudenkauf: Yeah, that would be pretty cool. Maybe eventually it will tie into the whole virtual reality. It’s kind of happening based on the information behaviors immediately. That will be neat.

Gardner: Very exciting world coming our way, right? We're only scratching the surface. I guess I have run out of questions because my mind is reeling at some of these possibilities.

One last area though. For a platform like HPE Vertica, what would you like to see them do intrinsic to the product? We have the announcement recently about the next version of Vertica, but what might be on your list, a wish-list if you will, for what should be in the product to allow this sort of thing to happen even more readily?

Influencing the product

Gudenkauf: That’s one of the reasons we go to conferences. It’s one of the few conferences where you can get to the actual developers or professional services and influence the product itself.

One of the reasons why I like to be on the leading edge or bleeding edge is so that we can affect product development and what they are working on. I've been fortunate enough to work with developers and people internal to HPE Vertica for quite a while now. I just love the product and I want to see it be successful. With their adoption of, and greater openness to working with, open source like Spark and MapReduce, the whole ecosystem works well together, as opposed to opposing each other, which is what I think most people expect. It's a very collaborative, cooperative environment, especially through our pipeline.

I really like the fact that when I talk about things like Kafka and the PSTL, and about Spark being a core part of our architecture, now we're having conversations -- and lots of them -- to help Vertica and influence them to invest more in Spark and in the interaction between the Vertica data warehouse, Spark, and that ecosystem, starting from Kafka.

Based on the work that we did with Vertica over the last year -- reading streaming data from Kafka into Spark and then into Vertica -- they said that reading real-time streaming data from Kafka directly into HPE Vertica would be a great add-on, and they announced it. Ben Vandiver and the developers announced it.

I really want to be in a place, and this affords us to be in that place, to influence where they are going, because it benefits all of us and the entire community. It's being able to give them prescriptive guidance as well from the customer perspective, because this is what we're doing in the real world, of course. They want to make us happy, and we will make them happy.

Our investments have been in things like Kafka streaming and Spark, and in how Spark SQL works with Vertica and vsql. They don't necessarily have to compete; there is a world for both. So coexisting, influencing that, and having them be receptive to it is amazing. A lot of companies aren't very receptive to taking the feedback from us as consumers and baking that into offerings.

One of the things in our model, to load the data as fast as possible in parallel, is that we pre-hash the data. If you just take user IDs, for instance, and hash on those IDs, you can put this user on this node and that user on that node, in an even distribution of the data. That hash function wasn't exposed in Vertica; I've been asking for it since the Twitter days, for years.

So we wrote our own version of it. I managed to have the Vertica developers -- which is a rare and great opportunity -- review what we had done. They said, "Yes, that's spot on. That's exactly the implementation." I said, "You know what would be even better? I've been asking for this for years, and I know you have lots of other customers. Why don't you just make it available for everybody to use? Then, I don't have to use mine, and everybody else can benefit from it as well."
They announced in 2015 that they're going to make it available. So being able to influence things like that just helped the whole ecosystem.

Gardner: Excellent. I'm afraid we'll have to leave it there. We've been exploring how Playtika uses big-data analytics to deliver captivating social gaming experiences and engagement for their end users. We've also seen that they have a tremendous amount of data science going on, and an architectural approach to conquering hurdles around volume, velocity, and variety that is probably applicable in many other cutting-edge applications.

So a big thank you to our guest. We've been here with Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. Thanks so much, Jack.

Gudenkauf: Thank you. It was a pleasure.

Gardner: And a big thank you to our audience as well for joining us for this big data innovation case study discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored Voice of the Customer discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Playtika uses data science and a unique architectural approach to conquer big data hurdles around volume, velocity, and variety of data. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.
