
Thursday, May 12, 2016

Playtika Bets on Big Data Analytics to Deliver Captivating Social Gaming Experiences

Transcript of a discussion on how Playtika uses data science and a unique architectural approach to conquer big data hurdles around volume, velocity, and variety of data.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big-data case study discussion explores how social gaming company Playtika uses big-data analytics to deliver captivating user experiences and engagement.

We'll learn how feedback from user action streams can be analyzed in bulk rapidly to improve the features and attractions of online games and so help Playtika react well in an agile market.

To learn more about leveraging big data in the social casino industry, we're pleased to welcome Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. Welcome, Jack.
Jack Gudenkauf: Thank you. It’s great to be here.

Gardner: Tell us about Playtika. I understand that you're part of Caesars Interactive Entertainment and that you have a number of online games. What are you all about?

Gudenkauf: We have a few free-to-play social casino games. In fact, we're the industry leader. We have maybe 10 games at this point -- World Series of Poker, which you've probably heard of, Slotomania, House of Fun, Bingo Blitz -- from a number of studios combined.

Worldwide, we're about 1,000 employees. As I say, we're the industry leader in this space at this moment. And it's a very challenging space, as you might imagine, just within gaming itself. The amount of data is huge, especially across all of these games. Collecting information about how the users play the game and what they like about the game is really a completely data-driven experience.

If we release a new feature, we get feedback. Of course, it’s social gaming as well. If we find out that they don't like the feature, we have to rev the game pretty quickly. It's not like the old days, where you go away for a year or so, and come out with something that you hope people like -- Halo, or something like that. It's more about the users driving the experience and what they enjoy.

So we'll try something with some content or another feature and see if they like it. If the data comes back immediately showing that, with a new version of the game, they do the slot spin and they're clearly not playing, we literally change the game.

In fact, in the Bingo Blitz game, we will revise the game as often as once a week, if you can imagine that. So we have to be pretty agile. The data completely drives the user experience as well. Do they like this, do they not like this, shall we make this game change?

Data-driven environment

It’s a completely data-driven environment. That's what brought me there. I came from Twitter, where we worked with very big data, as you might imagine, with Hewlett Packard Enterprise (HPE) Vertica and Hadoop and such, but it was more about volume there. Here it’s about variety, velocity, and changing game events across all of our studios.

You can imagine the amount of data that we have to crunch through, do analytics on, and then get user feedback. The whole intention is to get feedback sooner so that we can change the game as rapidly as possible, so that users are happy with the game.

So it’s completely user-driven as far as the experience and what they enjoy, which is fun and makes it challenging as well.

Gardner: So being a data scientist in this particular organization gives you a pretty important place at a major table. It's not something to think about at the end of the month when we run some reports. This is essential and integral to the success of the company?

Gudenkauf: Of course, we do analyze the data for daily, monthly, and general key performance indicators (KPIs), daily active users or monthly active users, those types of things. But you're absolutely right. With the game events themselves, we need to process the data as quickly as possible and do the analysis. So analytics is a huge part of our processing.

We actually have a game economy as well, which is kind of fascinating. If you think of it in terms of the US economy, you can only have so much money in the economy without having inflation and deflation. Imagine if I won all the money and nobody else could have money to play with. It’s kind of game over for us, because they can’t play the game anymore. So we have to manage that quite well.

Of course, with the user experience and what they enjoy and the free to play, in particular, the demand is pretty high. It’s like with apps that you pay for. The 99-cent apps are the ones that people think the most about.

When somebody is spending a dollar, it's very important to them. You want the experience to be a great experience for them. So the data-driven aspects of that and doing the analysis and analytics of it, and feeding that back to the game is extremely important to us. The velocity and the variety of games and different features that we have and processing that as fast as possible is quite a challenge.

Gardner: Now, games like poker, slots, or bingo, these are games that have been around for decades, if not hundreds of years, and they've had a new life online in the past 15 years, which is the Dark Ages of online gaming. What's new and different about games now, even though the game is essentially quite familiar to people? What's new and different about a social casino game?

Social aspect

Gudenkauf: I've thought about that quite a bit. A lot of it has to do with the social aspect. Now, you can play bingo, not just with your friends at the local club, but you can play with people around the world.

You can share items and gifts, and if you are running low on money, maybe you can borrow some from your friends. And you can chat with them. The social aspect just opened up all kinds of avenues.

In our case, with our games in the studios, because they're familiar, they stand the test of time. Take something like a bingo or slots, as opposed to some new game that people don't really understand. They may like it. They may only like it for a while. It’s like playing Scrabble or Monopoly with your family. It's a game that's just very familiar and something you enjoy playing.

But, with the online and the social aspect of it, I explain it to other people as imagine Carmen Sandiego meets bingo. You can have experiences where you're playing bingo, you go on this journey to Egypt, and you're collecting items and exploring Egypt, trying to get to another thing. We can take it to places that you wouldn't normally take a traditional kind of board game and in a more social aspect.

Gardner: So this really appeals to what's conceived of as entertainment in multiple ways for an individual. Again, as you established, the analysis and feedback loops are really important.

I understand why doing great data analysis is so important to this particular use case. Tell us a little bit about how you pull that off. What sort of data architecture do you have? What sort of requirements do you have? What are the biggest problems you have to overcome to achieve your goals?

Gudenkauf: If you think about the traditional way of consuming data and getting it into a reporting system, you have an extract. You're going to bring in data from somewhere -- in our case, from mobile devices, the web, and from playing on Facebook. You have information about how much money users spend and about user behavior. Did they like it?

So you extract that data as usual, and then you transform it. You reshape it and change it around a little bit to put it in a format to get it into a data warehouse like Vertica.

That's the traditional extract, transform, and load (ETL) model. You load it into HPE Vertica and then you do your analysis there, where you can run SQL, JOINs, and analytics over it.

A new industry term that I'm coining is what we call Parallelized Streaming Transformation Loader (PSTL) instead of ETL. This is about ingesting data as fast as possible, processing it, and making analytics available through the entire data pipeline, instead of just in the data warehouse.

Real-time streaming

Imagine, instead of the extract, we're taking real-time streaming data. We're reading, in our case, off a Kafka queue. Kafka is very robust and has been used by LinkedIn and Twitter. So it’s pretty substantial and scalable.

We read the messages in parallel as they're streaming in from all the game studios, certain amounts of data here and there, depending on how much we do with the particular studio. With Bingo Blitz, in our case, we consume a lot more user behavior than say some of the other studios.

But we ingest all the data. We need to get it in as real-time streaming, so we read it in parallel. That's the parallel part and the streaming part. Then, instead of us extracting it, the data is fed to us from the stream.

Then we do parallel transformations in Spark and our Hadoop cluster. Think of it as bringing in a bunch of JSON event data and putting it into an in-memory table that's distributed in Spark.
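
To make that concrete, here is a minimal sketch of the kind of ingest step described here, using Spark's Structured Streaming Kafka source. It is illustrative only: the broker addresses, topic name, and event fields are assumptions rather than Playtika's actual configuration, and the production pipeline described in this discussion predates this exact API.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("pstl-ingest-sketch").getOrCreate()

# Hypothetical game-event schema; the real field names are not in the transcript.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("game", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", LongType()),
])

# Read the event stream from Kafka; each topic partition is consumed in parallel.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka1:9092,kafka2:9092")  # placeholder brokers
       .option("subscribe", "game-events")                            # placeholder topic
       .load())

# Parse the JSON payload into a distributed in-memory table of typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))
```
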
Then, we do parallel transformations, meaning we can restructure the data, do transforms from uppercase to lowercase, whatever we need to do. But it's done in parallel across the cluster as well. Where, traditionally, a single monolithic app was doing this work, we can now run the transformations independently of the extract and the load.

We have so much data that we need to do the transformations in parallel as well. We do that in what are called Resilient Distributed Datasets (RDDs). It's kind of a mouthful, but think of it as just a bunch of slices of data spread across a bunch of computers, your nodes, with transforms done on those slices in parallel. Then, something that has been a dream of mine is how to get all that data, in parallel and at the same time, into HPE Vertica.
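
A toy sketch of that transformation step follows. The records and field names are invented; the point is only that each slice (partition) of the RDD is reshaped independently on its own executor.

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Toy events standing in for the parsed JSON from the ingest step.
events_rdd = sc.parallelize([
    {"user_id": "UserA", "game": "bingo_blitz", "event_type": "SPIN", "ts": "1463000000"},
    {"user_id": "UserB", "game": "slotomania", "event_type": "Purchase", "ts": "1463000010"},
])

def reshape(event):
    # Runs once per record, in parallel across all partitions of the RDD.
    return {
        "user_id": event["user_id"],
        "game": event["game"],
        "event_type": event["event_type"].lower(),  # e.g. normalize case
        "ts": int(event["ts"]),                     # e.g. cast types
    }

reshaped = events_rdd.map(reshape)
print(reshaped.collect())
```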

HPE Vertica does a great job of massively parallel processing (MPP), and all that means is running the query by pulling data off of different nodes in the cluster. Then, maybe you're grouping by this, summing that, and doing an average.

But, to date, they hadn't had something that I tried to do when I was at Twitter and have managed to pull off now, which is to load the data in parallel. While the data is in memory in Spark, in the distributed datasets, we use the Vertica hash function, which tells us exactly where the data will land when we write it to a Vertica node.

We can say, for User A, if I were to write this to Vertica, I know that it's going to go on this machine. User B will go to the next machine. It just distributes the load, but we hash the data into buckets a priori, so that we know, when we actually write the data, that it goes to this node. Then, Vertica doesn't have to move it. Usually you write it to one node and it says, "No, you really belong over here," and so it has to move the data and shuffle it, like a traditional MapReduce shuffle.
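
Conceptually, the pre-hashing step looks something like the sketch below. Vertica's actual HASH() function and segmentation logic are internal to the database, so the stand-in hash and node names here are assumptions, used only to show how rows can be bucketed to their target node before the load so no shuffle is needed.

```python
import hashlib

VERTICA_NODES = ["vertica_node01", "vertica_node02", "vertica_node03"]  # assumed cluster

def target_node(user_id: str) -> str:
    # Stand-in for Vertica's HASH(): map the segmentation key to a node so the
    # row can be sent straight to where Vertica will store it.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % len(VERTICA_NODES)
    return VERTICA_NODES[bucket]

for uid in ["UserA", "UserB", "UserC"]:
    print(uid, "->", target_node(uid))
```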

Working with Vertica

So we created something in conjunction with the Vertica developers, and we announced it. That part of it is a kind of TCP server aspect that extends the Copy command that exists in Vertica itself. We literally go from streaming in parallel, reading into in-memory data structures, doing the transformations, and then writing the data directly from memory into our Vertica data warehouse.

That allows us to get the data in as fast as possible, from the stream straight through to the write. We don't have to hit a disk along the way, and we can do analytics in Vertica sooner. We can also do analytics in Hadoop clusters for older data and do machine learning on that. We can do all kinds of things based on historical user behavior.
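
The loader described here is a custom extension of Vertica's Copy command built with the Vertica team, so it can't be reproduced from the transcript; the sketch below only shows the plain COPY-from-memory idea using the open-source vertica-python client, with placeholder connection details and a toy table.

```python
import vertica_python

conn_info = {"host": "vertica_node01", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "analytics"}  # placeholders

# In-memory rows (in practice, one such stream per pre-hashed Spark partition).
rows = ("UserA,bingo_blitz,spin,1463000000\n"
        "UserB,slotomania,purchase,1463000010\n")

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # DIRECT writes straight to Vertica's on-disk storage, skipping the WOS.
    cur.copy("COPY game_events FROM STDIN DELIMITER ',' DIRECT", rows)
    conn.commit()
```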

If we're doing a sale or something like that, how well is it resonating compared with the past? What we're doing is pushing the envelope to get the analytics as close as we can to the actual game itself.

As I said, traditionally, you do the analytics, get the feedback, change the game, release it in a week, and so on. We're going to try to push that all the way up to be as near real time as we can. Basically, the PSTL pipeline allows us to do that, do analytics, and tighten that loop down so that we can respond to user behavior and get changes back to the user as fast as possible.
Once you're getting the data in as fast as you can, reshaping it while it's in memory, which of course is faster, and taking advantage of the parallel transformations and the parallel loading at the same time, it's just a far more optimized solution.

Gardner: It’s intriguing. It sounds as if you're able, with a common architecture, to do multiple types of analysis readily but without having to reshuffle the deck chairs each time. Is that fair?

Gudenkauf: That's exactly right. That’s the beauty of this model and why I'm putting up more prescriptive guidance around it. It changes the paradigm of the traditional way of processing data.

We announced some benchmarking. Last year at the HPE Big Data Conference, Facebook stole the show with 36 terabytes an hour on 270 machines. With our model, you could do it with about 80 machines. So it scales very well. Some people say, "We're not Twitter or Facebook scale, but the speed at which we want to consume the data and make it available for analytics is extremely important to us."

The less busy the machines are, the more you can do with them. So does it need to scale like that? No, we're not processing as much data, but the volume, velocity, and variety are a big deal for us. We do need to process the volume, and we do have a lot of events. The volume is not insignificant -- we're talking about billions of events, mind you. We're not on the sheer scale of, say, Twitter or Facebook, but the solution will work in both scenarios.

Gardner: So, Jack, with this capability -- analysis close to real time at the volume and variety you're able to handle -- while this is a great opportunity for you to react in a gaming environment, you're also pushing the envelope on what analysis and reaction can happen around almost any human behavior at scale. In this case, it happens to be gaming, but there are probably other applications for this. Have you thought about that, or are there other places you can take it within an interactive entertainment environment?

All kinds of solutions

Gudenkauf: I can imagine all kinds of solutions for it. In fact, I've had a number of people come up to me and say, "We're doing this for the Chicago Stock Exchange, and we have a massive amount of data streaming in. This is a perfect solution for that."

I've had other people come and talk to me about other aspects and other games as well that are not in the social casino genre, but they have the same problem. It's the traditional problem of how to ingest data, massage it, load it, and then have analytics through that entire process. It's applicable in really any scenario. That's one of the reasons I'm so excited about the PSTL model; it just scales extremely well along the way.

Gardner: Let’s relate this back to this particular application, which is highly entertaining games that react, and maybe even start pushing the envelope into anticipating what people will want in a game. What’s the next step for making these types of games engaging? I'm even starting to toy with the concept of artificial intelligence (AI), where people wouldn’t know that it’s a game. They might not even know the difference between the game and other social participants. Are we getting anywhere close to that?

Gudenkauf: You're thinking very clearly about the analytics spectrum in general. Before, it was just general reporting in the feedback loop, but you're absolutely right. As you can see, it's enabled through our model of prescriptive analytics. Looking at historical data and doing machine learning, we can make better determinations about games and game behavior that will drive the game based on historical knowledge, or on incoming data, which is more predictive analytics.

Then, as you say, maybe even into the future, beyond predictive and prescriptive analytics, we can change almost as rapidly as possible. We know the user behavior before the user knows the behavior. That will be a great world, and I'm sure we would be extremely successful if we get to that end of the spectrum. But just doing the prescriptive analytics alone, so that the user is happy with the game and we can get that back to them as quickly as possible, is big in and of itself.

Gardner: So maybe a new game some day will be to pass the Turing Test -- you against our analysis capabilities?

Gudenkauf: Yeah, that would be pretty cool. Maybe eventually it will tie into the whole virtual-reality world, with things happening immediately based on the information and behaviors. That will be neat.

Gardner: Very exciting world coming our way, right? We're only scratching the surface. I guess I have run out of questions because my mind is reeling at some of these possibilities.

One last area though. For a platform like HPE Vertica, what would you like to see them do intrinsic to the product? We have the announcement recently about the next version of Vertica, but what might be on your list, a wish-list if you will, for what should be in the product to allow this sort of thing to happen even more readily?

Influencing the product

Gudenkauf: That’s one of the reasons we go to conferences. It’s one of the few conferences where you can get to the actual developers or professional services and influence the product itself.

One of the reasons why I like to be on the leading edge or bleeding edge is so that we can affect product development and what they are working on. I've been fortunate enough to be able to work with developers and people internal to HPE Vertica for quite a while now. I just love the product and I want to see it be successful. With their adoption of, and greater openness to working with, open source like Spark and MapReduce, the whole ecosystem works well together, as opposed to the pieces opposing each other, which I think is what most people expect. It's a very collaborative, cooperative environment, especially through our pipeline.

I really like the fact that when I talk about things like Kafka and the PSTL, and that Spark is a core part of our architecture, now we're having conversations, and lots of them, to help Vertica and influence them to invest more in Spark and in the interaction among the Vertica data warehouse, Spark, and that ecosystem, starting from Kafka.

From the work that we did with Vertica over the last year -- reading streaming data from Kafka into Spark and then into Vertica -- they decided that reading real-time streaming data from Kafka directly into HPE Vertica would be a great add-on, and they announced it. Ben Vandiver and the developers announced it.

I really want to be in a place, and this affords us that place, to influence where they're going, because it benefits all of us and the entire community. It's also about being able to give them prescriptive guidance from the customer perspective, because this is what we're doing in the real world, of course. They want to make us happy, and we will make them happy.

Our investments have been in things like Kafka streaming and Spark, and in how Spark SQL works with Vertica and VSQL. They don't necessarily have to compete; there is a world for both. So coexisting, influencing that, and having them be receptive to it is amazing. A lot of companies aren't very receptive to taking feedback from us as consumers and baking it into their offerings.

One of the things in our model, to load the data as fast as possible in parallel, is that we pre-hash the data. If you just take user IDs, for instance, and you hash on those IDs so that you can put this user on this node, that one on that node, and so on, you get an even distribution of data. That hash function wasn't exposed in Vertica. I've been asking for it since the Twitter days, for years.

So we wrote our own version of it. I managed to have the Vertica developers, which is a rare and great opportunity, review what we had done. They said, "Yes, that's spot on. That's exactly the implementation." I said, "You know what would be even better? I've been asking for this for years and I know you have lots of other customers. Why don't you just make it available for everybody to use? Then, I don't have to use mine and everybody else can benefit from it as well."

They announced in 2015 that they're going to make it available. So being able to influence things like that just helps the whole ecosystem.

Gardner: Excellent. I'm afraid we'll have to leave it there. We've been exploring how Playtika uses big-data analytics to deliver captivating social gaming experiences and engagement for its end users, but we've also seen that the company has a tremendous amount of data science going on and an architectural approach to conquer some of these hurdles around volume, velocity, and variety that I think is probably applicable in many other cutting-edge applications.

So a big thank-you to our guest. We've been here with Jack Gudenkauf, Vice President of Big Data at Playtika in Santa Monica, California. Thanks so much, Jack.

Gudenkauf: Thank you. It was a pleasure.

Gardner: And a big thank you to our audience as well for joining us for this big data innovation case study discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored Voice of the Customer discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Playtika uses data science and a unique architectural approach to conquer big data hurdles around volume, velocity, and variety of data. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.

Tuesday, December 08, 2015

Need for Fast Analytics in Healthcare Spurs Sogeti Converged Solutions Partnership Model

Transcript of a BriefingsDirect discussion on how a triumvirate of big players have teamed to deliver a rapid and efficient analysis capability for healthcare data.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HPE Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big-data discussion explores how a triumvirate of big-data players have teamed to deliver a rapid and efficient analysis capability across disparate data types for the healthcare industry. We'll learn how the drive for better patient outcomes amid economic efficiency imperatives has created a demand for a new type of big-data implementation model.

This model, with support from Hewlett Packard Enterprise, Microsoft, and Sogeti, leverages a nimble big-data platform, converged solutions, hybrid cloud, and deep vertical industry expertise. The result is innovative and game-changing insights across healthcare ecosystems of providers, patients, and payers.

The ramp-up to these novel insights is rapid, and the cost-per-analysis value is very aggressive. Here to share the story on how the Data-Driven Decisions for Healthcare initiative arose and why it portends more similar vertical industry focused solutions, we're joined by Bob LeRoy, Vice President in the Global Microsoft Practice and Manager of the HPE Alliance at Sogeti USA. He's based in Cincinnati. Welcome Bob.
Bob LeRoy: Hi, Dana. Thanks for inviting me. Glad to be here.

Gardner: Why the drive for a new model for big data analytics in healthcare? What are some of the drivers, some of the trends, that have made this necessary now?

LeRoy: Everybody is probably very familiar with the Affordable Care Act (ACA), also known as ObamaCare. They've put a lot of changes in place for the healthcare industry, and primarily it's around cost containment. Beyond that, the industry itself understands that they need to improve the quality of care that they're delivering to patients. That's around outcomes, how can we affect the care and the wellness of individuals.

So it’s around cost and the quality of the care, but it’s also about how the industry itself is changing, both in how providers are now doing more with payments and in how classic payers are doing more to actually provide care themselves. There's a blurring of the lines between payer and provider.

Some of these people are actually becoming what we call accountable care organizations (ACOs). We see a new one of these ACOs come up each week, where they are both payer and provider.

Gardner: Not only do we have a dynamic economic landscape, but the ability to identify what works and what doesn't work can really be important, especially when dealing with multiple players and multiple data types. This is really not just knowing your own data; this is knowing data across organizational boundaries.

LeRoy: Exactly. And there are a lot of different data models that exist. When you look at things like big data and the volume of data that exists out in the field, you can put that data to use to understand who your critical patients are and how that can affect your operations.

Gardner:  Why do we look to a triangulated solution between players like Hewlett Packard Enterprise, Microsoft, and Sogeti? What is it about the problem that you're trying to solve that has led it to a partnership type of solution?

Long-term partner

LeRoy: Sogeti, a wholly-owned subsidiary of the Capgemini Group, has been a long-term partner with Microsoft. The tools that Microsoft provides are one of the strengths of Sogeti. We've been working with HPE now for almost two years, and it's a great triangulation between the three companies. Microsoft provides the software, HPE provides the hardware, and Sogeti provides the services to deliver innovative solutions to customers and do it in a rapid way. What you're getting is best in class in all three of those categories -- the software, the hardware, and the services.

Gardner: There's another angle to this, too, and it’s about the cloud delivery model. How does that factor into this? When we talked about hardware, it sounds like there's an on-premises aspect to it, but how does the cloud play a role?

LeRoy: Everybody wants to hear about the cloud, and certainly it’s important in this space, too, because of the type of data that we're collecting. You could consider social data or data from third party software-as-a-service (SaaS) applications, and that data can exist everywhere.

You have your on-premise data and you have your off-premise data. The tools that we're using, in this case from HPE and Microsoft, really lend themselves well to developing a converged environment to deliver best in class across those different environments. They're secure, delivered quickly, and they provide the information and the insights that hospitals and insurance companies really need.

Gardner: So we have a converged solution set from HPE. We have various clouds that we can leverage. We have great software from Microsoft. Tell us a little about Sogeti and what you're bringing to the table. What is it that you've been doing in healthcare that helps solidify this solution and the rapid analysis requirements?

LeRoy: This is one of the things that Sogeti brings to the table. Sogeti is part of the Capgemini Group, a global organization with 150,000 employees, and Sogeti is one of the five strategic business units of the group. Sogeti's strength is that we're really focused on the technology and the implementation of technology, and we're focused on several different verticals, healthcare being one of them.

We have experts on the technology stacks, but we also have experts in healthcare itself. We have people who we've pulled from the healthcare industry. We taught them what we do in the IT world, so they can help us focus best practices and technologies on solving real healthcare organizational problems, so that we can get toward the quality of care and the cost reduction that the ACA is really looking for. That's a real strength that's going to add significant value to healthcare organizations.

Gardner: It’s very important to see that one size does not fit all when it comes to these systems. Having industry verticalization is required, and you're embarking on a retail equivalent to this model, and manufacturing and other sectors might come along as well.

Let's look at why this approach to this problem is so innovative. What have been some of the problems that have held back the ability of large and even mid-sized organizations in the healthcare vertical industry from getting these insights? What are some of the hurdles that they've had to overcome and that perhaps beg for a new and different model and a new approach?

Complexity of data

LeRoy: There are a couple of factors. For sure, it's the complexity of the data itself. The data is distributed over a wide variety of systems. So it's hard to get a full picture of a patient or a certain care program, because the systems are spread out all over the place. When the data ends up with you from so many different systems in so many different ways, you get part of the data, not the full picture. We call that poor data quality, and it makes it hard for somebody who's doing analysis to really understand and gain insight from the data.

Of course, there's also the existing structure that’s in place within organizations. They've been around for a long time. People are sometimes resistant to change. Take all of those things together and you end up with a slow response time to delivering the data that they're looking for.

Access to the data becomes very complex or difficult for an end-user or a business analyst. The cost of changing those structures can be pretty expensive. If you look at all those things together, it really slows down an organization’s ability to understand the data that they've got to gain insights about their business.

Gardner: Just a few years ago, when we used to refer to data warehouses, it was a fairly large undertaking. It would take months to put these into place, they required a data center or some sort of leasing arrangement, and of course there was a significant amount of upfront cost. How has this new model approached those cost and ramp-up time issues?

LeRoy: Microsoft’s model that they have put in place to support their Analytics Platform System (APS) allows them to license their tools at a lower price. The other thing that's really made a difference is the way HPE has put together their ConvergedSystem that allows us to tie these hybrid environments together to aggregate the data in a very simple solution that provides a lot of performance.

If I have to look at unstructured data and structured data, I often need two different systems. HPE is providing a box that’s going to allow me to put both into a single environment. So that’s going to reduce my cost a lot.

They have also delivered it as an appliance, so I don't need to spend a lot of time buying, provisioning, or configuring servers, setting up software, and all those things. I can just order this ConvergedSystem from HPE, put it in my data center, and I'm almost ready to go. That's the second thing that really helps save a lot of time.

The third one is that at Sogeti Services, we have some intellectual property (IP) to help with the data integration from these different systems and the aggregation of the data. We've put together some software and some accelerators to help make that integration go faster.

The last piece of that is a data model that structures all this data into a single view that makes it easier for the business people to analyze and understand what they have. Usually, it would take you literally years to come up with these data models. Sogeti has put all the time into it, created these models, and made it something that we can deliver to a customer much faster, because we've already done it. All we have to do is install it in your environment.

It's those three things together -- the software pricing from Microsoft, the appliance model from HP, and the IP and the accelerators that Sogeti has.

Consumer's view

Gardner: Bob, let’s look at this now through the lens of that consumer, the user. It wasn’t that long ago where most of the people doing analytics were perhaps wearing white lab coats, very accomplished in their particular query languages and technologies. But part of the thinking now for big data is to get it into the hands of more people.

What is it that your model, this triumvirate of organizations coming together for a solution approach, does in terms of making this data more available? What are the outputs, who can query it, and how has that had an impact in the marketplace?

LeRoy: We've been trying to get this to the end users for 30 years. I've been trying to get reports into the hands of users and let them do their own analysis, and every time I get to a point where I think this is the answer -- the users are going to be able to do their own reports, which frees up guys in the IT world like me to go off and do other things -- it doesn't always work.

This time, though, it's really interesting. I think we've got it. We allow the users direct access to the data, using the tools that they already know. So I'm not going to create and introduce a new tool to them. We're using tools that are very similar to Excel, pointing to a data source that's already well organized for them, with data that they're already familiar with.

So if they're using Microsoft Excel-like tools, they can do the Power Pivots and pivot tables that they've already been doing, but previously only in an offline manner. Now, I can give them direct access to real-time data.

Instead of waiting until noon to get reports out, they can go and look online and get the data much sooner, so we can accelerate their access time to it, but deliver it in a format that they're comfortable with. That makes it easier for them to do the analysis and gain their insights without the IT people having to hold their hands.

Gardner: Perhaps we have some examples that we can look to that would illustrate some of this. You mentioned social media, the cloud-based content or data. How has that come to bear on some of these ways that your users are delivering value in terms of better healthcare analytics?

LeRoy: The best example I have is the ability to bring in data that's not in a structured format. We often think of external data, but sometimes it's internal data, too -- maybe x-rays or people doing queries on the Internet. I can take all of that unstructured data and correlate it to my internal electronic medical records or the health information systems that I have on-premise.

If I'm looking at Google searches, and people are looking for keywords such as "stress," "heart attacks," or "cardiac care," I can map the times when people are running those kinds of queries by region. I can tie that back to my systems and ask what the behavior or traffic patterns look like within my facility at those same times. I can target certain areas to maybe change my staffing model if there's a big jump in searches, run a campaign to ask people to come in for a screening, or encourage people to get to their primary-care physicians.

There are a lot of things we can do with the data by looking just at the patterns. It will help us narrow down the areas of our coverage that we need to work with, what geographic areas I need to work on, and how I manage the operations of the organization, just by looking at the different types of data that we have and tying them together. This is something that we couldn't do before, and it’s very exciting to see that we're able to gain such insights and be able to take action against those insights.
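
As a small, purely illustrative example of that kind of correlation, the pandas sketch below joins weekly search-trend counts against admissions logged in an internal system for one region. The numbers and column names are invented.

```python
import pandas as pd

# Hypothetical weekly search-trend volume for cardiac-related keywords, by region.
searches = pd.DataFrame({
    "week": ["2015-W40", "2015-W41", "2015-W42", "2015-W43"],
    "region": ["Ohio"] * 4,
    "cardiac_search_volume": [1200, 1850, 2400, 2100],
})

# Hypothetical admissions pulled from the internal health information system.
admissions = pd.DataFrame({
    "week": ["2015-W40", "2015-W41", "2015-W42", "2015-W43"],
    "region": ["Ohio"] * 4,
    "cardiac_admissions": [85, 110, 140, 125],
})

joined = searches.merge(admissions, on=["week", "region"])
# A strong positive correlation would suggest search spikes foreshadow demand,
# informing staffing or screening campaigns for that region.
print(joined["cardiac_search_volume"].corr(joined["cardiac_admissions"]))
```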

Applying data science

Gardner: I can see now why you're calling it the Data Driven Decisions for Healthcare, because you're really applying data science to areas that would probably never have been considered for it before. People might use intuition or anecdote or deliver evidence that was perhaps not all that accurate. Maybe you could just illustrate a little bit more ways in which you're using data science and very powerful systems to gather insights into areas that we just never thought to apply such high-powered tools to before.

LeRoy: Let’s go back to the beginning, when we talked about how we change the quality of care that we're providing. Today, doctors collect diagnosis codes for just about every procedure that's done. We don't really look and see how many times those same procedures are repeated or which doctors are performing which procedures. Let's look at the patients, too, and which patients are getting those procedures. So we can tie those diagnosis codes together in a lot of different ways.

The one that I think I probably would like the best is that I want to know which doctors perform those procedures only once per patient and have the best results come from the treatments that that doctor performs. Now, if I'm from a hospital, I know which doctors perform which procedures the best and I can direct the patients that need those procedures to those doctors that provide the best care.

And the reverse of that might be that if a doctor doesn't perform that procedure well, let's avoid sending him those kinds of patients. Now, my quality of care goes up, the patient has a better experience, and we're going to do it at a lower cost because we're only doing it once.
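
A toy version of that analysis might look like the sketch below: group a claims extract by doctor, compute how often the procedure is repeated per patient, and rank by outcome. The table, columns, and outcome flag are hypothetical.

```python
import pandas as pd

# Hypothetical claims extract: one row per procedure performed.
claims = pd.DataFrame({
    "doctor":       ["Dr. A", "Dr. A", "Dr. B", "Dr. B", "Dr. B", "Dr. C"],
    "patient":      ["p1",    "p2",    "p3",    "p3",    "p4",    "p5"],
    "diagnosis":    ["I21"] * 6,
    "good_outcome": [1, 1, 0, 1, 0, 1],
})

per_doctor = claims.groupby("doctor").agg(
    procedures_per_patient=("patient", lambda s: len(s) / s.nunique()),
    success_rate=("good_outcome", "mean"),
)

# Doctors who rarely repeat the procedure and have the best outcomes rank first,
# which is where you would steer the patients who need that procedure.
print(per_doctor.sort_values(["procedures_per_patient", "success_rate"],
                             ascending=[True, False]))
```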

Gardner: Let’s dive into this solution a bit, because I'm intrigued by the fact that this model works -- bringing together a converged-infrastructure provider, a software provider, and expertise in the field that crosses the chasm between a technology capability and a vertical-industry knowledge base. So let's dig in a little bit. The Microsoft APS -- tell us a little bit about what it includes and why it's powerful and applicable in this situation.

LeRoy: The APS is a solution that combines unstructured data and structured data into a single environment and it allows the IT guys to run classic SQL queries against both.

On one side, we have what used to be called parallel data warehouse. It’s a really fast version of SQL Server. It's massively parallel processing and it can run queries super fast. That’s the important part. I have structured data that I can get to very quickly.

The other half of it is HDInsight, which is Microsoft's open source implementation of Hadoop. Hadoop is all unstructured data. In between these two things there is PolyBase. So I can query the two together and I can join structured and unstructured data together.

Then, since Microsoft created this APS specification, HPE implemented it in a box that they call the ConvergedSystem 300, and Sogeti has built our IP against that. We can consume data from all these different areas, put it into the APS, and deliver that data to an end user through a simple interface like Excel or Power BI or some other visualization tool.
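
To illustrate the kind of query PolyBase makes possible, here is a hedged sketch: a T-SQL join between a relational table and an external table over Hadoop data, submitted from Python over ODBC. The table names, columns, and connection string are invented placeholders, not part of Sogeti's actual model.

```python
import pyodbc

# Placeholder connection string for the APS SQL endpoint.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=aps-appliance;"
    "DATABASE=Healthcare;Trusted_Connection=yes"
)

query = """
SELECT e.PatientId, e.AdmitDate, n.NoteText
FROM   dbo.Encounters AS e            -- structured side (SQL Server PDW)
JOIN   dbo.ClinicalNotes_Hdfs AS n    -- external table over HDInsight/Hadoop data
       ON e.PatientId = n.PatientId
WHERE  e.DiagnosisCode = 'I21';
"""

for row in conn.cursor().execute(query):
    print(row.PatientId, row.AdmitDate)
```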

Significant scale

Gardner: Just to be clear for our audience, sometimes people hear appliance and they don't necessarily think big scale, but the HPE ConvergedSystem 300 for the Microsoft APS is quite significant, with server, storage, and networking technologies and large amounts of data, up to 6 petabytes. So we're talking about some fairly significant amounts of data here, not small fry.

LeRoy: And they put everything into that one rack. We think of an appliance as something like a toaster that we plug in. That's pretty close to where they are -- not exactly, but you drop this big rack into your data center, give it an IP address, give it some power, and now you can start to take existing data and put it in there. It runs extremely well because they've incorporated the networking, the computing platform, and the storage all within a single environment, which is really effective.

Gardner: Of course, one of the big initiatives at Microsoft has been cloud with Azure. Is there a way in which the HPE Converged Infrastructure in a data center can be used in conjunction with a cloud service like Azure or another public cloud, infrastructure-as-a-service (IaaS) cloud, or even data-warehousing cloud service, in a way that accelerates the ability to deliver this quickly and/or makes it more inclusive of more types of data from more places? How does the public cloud fit into this?

LeRoy: You can distribute the solution across that space. In fact, we take advantage of the cloud delivery as a model. We use a tool called Power BI from Microsoft that allows you to do visualizations.

The system from HPE is a hybrid solution. So we can distribute it. Some of it can be in the cloud and some of it can be on-prem. It really depends on what your needs are and how your different systems are already configured. It’s entirely flexible. We can put all of it on-prem, in a single rack or a single appliance or we can distribute it out to the cloud.

One of the great things about the solution that Microsoft and HPE put together is it’s very much a converged system that allows us to bridge on-prem and the cloud together.

Gardner: And of course, Bob, those end users that are doing those queries, that are getting insights, they probably don’t care where it's coming from as long as they can access it, it works quickly, and the costs are manageable.

LeRoy: Exactly.

Gardner: Tell me a little bit about where we take this model next -- clearly healthcare, big demand, huge opportunity to improve productivity through insights, improve outcomes, while also cutting costs.

You also have a retail solution approach in that market, in that vertical. How does that work? Is that already available? Tell us a little bit about why the retail was the next one you went to and where it might go next in terms of industries?

Four major verticals

LeRoy: Sogeti is focused on four major verticals: healthcare, retail, manufacturing, and life sciences. So we are kind of going across where we have expertise.

The healthcare one has been out now for nine months or so. We see retail in a different place. There are point solutions where people have solved part of this equation, but they haven't really dug deep into understanding how to get it from end to end, which is something that Sogeti has now done. From the point a person walks into a store, we would be alerted through all of these analytics that the person has arrived, and we can take action against that.

We do what we can to increase our traffic and our sales with individuals and then aggregate all of that data. You're looking at things like customers, inventory, or sales across an organization. That end-to-end piece is something that I think is very unique within the retail space.

After that, we're going to go to manufacturing. Everybody likes to talk about the Internet of Things (IoT) today. We're looking at some very specific use cases on how we can impact manufacturing so IoT can help us predict failures right on a manufacturing line. Or if we have maybe heavy equipment out on a job site, in a mine, or something like that, we could better predict when equipment needs to be serviced, so we can maximize the manufacturing process time.

Gardner: Any last thoughts in terms of how people who are interested in this can acquire it? Is this something that is being sold jointly through these three organizations, through Sogeti directly? How is this going to market in terms of how healthcare organizations can either both learn more and/or even experiment with it?

LeRoy: The best way to do it is search for us online. It's mostly being driven by Sogeti and HPE. Most of the healthcare providers that are also heavy HPE users could be aware of it already, and talking to an HPE rep or to a Sogeti rep is certainly the easiest path to move forward on.

We have a number of videos that are out on YouTube. If you search for Sogeti Labs and Data Driven Decisions, you will certainly find my name and a short video that shows it. And of course sales reps and customers are welcome to contact me or anybody from Sogeti or HP.
Gardner: Once again, the official name of this initiative is the Data Driven Decisions for Healthcare. I'm afraid we will have to leave it there. We've been discussing how a triumvirate of big players -- Hewlett Packard Enterprise, Microsoft, and Sogeti -- have teamed to deliver a rapid and efficient analysis capability across disparate data types for the healthcare industry.

And we've learned how this new type of big data implementation model quickly and affordably delivers innovative and game changing insights across ecosystems of providers, patients and payers in healthcare, and it looks like it’s going to soon be doing interesting productivity benefits for retail, manufacturing, and life sciences as well.

So join me please in thanking our guest, Bob LeRoy, Vice President in the Global Microsoft Practice and Manager of the HPE Alliance at Sogeti USA, based in Cincinnati. Thank you, Bob.

LeRoy: Thanks, Dana.

Gardner: And I'd also like to thank our audience as well for joining us for this big data innovation discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how a triumvirate of big players have teamed to deliver a rapid and efficient analysis capability for healthcare data. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Tuesday, November 17, 2015

Spirent Leverages Big Data to Keep User Experience Quality a Winning Factor for Telcos

Transcript of a discussion on the use of big data to provide improved user experiences for telecommunications operators' customers.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.

Our next big-data case study discussion explores the ways that Spirent Communications advances the use of big data to provide improved user experiences for telecommunications operators.

We'll learn how advanced analytics that draws on multiple data sources provides Spirent's telco customers with rapid insights into their networks and operations. That insight, combined with analysis of user actions and behaviors, provides a "total picture" approach to telco services and usage that both improves the actual services proactively -- and also boosts the ability to better support help desks.

Spirent’s insights thereby help operators in highly competitive markets reduce the spend on support, reduce user churn, and better adhere to service-level agreements (SLAs), while providing significant productivity gains.
To hear how Spirent uses big data to make major positive impacts on telco operations, we're joined by Tom Russo, Director of Product Management and Marketing at Spirent Communications in Matawan, New Jersey. Welcome, Tom.

Tom Russo: Hi, Dana. Thanks for having me.

Gardner: User experience quality enhancement is essential, especially when we're talking about consumers that can easily change carriers. Controlling that experience is more challenging for an organization like a telco. They have so many variables across networks. So at a high-level, tell me how Spirent masters complexity using big data to help telcos maintain the best user experience.

Russo: Believe it or not, historically, operators haven't actually managed their customers as much as they've managed their networks. Even within the networks, they've done this in a fairly siloed fashion.

There would be radio performance teams that would look at whether the different cell towers were operating properly, giving good coverage and signal strength to the subscribers. As you might imagine, they wouldn't talk to the core network people, who would make sure that people can get IP addresses and properly transmit packets back and forth. They had their own tools and systems, which were separate, yet again, from the services people, who would look at the different applications. You can see where it’s going.

There were also customer-care people, who had their own tools and systems that didn’t leverage any of that network data. It was very inefficient, and not wrapped around the customer or the customer experience.

New demands

They sort of got by with those systems when the networks weren't running too hot. When competition wasn't too fierce, they could get away with that. But these days, with their peers offering better quality of service, over-the-top threats, increasing complexity on the network in terms of devices, and application services, it really doesn't work any more.

It takes too long to troubleshoot real customer problems. They spend too much time chasing down blind alleys in terms of solving problems that don't really affect the customer experience, etc. They need to take a more customer-centric approach. As you’d imagine that’s where we come in. We integrate data across those different silos in the context of subscribers.

We collect data across those different silos -- the radio performance, the core network performance, the provisioning, the billing etc. -- and fuse it together in the context of subscribers. Then, we help the operator identify proactively where that customer experience is suffering, what we call hotspots, so that they can act before the customers call and complain, which is expensive from a customer-care perspective and before they churn, which is very expensive in terms of customer replacement. It's a more customer-centric approach to managing the network.
Gardner: So your customer experience management does what your customers had a difficult time doing internally. But one aspect of this is pulling together disparate data from different sources, so that you can get the proactive inference and insights. What did you do better around data acquisition?

Russo: The first key step is being able to integrate with a variety of these different systems. Each of the groups had their different tools, different data formats, different vendors.

Our solution has a very strong capability for what we call extract, transform, load (ETL), or data mediation, to pull all these different data sources together and map them into a uniform model of the telecom network and the subscriber experience.

This allows us to see the connections between the subscriber experience, the underlying network performance, and even things like outcomes -- whether people churn, whether they provide negative survey responses, whether they've called and complained to customer care, and so on.

Then, with that holistic model, we can build high-level metrics like quality-of-experience scores, predictive models, and so on to look across those different silos and help the operators see where the hot spots of customer dissatisfaction are, where people are eventually going to churn, or where other costs are going to be incurred.
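
As a toy illustration of fusing those silos into one subscriber-level score, the sketch below combines radio, core-network, and billing metrics into a single quality-of-experience number. The weights, thresholds, and column names are invented; Spirent's actual scoring model isn't described in the transcript.

```python
import pandas as pd

# Hypothetical per-subscriber metrics fused from the different silos.
subscribers = pd.DataFrame({
    "subscriber":       ["s1", "s2", "s3"],
    "radio_drop_rate":  [0.01, 0.08, 0.02],   # radio-performance silo
    "core_latency_ms":  [40, 180, 55],        # core-network silo
    "billing_disputes": [0, 2, 0],            # billing/provisioning silo
})

def qoe(row):
    score = 100.0
    score -= row["radio_drop_rate"] * 500                  # dropped sessions hurt most
    score -= max(0, row["core_latency_ms"] - 100) * 0.2    # penalize latency above 100 ms
    score -= row["billing_disputes"] * 10
    return max(score, 0.0)

subscribers["qoe_score"] = subscribers.apply(qoe, axis=1)

# The lowest scores are the "hotspots" worth proactive attention before the
# customer calls to complain or churns.
print(subscribers.sort_values("qoe_score"))
```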

Gardner: Before we go more deeply into this data issue, tell me a bit more about Spirent. Is the customer experience division the only part? Tell me about the larger company, just so we have a sense of the breadth and depths of what you offer.

World leader

Russo: Most people, at least in telecom, know Spirent as a lab vendor. Spirent is one of the world leaders in the markets for simulating, emulating, and testing devices, network elements, applications, and services, as they go from the development phase to the launch phase in their lifecycle. Most of their products focus on that, the lab testing or the launch testing, making sure that devices are, as we call it, "fit for launch."

Spirent has historically had less of a presence in the live network domain. In the last year or two, they’ve made a number of strategic acquisitions in that space. They’ve made a number of internal investments to leverage the capabilities and knowledge base that they have from the lab side into the live network.

One of those investments, for example, was the acquisition back in early 2014 of DAX Technologies, a leading customer experience management vendor. That acquisition, plus some additional internal investments, has led to the growth of our Customer Experience Management (CEM) Business Unit.

Gardner: Tom, tell me some typical use cases where your customers are using Spirent in the field. Who are those that are interacting with the software? What is it that they're doing with it? What are some of the typical ways in which it’s bringing value there?

Russo: Basically, we have two user bases that leverage our analytics. One is the customer-care groups. What they're trying to do is obtain, very quickly, a 360-degree view of the experience of a subscriber who is calling in and complaining about their service, and of the root causes of the problems they might be having with their services.

If you think about the historic operation, this was a very time-intensive, costly process, because they would have to swivel-chair, as we call it, between a variety of different systems and tools, trying to figure out whether the subscriber had a network-related issue, a provisioning issue, a billing issue, or something else. These all could potentially take hours, even hundreds of hours, to resolve.

With our system, the customer-care groups have one single pane of glass, one screen, to see all aspects of the customer experience to very quickly identify the root causes of issues that they are having and resolve them. So it keeps customers happier and reduces the cost of the customer-care operation.

The second group that we serve is on the engineering side. We're trying to help them identify hotspots of customer dissatisfaction on the network, whether that be in terms of devices, applications, services, or network elements so that they can prioritize their resources around those hotspots, as opposed to noisy, traditional engineering alarms. The idea here is that this allows them to have maximal impact on the customer experience with minimal costs and minimal resources.
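
As an illustration of that prioritization idea -- ranking hotspots by how many subscribers are actually affected rather than by raw alarm counts -- a small Python sketch might look like the following; the columns, threshold, and data are assumptions, not Spirent's schema.

```python
import pandas as pd

# Hypothetical per-session records, already mediated into a uniform model
sessions = pd.DataFrame({
    "cell_id":       ["A1", "A1", "B7", "B7", "B7", "C3"],
    "subscriber_id": ["s1", "s2", "s3", "s4", "s5", "s1"],
    "qoe_score":     [92, 88, 41, 37, 55, 78],
})

POOR_QOE = 60  # assumed threshold for a "dissatisfied" session

hotspots = (
    sessions.assign(poor=sessions["qoe_score"] < POOR_QOE)
    .groupby("cell_id")
    .agg(affected_subscribers=("subscriber_id", "nunique"),
         poor_sessions=("poor", "sum"),
         avg_qoe=("qoe_score", "mean"))
    .sort_values(["poor_sessions", "avg_qoe"], ascending=[False, True])
)
print(hotspots)  # B7 surfaces first: most poor sessions, lowest average QoE
```

The point of ranking this way is that the cell, device, or application affecting the most subscribers rises to the top, regardless of how many low-level alarms it happens to generate.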

Gardner: You recently rolled out some new and interesting services and solutions. Tell us a little bit about that.

Russo: We've rolled out the latest iteration of our InTouch solution, our flagship product. It's called InTouch Customer and Network Analytics (CNA), and it addresses feedback that we've received from customers about what they want in an analytics solution.

We're hearing that they want to be more proactive and predictive: don't just tell me what's going on right now, what's gone on historically, and how things have trended, but help me understand what's going to happen moving forward -- where customers are going to complain, and where the network is going to experience performance problems in the future. That's an increasing area of focus for us and something that we've embedded to a great degree in the InTouch CNA product.

More flexibility

Another thing that they've told us is that they want to have more flexibility and control on the visualization and reporting side. Don't just give me a stock set of dashboards and reports and have me rely on you to modify those over time. I have my own data scientists, my own engineers, who want to explore the data themselves.

We've embedded Tableau business intelligence (BI) technology into our product to give them maximum flexibility in terms of report authorship and publication. We really like the combination of Tableau and Hewlett Packard Enterprise (HPE) Vertica because it lets them do those ad-hoc reports and still get good performance through the Vertica database.
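
For readers curious what the Vertica side of such an ad-hoc query could look like, here is a minimal sketch using the open-source vertica-python client; the table, columns, and connection details are invented placeholders rather than anything from the InTouch product.

```python
import vertica_python

conn_info = {
    "host": "vertica.example.com",  # placeholder connection details
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "cem",
}

# Hypothetical query: worst-performing device models by average QoE over the last day
query = """
    SELECT device_model,
           AVG(qoe_score)                AS avg_qoe,
           COUNT(DISTINCT subscriber_id) AS subscribers
    FROM subscriber_experience
    WHERE event_time > NOW() - INTERVAL '1 day'
    GROUP BY device_model
    ORDER BY avg_qoe ASC
    LIMIT 20
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(query)
    for device_model, avg_qoe, subscribers in cur.fetchall():
        print(device_model, round(avg_qoe, 1), subscribers)
```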

And another thing that we're doing more and more is what we call Closed Loop Analytics. It's not just identifying an issue or a customer problem on the network; it's also being able to trigger an action. We have an integration and partnership with another business unit in Spirent called Mobilethink that can change device settings, for example.

If we see that a device is mis-provisioned, we can send an alert to Mobilethink, and they can re-provision the device to correct something like a mis-provisioned access point name (APN) and resolve the problem. Then, we can use our system to confirm that the fix was indeed made and that the experience has improved.
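
Purely as a sketch of that closed-loop idea -- the device-management call shown here is a stand-in, not a real Mobilethink API -- the detect, fix, and confirm cycle might be structured like this:

```python
import time

EXPECTED_APN = "internet.carrier.example"  # assumed correct APN for the data plan

def detect_misprovisioned(device: dict) -> bool:
    """Flag a device whose reported APN doesn't match what its plan requires."""
    return device["apn"] != EXPECTED_APN

def request_reprovision(device_id: str, apn: str) -> None:
    """Stand-in for an alert to a device-management system such as Mobilethink.

    A real integration would go over that system's own interface; this
    placeholder just makes the control flow visible.
    """
    print(f"reprovision request: device={device_id} apn={apn}")

def confirm_fix(fetch_device, device_id: str, retries: int = 3) -> bool:
    """Re-read the device settings until the fix shows up (or we give up)."""
    for _ in range(retries):
        if fetch_device(device_id)["apn"] == EXPECTED_APN:
            return True
        time.sleep(1)
    return False

# Tiny in-memory stand-in for the provisioning data source
devices = {"d-42": {"apn": "wap.legacy.example"}}

if detect_misprovisioned(devices["d-42"]):
    request_reprovision("d-42", EXPECTED_APN)
    devices["d-42"]["apn"] = EXPECTED_APN            # simulate the fix landing
    print("fixed:", confirm_fix(lambda i: devices[i], "d-42"))
```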

Gardner: It's clear to me, Tom, how we can get great benefits from doing this properly, and how the value escalates the more data and information you get and the better you can serve those customers. Let's drill down a bit into how you make this happen. As far as data goes, are we talking about 10 different data types, or 50? Given the stream and amount of data that comes off a network, what size of data are we talking about, and how do you get a handle on it?

Russo: In our largest deployment, we're talking about a couple of dozen different data sources and a total volume of data on the order of 50 to 100 billion transactions a day. So, it’s large volume, especially on the transactional side, and high variety. In terms of what we're talking about, it’s a lot of machine data. As I mentioned before, there is the radio performance, core network performance, and service performance type of information.

We also look at things like whether you're provisioned correctly for the services you're trying to use. We look at your trouble-ticket history to try to correlate things like network performance and customer-care activity. We'll look at survey data, net promoter score (NPS) information, billing, churn, and related information.

We're trying to tie it all together, everything from the subscriber transactions and experience to the underlying network performance, again to the outcome type information -- what was the impact of the experience on your behavior?
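
To illustrate how experience metrics can be tied to an outcome such as churn, here is a toy predictive model with made-up features and data; it stands in for the kind of predictive analytics mentioned earlier, not for Spirent's actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: [average QoE score, care tickets in 90 days] -> churned (1) or not (0)
X = np.array([[92, 0], [88, 1], [75, 0], [55, 3], [40, 4], [62, 2], [95, 0], [35, 5]])
y = np.array([0, 0, 0, 1, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated churn risk for two hypothetical subscribers
for features in ([45, 3], [90, 0]):
    risk = model.predict_proba([features])[0][1]
    print(features, f"churn risk ~ {risk:.0%}")
```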

Gardner: What specifically is your history with HPE Vertica? Has this been something that's been in place for some time? Did you switch to it from something else? How did that work out?

Finishing migration

Russo: Right now, we're finishing the migration to HPE Vertica technology, and it will be embedded in our InTouch CNA solution. There are a couple of things we like about Vertica. One is the price-performance aspect. The columnar lookups and the projections give us very strong query-response performance, but it's also able to run on commodity hardware, which gives us a price advantage that's further bolstered by the columnar compression.

So, price-performance-wise and maturity-wise, we like it. It's a field-proven, tested solution. There are some other features we like, such as strong Hadoop integration. A lot of carriers will have their own Hadoop clusters, data oceans, and so on, that they want us to integrate with. Vertica makes that fairly straightforward, and we like a lot of the embedded analytics as well -- the Distributed R capability for predictive analytics and things along those lines.
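
For the curious, the projections Russo mentions are declared in SQL. The statement below is a generic, hypothetical example against an invented table, issued through the same Python client; it is not taken from the InTouch schema.

```python
import vertica_python

# Placeholder connection details; the table and projection names are invented
conn_info = {"host": "vertica.example.com", "port": 5433,
             "user": "dbadmin", "password": "********", "database": "cem"}

ddl = """
    CREATE PROJECTION subscriber_experience_by_sub
    AS SELECT subscriber_id, event_time, cell_id, qoe_score
       FROM subscriber_experience
       ORDER BY subscriber_id, event_time
       SEGMENTED BY HASH(subscriber_id) ALL NODES
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Data is stored sorted and segmented per subscriber, which speeds up
    # the per-subscriber lookups that care agents rely on.
    cur.execute(ddl)
```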

Gardner: It occurs to me that the effort you've put into this at Spirent -- being able to take vast amounts of data across a complex network and then come out with these analytic benefits -- could be extended to any number of environments. Is there a parallel between what you're doing with mobile and telco carriers and networks that are managing Internet of Things (IoT) devices?

Russo: Absolutely. We're working with carriers on IoT already. The requirements these things have, in terms of the performance they need to operate properly, are different from those of human users, but nevertheless, the underlying transactions still have to take place -- the ability to get a radio connection, set up an IP address, and communicate data back and forth in a robust, reliable way is still critical.

We definitely see our solution helping operators who are trying to be IoT platform providers to ensure the performance of those IoT services and the SLAs they have for them. We also see a potential use for our technology going a step further into the vertical IoT applications themselves, doing, for example, predictive analytics on the sensor data itself. That could be a future direction for us.

Gardner: Any words of wisdom for folks who are starting to deal with large data volumes across a wide variety of sources and are also looking for more real-time analytics benefits? Any lessons learned that you could share from where Spirent has been, for others who will face some of these same big-data issues?
Automate Data Collection and Analysis
In Support of Business Objectives
With Spirent InTouch Analytics
Russo: It's important to focus on the end-user value and the use cases, as opposed to the technology. We never really focus on getting data for the sake of getting data; we focus on what problem a customer is trying to solve and how we can most simply and elegantly solve it. That has steered us clear of jumping on the latest and greatest technology bandwagons, and instead toward going with proven technologies and leveraging our subject-matter expertise.

Gardner: I'm afraid we'll have to leave it there. We've been exploring the ways that Spirent Communications advances the use of big data to provide improved user experiences for telecommunications operators' customers. We've identified some of their advanced analytics and how they're drawing on more data sources to provide their telco customers more rapid insights into their networks and operations.

So join me in thanking Tom Russo, Director of Product Management and Marketing at Spirent Communications in Matawan, New Jersey. Thanks so much.

Russo: Thanks very much, Dana. Thanks for having me.

Gardner: And a big thank you to our audience as well for joining us for this big data information innovation case study discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the use of big data to provide improved user experiences for telecommunications operators' customers. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

You may also be interested in: