Tuesday, April 14, 2015

GoodData Analytics Developers Share their Big Data Platform Wish List

Transcript of a BriefingsDirect podcast on how and why cloud data analytics provider GoodData makes HP Vertica an integral part of its infrastructure.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Dana Gardner: Welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.
Our next innovation case study interview highlights how GoodData has created a business intelligence (BI)-as-a-service capability across multiple industries to enable users to take advantage of both big-data performance as well as cloud delivery efficiencies.

To learn more, we're here with a panel of guests. First, Tomas Jirotka, Product Manager at GoodData. Welcome, Tomas.

Tomas Jirotka: Hello. It's great to be here.

Gardner: We are also here with Eamon O'Neill, the Director of Product Management at HP Vertica. Welcome, Eamon.

Eamon O'Neill: Thanks, Dana.

Gardner: And Karel Jakubec, Software Engineer at GoodData. Welcome.

Karel Jakubec: Thanks. It's great to be here.

Gardner: Let's start with you, Tomas. Tell us a bit about GoodData and why you've decided that the cloud model, data warehouses, and BI as a service are the right fit for this marketplace.

Jirotka: GoodData was founded eight years ago, and from the beginning, it's been developed as a cloud company. We provide software as a service (SaaS). We allow our customers to leverage their data and not worry about hardware/software installations and other stuff. We just provide them a great service. Their experience is seamless, and our customers can simply enjoy the product.

Gardner: So can you attach your data warehouse to any type of data or are you focused on a certain kind? How flexible and agile are your services?

Jirotka: We provide a platform -- and the platform is very flexible. So it's possible to bring in any type of data and create insights from it there. You can analyze data coming from marketing, sales, or manufacturing divisions, no matter which industry you're in.

Gardner: If I'm an enterprise and I want to do BI, why should I use your services rather than build my own data center? What's the advantage that leads your customers to make this choice?

Cheaper solution

Jirotka: First of all, our solution is cheaper. We have a multi-tenant environment. So the customers effectively share the resources we provide them. And, of course, we have experience and knowledge of the industry. This is very helpful when you're a beginner in BI.

Gardner: So, in order to make sure that your cloud-based services are as competitive and even much better in terms of speed, agility and cost, you need to have the right platform and the right architecture.

Karel, what have been some of the top requirements you’ve had as you've gone about creating your services in the cloud?

Jakubec: The priority was to be able to scale, as our customers are coming in with bigger and bigger datasets. That's the reason we need technologies like Vertica, which scales very well just by adding nodes to the cluster. Without this ability, you reach the point where you cannot implement a solution for the biggest customers: you're already running the biggest machines on the market, yet you're still not able to finish the computation in a reasonable time.

Gardner: I've seen that you have something on the order of 40,000 customers. Is that correct?

Jirotka: Something like that.

Gardner: Does the size and volume of the data for each of these vary incredibly, or do most of them use datasets of a similar size? How diverse and how varied is the amount of data that you're dealing with, customer by customer?

Jirotka: It really depends. A lot of customers, for example, use Salesforce.com or other cloud services like that. We can say that these data are somewhat standardized. We know the APIs of these services very well, and we can deliver the solution in just a couple of days or weeks.

Some of the customers are more complex. They use a lot of services from the Internet or internally, and we need to analyze all of the sources and combine them. That's really hard work.

Gardner: In addition to scale and efficiency in terms of cost, you need to also be very adept at a variety of different connection capabilities, APIs, different data sets, native data, and that sort of thing.

Jirotka: Exactly. Agility, in this sense, is really crucial.

Gardner: How long have you been using Vertica, and how long have you been using BI through Vertica for a variety of these platform services?

Working with Vertica

Jirotka: We started working with Vertica at the beginning of last year -- so, one and a half years now. We began moving some of our customers with the largest data marts to Vertica in 2013.

Gardner: What were some of the driving requirements for changing from where you were before?

Jirotka: The most important factor was performance. It's no secret that we also have Postgres in our platform, and Postgres simply doesn't scale to big data. So we chose Vertica to have a solution that is scalable up to terabytes of data.

Gardner: We're learning quite a bit more about Vertica and the roadmap. I'd like to check in with Eamon and hear more about some of the newer features. What's creating excitement?

O’Neill: Far and away, the most exciting is real-time personalized analytics. This is going to allow GoodData to show a new kind of BI in the cloud. A new feature in our latest 7.1 release, which we shipped last year, is called Live Aggregate Projections. It's for telling you what's going on in your electric smart meter, the Fitbit you're wearing on your wrist, or even your cell-phone plan or personal finances.

A few years ago, Vertica was blazing fast at telling you what a million people are doing right now and at finding patterns in the data, but it wasn't as fast at telling you about my data, the data of one individual. So we've changed that.

With this new feature, Live Aggregate Projections, you can actually get blazing fast analytics on discrete data. That discrete data is data about one individual or one device. It could be that a cell phone company wants to do analytics on one particular cell phone tower or one meter.

That's very new, and it's going to open up a whole new kind of dashboarding for GoodData in the cloud. People are now going to get sub-second responses to see changes in their power consumption, the longest phone call they made this week, the shortest phone call they made today, or how often they go over their data roaming charges. They'll get real-time alerts about these kinds of things.
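
For readers who want a concrete picture, here is a minimal SQL sketch of a live aggregate projection, using an invented smart-meter table; it illustrates the feature Eamon describes and is not GoodData's or HP's actual schema. Because Vertica maintains the aggregates as data loads, a per-meter lookup reads precomputed values instead of scanning raw rows, which is what makes sub-second responses on discrete data plausible.

```sql
-- Hypothetical smart-meter table; all names are illustrative.
CREATE TABLE readings (
    meter_id INT NOT NULL,
    read_at  TIMESTAMP NOT NULL,
    kwh      FLOAT
);

-- In Vertica 7.1+, a projection defined with GROUP BY is a live aggregate
-- projection: the per-meter totals are maintained at load time.
CREATE PROJECTION readings_agg
AS SELECT meter_id, SUM(kwh) AS total_kwh, COUNT(*) AS num_reads
   FROM readings
   GROUP BY meter_id;
```
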
When that was introduced last year, it was standing room only. The team showed some great stats from power meters in houses in Europe. The readings were fed into Vertica, and queries that took Vertica one and a half seconds last year now take 0.2 seconds. They were looking at 25 million meters in the space of a few minutes. This is going to open up a whole new kind of dashboard for GoodData, and new kinds of customers.

Gardner: Tomas, does this sound like something your customers are interested in, maybe in retail? The Internet of Things is also becoming prominent, with machine-to-machine data interactions. How do you view what we've just heard Eamon describe? How interesting is it?

More important

Jirotka: It sounds really good. Real-time, or near real-time, analytics is becoming a more and more important topic. We hear it from our customers as well. So we should definitely think about this feature and how to integrate it into the platform.

Gardner: Any thoughts, Karel?

Jakubec: Once we introduce Vertica 7.1 to our platform, it will definitely be one of the features we focus on. We have developed quite a complex caching mechanism for intermediate results, and it works like a charm for PostgreSQL, but unfortunately it doesn't perform as well for Vertica. We believe that features like Live Aggregate Projections will improve this performance.

Gardner: So it's interesting. As HP Vertica comes out with new features, that's something you can productize, take out to the market, and then find new needs that you can take back to Vertica. Is there a feedback loop? Do you feel like this is a partnership, where the knowledge you bring from the market helps them technically create new requirements?

Jakubec: Definitely, it's a partnership, and I would say a complete circle. A new feature is released, we provide feedback, and that gives direction to build another feature or improve the current one. It works very similarly with some of our customers.

O’Neill: It happens at a deeper level too. Karel's coworkers flew over from Brno last year to our office in Cambridge, Massachusetts, and hung out for a couple of days, exchanging design ideas. So we learned from them as well.

They had done some things around multi-tenancy where they were ahead of us, and they were able to tell us how Vertica performed when they put extra schemas on a catalog. We learned from that, and we could give them advice about it. Engineer-to-engineer exchanges happen pretty often in the conference rooms.
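
As a rough sketch of the multi-tenant pattern under discussion -- one schema per customer data mart sharing a single catalog -- something like the following applies. The tenant naming, table, and role are invented for illustration and are not GoodData's actual layout.

```sql
-- Hypothetical per-tenant schema on a shared cluster: each new customer
-- gets its own namespace and role rather than new hardware.
CREATE SCHEMA tenant_1001;

CREATE TABLE tenant_1001.fact_sales (
    sale_id INT,
    sold_at TIMESTAMP,
    amount  NUMERIC(10,2)
);

CREATE ROLE tenant_1001_role;
GRANT USAGE ON SCHEMA tenant_1001 TO tenant_1001_role;
GRANT SELECT ON tenant_1001.fact_sales TO tenant_1001_role;
```
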

Gardner: Eamon, were there any other specific features that are popping out in terms of interest?

O’Neill: Definitely our SQL on Hadoop enhancements. For a couple of years now, we've been enabling people to do BI on top of Hadoop. We had various connectors, but we have made it even faster and cheaper now. In this most recent 7.1 release, you can install Vertica directly on your Hadoop cluster. So you no longer have to maintain dedicated hardware for Vertica, and you don't have to make copies of the data.

The message is that you can now analyze your data where it is and as it is, without converting it from the Hadoop format and without duplicating it. That's going to save companies a lot of money. What we've done is bring the most sophisticated SQL on Hadoop to people without duplication of data.
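
To make the "analyze it where it is" idea concrete, here is a minimal sketch of an external table over files sitting in HDFS. The path, columns, and delimiter are invented, and the exact connector syntax varies by Vertica version.

```sql
-- Hypothetical external table over tab-delimited files already in HDFS.
-- The data is parsed in place at query time; no copy is made.
CREATE EXTERNAL TABLE web_events (
    event_ts TIMESTAMP,
    user_id  INT,
    url      VARCHAR(2048)
)
AS COPY FROM 'hdfs:///data/web_events/*' DELIMITER E'\t';

-- Queried like any native table:
SELECT COUNT(*) FROM web_events
WHERE event_ts > NOW() - INTERVAL '1 day';
```
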

Gardner: Tomas, how does Hadoop factor into your future plans?

Using Hadoop

Jirotka: We employ Hadoop in our platform, too. There are some ETL scripts, but we've used it in the traditional form of MapReduce jobs for a long time. This is a really costly and inefficient approach, because it takes a lot of time to develop and debug. So we may think about using Vertica directly with Hadoop. This would dramatically decrease the time to deliver to the customer, and also the running time of the scripts.

Gardner: Eamon, any other issues that come to mind in terms of prominence among developers?

O’Neill: Last year, we had our Customer Advisory Board, where I got to ask them about those things. Security came to the forefront again and again. Our new release has new features around data-access control.

We now make it easy for them to say, for example, that Karel can access all the columns in a table, but I can only access a subset of them. Previously, developers could do this with Vertica, but they had to maintain SQL views, and they didn't like that. Now it's done centrally.
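
Eamon's example maps to Vertica's column access policies, introduced in the 7.1 timeframe. A minimal sketch follows, with an invented table, column, role, and masking rule:

```sql
-- Hypothetical policy: the 'auditor' role sees the full value;
-- everyone else sees a masked form. No SQL views to maintain.
CREATE ACCESS POLICY ON customers FOR COLUMN ssn
CASE WHEN ENABLED_ROLE('auditor') THEN ssn
     ELSE SUBSTR(ssn, 1, 3) || '-XX-XXXX'
END
ENABLE;
```
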

They like the data-access control improvements, and they're saying to just keep it up. They want more encryption at rest, and they want more integration. They particularly stress that they want integration with the security policies in their other applications outside the database. They don't want to have to maintain security in 15 places. They'd like Vertica to help them pull that together.

Gardner: Any thoughts about security, governance and granularity of access control?

Jirotka: As we're a SaaS company, security is number one for us. So far, we have solutions that work for us, but they are quite complex. Maybe we can adopt some of these new features from Vertica instead.

Jakubec: Any simplification of security and access controls is great news. Restricting some users' access to just a subset of values or columns is a very common use case for many customers. We already have a mechanism to do it, but, as Eamon said, it involves maintaining views or complex filtering. If it's supported by Vertica directly, that's great. I didn't know that before, and I hope we can use it.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been hearing how GoodData, a BI service provider based in San Francisco, acts as a litmus test for how a platform should behave in the market, both in terms of performance and economics. They've been telling us their story, as well as their interest in the latest version of HP Vertica.

So a big thank you to our guests: Tomas Jirotka, Product Manager at GoodData; Eamon O’Neill, Director of Product Management at HP Vertica; and Karel Jakubec, Software Engineer at GoodData.
And also a big thank you to our audience for joining this special new style of IT discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks for joining, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how and why cloud data analytics provider GoodData makes HP Vertica an integral part of its infrastructure. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Thursday, April 09, 2015

Source Refrigeration Selects Agile Mobile Platform Approach for its Large In-Field Workforce

Transcript of a BriefingsDirect podcast on how a nationwide company has harnessed the power of mobile applications to increase the productivity of its workforce.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: Kony, Inc.

Dana Gardner: Hello, and welcome to a special BriefingsDirect interview coming to you from the Kony World 2015 Conference in Orlando.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of penetrating discussions on the latest in enterprise mobility. We're here to explore advancements in mobile applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Our next innovator interview focuses on how Source Refrigeration and HVAC has been extending the productivity of its workforce, much of it in the field, through the use of innovative mobile applications and services.

We'll delve into how Source Refrigeration has created a boundaryless enterprise and reaped the rewards of Agile processes and the ability to extend data and intelligence to where it's needed most.

To learn how their successful mobile journey has unfolded, please join me now in welcoming Hal Kolp, Vice President of Information Technology at Source Refrigeration and HVAC in Anaheim, California.

Welcome, Hal.

Hal Kolp: Thank you, Dana.

Gardner: Good to have you with us. It's interesting to me, as I look at different use cases for mobility, how important the advancement is for organizations like yours with a mobile workforce.

It's my understanding that there are something on the order of several hundred field-based service and installation experts serving the needs of 2,500 or more customers nationwide. Tell us a little bit about why mobility is essential for you and how it has created better efficiency and innovation.

Convert to electronic

Kolp: I'd be glad to, Dana. Source started to explore mobility back in 2006. I was tasked with a project to figure out whether it made sense to take our service organization, which was driven by paper, and convert it to electronic service tickets.

After looking at the market itself and at the technology for cellular telephones back in 2006, as well as data plans and what was available, we came to the conclusion that it did make sense. So we started a project to make life easier for our service technicians and our back office billers, so that we would have information in real time and we'd speed up our billing process.

At that time, the goals were pretty simple. They were to eliminate the paper in the field, shorten our billing cycle from 28 days to 3 days, and take all of the material, labor, and asset information and put it into the system as quickly as possible, so we could give our customers better information about the equipment, how they are performing, and total cost of ownership (TCO).

But over time, things change. In our service organization then, we had 275 guys. Today, we have 600. So we've grown substantially, and our data is quite a bit better. We also use mobility on the construction side of our business, where we're installing new refrigeration equipment or HVAC equipment into large supermarket chains around the country.

Our construction managers and foremen live their lives on their tablets. They know the status of their job, they know their cost, they're looking at labor, they're doing safety reports and daily turnover reports. Anyone in our office can see pictures from any job site. They can look at the current status of a job, and this is all done over the cellular network. The business has really evolved.

Gardner: It’s interesting that you had the foresight to get your systems of record into paperless mode and were ready to extend that information to the edge, but then also be able to accept data and information from the edge to then augment and improve on the systems of record. One benefits the other, or there is a symbiosis or virtuous adoption cycle. What have been some of the business benefits of doing it that way?

Kolp: There are simple benefits on the service side. First of all, the billing cycle changed dramatically, and that generated a huge amount of cash. It's a one-time win: whatever you would have billed between day 3 and day 28 all came in at once, and there was this huge influx of cash in the beginning. That actually paid for the entire project. Just the generation of that cash was enough to more than compensate for all the software development and all the devices. So that was a big win.

But then we streamlined billing. Instead of a biller looking at a piece of paper and entering a time ticket, it was done automatically. Instead of looking at a piece of paper and then doing an inventory transfer to put it on a job, that was eliminated. Technicians' comments never used to make it into our system; on paper, we just sent a photocopy of the document to the customer.

Today, within 30 seconds of the person completing a work order, it’s uploaded to the system. It’s been generated into PDF documents where necessary. All the purchase order and work order information has entered into the system automatically, and an acknowledgement of the work order is sent to our customer without any human intervention. It just happens, just part of our daily business.

That's a huge win for the business. It also gives you data for things you can start to measure yourself on. We have a whole series of key performance indicators (KPIs) and dashboards built to help our service managers and regional directors understand what's going on in their business.

Technician efficiency

Do we have customers where we're spending a lot of time in their stores servicing them? That means there's something wrong. Let's see if we can solve our customers' problems. We look at the efficiency of our technicians.

We look at the efficiency of drive times. That electronic data even led us into automatic dispatching systems. We have computers that look at the urgency of the call, the location of the call, and the skills necessary to do that service request. It automatically decides which technician to send and when to send them. It takes a work order and dispatches a specific technician on it.

Gardner: So you've become data-driven, and then far more intelligent, responsive, and agile as a result. Tell me how you've been able to achieve that, but at the same time not get bogged down in an application development cycle that can take a long time, or find yourself in a waterfall-type affair, where the requirements shift rapidly, and by the time you finish a product, it's obsolete.

How have you been able to change your application development for your mobile applications in a way that keeps up with these business requirements?

Kolp: We've worked on three different mobile platforms. The claim in the beginning was that you could develop once and just update and move forward. That didn't really work out so well on the first couple of platforms. The platforms became obsolete, and we essentially had to rewrite the application on a new platform for which the claim, again, was that it would survive.

This last year, we converted to the Kony platform, and all indications so far are that that platform is going to be great for us, because we've done a whole bunch of upgrades in the last 12 months on the platform. We're moving, and our application is migrating very quickly.

So things are very good on that side and in our development process. When we were building our new application initially, we were doing two builds a week. So every couple of days we did a little sprint. We don't really call them sprints, but essentially, it was a sprint to add functionality. We'd go into a quick testing cycle, and while we were testing, we had folks adding new functionality and fixing bugs. Then, we'd do another release.

At the current stage, where we are in production really depends on the needs of the business. Last week we had a new release, and this week we're having another, as we fix small bugs or make enhancements that came up during our initial rollout. It's not that difficult to roll out a new version.

We send an alert. The text says that they have got a new version. They complete the work order that they're on, they perform an update, and they're back in business again. So it's pretty simple.

Field input

Gardner: So it's a very agile, iterative, easily adaptive type of development infrastructure. What about the input from those people in the field? Another aspect of agile development isn't just the processes for the development itself, but being able to get more people involved with deciding features and functions, and not necessarily forcing the developers to read minds.

Has that crept into your process? Are you able to take either a business analyst or practitioner in the field and allow them to have the input that then creates better apps and better processes?

Kolp: In our latest-generation application, we made tremendous changes to the user interface to make it easier for the technicians to do their job and so they don't have to think about anything. If they need to do something, they know what they have to do. It's kind of in their face, in other words. We use color cues on screens. If something is required, it's always red. If an input field is optional, it's in blue. We have those kinds of cues.

We also built a little mini application, a web app, that technicians use for frequently asked questions (FAQs). If they have questions about how the application works, they can look at the FAQs. They can also submit a request for enhancements directly from the page. So we're getting requests from the field.

If they have a question about the application, we can take that question and turn it into a new FAQ entry that people can click on and learn from. We're trying to make the application more driven by the field and less by managers in the back office.

Gardner: Are there any metrics yet that would indicate an improvement in the use of the apps, based on this improved user interface and user experience? Is there any way to say that the better we make it, the more they use it, and the more they use it, the better the business results?

Kolp: We're in the early stages of our rollout. In a couple of weeks, we'll have about 200 of our 600 guys on the new application, and they've noticed a few things. Number one, they believe the application is much more responsive. It's just fast. Our application happens to be on iOS, and things happen quickly because of the processor and memory. So that's really good for them.

The other thing they notice is that if they're looking at assets and need to find something in an asset, look up a part, or do anything else, we've added search capability that makes it brain-dead simple to find what they're looking for. They can use their camera as a barcode scanner within our application. It's easy to attach pictures.

What they find is that we've made it easier for them to add information and document their call. They have a much greater tendency to add information than they did before. For example, if they're in their work order notes, which for us is a summary, they can just talk. We use voice to text, and that will convert it. If they choose to type, they can type, but many of the guys really like the voice to text, because they have big fingers and typing on the screen is a little bit harder for them.

What's of interest?

Gardner: We're here at Kony World, Hal. Did anything jump out at you that's particularly interesting? We've heard about solution ecosystems and vertical industries, the Visualizer update, and some cloud interactions for developers. Did anything really jump out at you that might be of interest for the coming year?

Kolp: I'm very interested in Visualizer 2.0. It appears to be a huge improvement over the original version. We use third-party development; in our case, we used somebody else's front-end design tool for our project. But I really like the ability to take our project and use it with Visualizer 2.0, so that we can develop the screens and the flow we want and hand it off to the developers. They can hook it up to the back end and go.

I just like having the ability to have that control, now that we've done the heavy lifting. For the most part, understanding your data, the data flow, or the flow of the application is where you spend quite a bit of time. For us to be able to do that ourselves is much better than writing on napkins or using PowerPoint or Visio to generate screens or some other application.

It’s nice because ultimately we will be able to go use Visualizer, push it into the application, take the application, push it back into Visualizer, make more changes, and go back and forth. I see that as a huge advantage. That’s one thing I took from the show.

Gardner: With this journey that you've been on since 2006, you've come quite a way. Is there anything you could advise others who are perhaps just beginning to extend their enterprise to the mobile edge, finding ways to engage people in the field so they add information and take more intelligence from the apps back into their work? What might you do with 20-20 hindsight, and how would you relate that to people just starting?

Kolp: There are a couple of things that I’ll point out. There was a large reluctance for people to say that this would actually work. When your business says that you can't mobilize some process, it's probably not true. There's this resistance to change that's natural to everyone.

Our technicians today, who have been on mobile applications, hate to be on paper. They don't want to have anything to do with paper, because it's harder for them. They have more work to do. They have to collect the paper, shove the paper in an envelope, or hand it off to someone to do things. So they don’t like it.

The other thing you should consider is what happens when a device breaks. All devices will break at some point, for some reason. Look at how those devices are going to get replaced. We operate in 20 states. You can't depend upon the home office to be able to rush out a replacement device to your field workers in real time. We looked pretty hard at all kinds of different methods to reduce the downtime for guys in the field.

You should look at that. That’s really important if the device is being used all day, every day for a field worker. That’s their primary communication method.

Simpler is better

The other thing I could say is, "simpler is better." Don't make an application where you have to type in a tremendous amount of data. Make data entry as easy as possible via taps or predefined fields.

Think about your entire process front to back, and don't hesitate to change the way you gather information today as opposed to the way you want to in the future. Don't take a paper form and automate it, because that isn't the way your field worker thinks. You need to generate the new flow of information so that it fits on whatever size screen you're using. It can't be a spreadsheet, and it can't be a bunch of checkboxes, because that doesn't necessarily suit the tool you're using to drive the information gathering.

Spend a lot of time upfront designing screens and figuring out how the process should work. If you do that, you'll meet with very little pushback from the field once they get it and actually use it. I'd also communicate with the field regularly while you're developing and tell them what's going on, so that they aren't blindsided by something new.

I'd work closely with the field in designing the application. I'd also involve anybody who touches the data. In our case, that's service managers. We worked with billers, inventory control, purchasing people, and timecards. All of those were pieces that our application touches. So people from the business were involved, even people from finance, because we're making financial transactions in the enterprise resource planning (ERP) system.

So get all those people involved and make sure that they're in agreement with what you're doing. Make sure that you test thoroughly and that everybody signs off together at the end. The simpler you can make your application, the faster you can roll it out, and then just enhance, enhance, enhance.

Adding new features is easier if you're starting something new. If you're replacing an existing application, it's much harder to do that. You'll have to recreate all of the functionality, because the business typically doesn't want to lose functionality.

Gardner: Well great. Thank you for that. I'm afraid we'll have to leave it there.
We've been learning about how advancements in mobile applications design and deployment technologies are bringing new productivity benefits across the growing spectrum of edge devices and types.

And we've seen how quality, speed, and value are rapidly increasing, thanks to the Kony Mobility Platform for such innovators as Source Refrigeration. So a big thank you to our guest, Hal Kolp, the Vice President of Information Technology at Source Refrigeration and HVAC in Anaheim, California. Thank you, Hal.

Kolp: Thank you, Dana.

Gardner: And a big thank you to our audience for joining us for this special podcast series coming to you from the Kony World 2015 Conference in Orlando.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Kony sponsored BriefingsDirect IT discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Kony, Inc.

Transcript of a BriefingsDirect podcast on how a nationwide company has harnessed the power of mobile applications to increase the productivity of its workforce. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Tuesday, April 07, 2015

ITIL-ITSM Tagteam Boosts Mexican ISP INFOTEC's Service Desk and Monitoring Performance

Transcript of a BriefingsDirect podcast on how an IT provider in Mexico uses ITSM tools to improve service to customers.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences, as well as better business results.

Our next innovation case study interview highlights how INFOTEC in Mexico City improves its service desk and monitoring operations and enjoys impressive results from those efforts.

To learn more, we're joined by Victor Hugo Piña García, the Service Desk and Monitoring Manager at INFOTEC. Welcome.
Victor Hugo Piña García: Hello. Thank you.

Gardner: Tell us about INFOTEC, what your organization is and does.

Piña: INFOTEC is a government research center. We have many activities; the principal ones are teaching, technology innovation, and IT consulting. The goal is to provide IT services. We offer many IT services, like data centers, telecommunications, service desk, monitoring, and manpower.

Gardner: This is across Mexico, the entire country?

Piña: Yes, it covers all the national territory. We have two locations. The principal one is in Mexico City, at San Fernando, and Aguascalientes is the other city from which we offer the services.

Gardner: Explain your role as the Service Desk and Monitoring Manager. What are you responsible for?

Three areas

Piña: My responsibility covers three areas. The first is monitoring: reviewing all of the services and IT components for the clients.

The second is the service desk: management of incidents and problems. The third is generating the deliverables for all of INFOTEC's services. We produce deliverables for the IT service managers and for service delivery.

Gardner: So it's important for organizations to know their internal operations, all the devices, and all the assets and resources in order to create these libraries. One of the great paybacks is that you can reduce time to resolution, and you can monitor and provide much greater support.

Give us a sense of what was going on before you got involved with ITIL and IT service management (ITSM), so that we can then better understand what you got as a benefit from it. What was it like before you were able to improve on your systems and operations?

Piña: We support the services with HP tools and products. We have many types of assets for adaptation and for solutions. Then we created a better process. We aligned the process with the HP tools and products. Within two years, we began to see benefits in serving our customers.

We attained a better service level in two ways. First is the technical reporting of failures. And second, the moment a failure is reported, we send specialists to attend to it. That considerably reduces the time to repair. As a consequence, users have a better level of service. Our values changed in the delivery of the service.

Gardner: I see that you've had cost reductions of up to one-third in some areas and a 40 percent reduction in time to compliance, with service desk requests going from seven or eight minutes down to five minutes. That's a big deal, along with an incident reduction of more than 20 percent. How is this possible? How were these benefits generated? Is it the technology, the people, the process, all of the above?
Piña: Yes, we consider four things. The first is the people, with their service mindset. The second is the process, approached with an innovative mindset. The third is the technology, fully enabled to align with the previous two points. And the fourth is consistent, integrated work across the above three points.

Gardner: It sounds to me as if together these can add up to quite a bit of cost savings, a significant reduction in the total cost of operations.

Piña: Yes, that’s correct.

Gardner: Is there anything in particular that you're interested in and looking for next from HP? How could they help you do even more?

New concept and model

Piña: I've discovered many things. First, we need to understand them better and think about how we can use them to generate a new concept, a new model, and a new process to operate and offer services.

There have been so many ideas. We need to process and understand them, and we need the support of HP Mexico to know how to deal with these new things.

Gardner: Are there any particular products that you might be moving to, now that you've been able to attain a level of success? What might come next: more ITIL, more configuration management, automation, business service management? Do you have any thoughts about your next steps?

Piña: Yes. We use the ITIL methodology to make changes. When we present a new idea, we look for the impact -- economic, social, and political -- when the committee meets to decide.

If it's a good idea and has a good impact, if it's possible and proven, then right there we make it the new model of business for delivering our new service. We're thinking about the cloud, about big data, and about security. I don't want to promise anything.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been learning how INFOTEC in Mexico City has been improving its service desk and monitoring, with some impressive reductions in costs, a shorter time to compliance with service requests, and an overall incident reduction of more than 20 percent.

I'd like to thank our guest. We've been joined by Victor Hugo Piña García, the Service Desk and Monitoring Manager at INFOTEC. Thank you so much.
Piña: Thank you very much.

Gardner: And thank you to our audience for joining us for this special new style of IT discussion.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how an IT provider in Mexico uses HP ITSM tools to improve service to customers. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.


Tuesday, March 31, 2015

Novel Consumer Retail Behavior Analysis From InfoScout Relies on Big Data Chops from HP Vertica

Transcript of a BriefingsDirect discussion on how a consumer research and data analysis firm gleans rich marketing data from customers' shared sales receipts.

Listen to the podcast. Find it on iTunes. Download the transcript. Get the mobile app for iOS or Android. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people's lives.

Our next big data innovation case study interview highlights how InfoScout in San Francisco gleans new levels of accurate insights into retail buyer behavior by collecting data directly from consumers’ sales receipts.

In order to better analyze actual retail behaviors and patterns, InfoScout provides incentives for buyers to share their receipts, but InfoScout is then faced with the daunting task of managing and cleansing that essential data to provide actionable and understandable insights.

To learn more about how big -- and even messy -- data can be harnessed for near real-time business analysis benefits, please join me in welcoming our first guest, Tibor Mozes, Senior Vice President of Data Engineering at InfoScout. Welcome, Tibor.
Tibor Mozes: Good morning. Thanks for having us.

Gardner: I'm glad you're with us. We're also joined today by Jared Schrieber, the Co-founder and CEO at InfoScout, based in San Francisco. Welcome, Jared.

Jared Schrieber: Glad to be here.

Gardner: Jared, let's start with you. We don't often get the option of choosing how the best data comes to us. In your business, you've been able to uniquely capture strong data, but you need to process it heavily to use it, and you also need a lot of that data in order to get good trend analysis. So the payback is that you get far better information on essential buyer behaviors, but you need a lot of technology to accomplish that.

Tell us why you wanted to get to this specific kind of data and then your novel way of acquiring it, please.

Consumer panels

Schrieber: A quick history lesson is in order. In the market research industry, consumer purchase panels have been around for about 50 years. They started with diaries in people's homes, where people had to write down every single product they bought, day in and day out, in a paper diary and mail it in once a month.

About 20 years ago, with the advent of modems in people’s homes, leading research firms like Nielsen would send a custom barcode scanner into people’s homes and ask them to scan each product they bought and then thumb into the custom scanner the regular price, the sales price, any coupons or deals that they got, and details about the overall shopping trip, and then transfer that electronically. That approach has not changed in the last 20 years.

With the advent of smartphones and mobile apps, we saw a totally new way to capture this information from consumers that would revolutionize how and why somebody would be willing to share their purchase information with a market research company.

Gardner: Interesting. What is it about mobile that is so different from the past, and why does that provide more quality data for your purposes?

Schrieber: There are two reasons in particular. The first is, instead of having consumers scan the barcode of each and every item they purchase and thumb in the pricing details, we're able to simply have them snap a picture of their shopping receipt. So instead of spending 20 minutes after a grocery shopping trip scanning every item and thumbing in the details, it now takes 15 seconds to simply open the app, snap a picture of the shopping receipt, and be done.

The second reason is why somebody would be willing to participate. Using smartphone apps we can create different experiences for different kinds of people with different reward structures that will incentivize them to do this activity.

For example, our Shoparoo app is a next-generation school fundraiser akin to Box Tops for Education. It allows people to shop anywhere, buy anything, take a picture of their receipt, and then we make an instant donation to their kid’s school every time.

Another app is more of a Tamagotchi-style game called Receipt Hog. If you download the app, you adopt a virtual runt. You feed it pictures of your receipts, and it levels up into a fat and happy hog, earning coins in a piggy bank along the way that you can cash out at the end of the day.

These kinds of experiences are a lot more intrinsically and extrinsically rewarding to the panelists and have allowed us to grow a panel that’s many times larger than the next largest panel ever seen in the world, tracking consumer purchases on a day-in day-out basis.

Gardner: What is it that you can get from these new input approaches and incentivization through an app interface? Can you provide some measure of improved or increased participation rates? How has this worked out?

Leaps and bounds

Schrieber: It's been phenomenal. In fact, our panel is still growing by leaps and bounds. We now have 200,000 people sharing with us their purchases on a day-in day-out basis. We capture 150,000 shopping trips a day. The next largest panel in America captures just 10,000 shopping trips a day.

In addition to the shopping trip data, we're capturing geolocation information, Facebook likes and interests from these people, demographic information, and more and more data associated with their mobile device and the email accounts that are connected to it.

Gardner: So yet another unanticipated consequence of the mobility trend that’s so important today.

Tibor, let's go to you. The good news is that Jared has acquired this trove of information for you. The bad news is that now you have to make sense of it. It's coming in in some interesting ways -- almost as a picture or an image in some cases -- and at great volume. So you have velocity, variability, and volume. What does that mean for you as the Senior Vice President of Data Engineering?

Mozes: Obviously, this is a growing panel. It's creating a growing volume of data, which has created a massive data pipeline challenge for us over the years, and we had to engineer the pipeline so that it is capable of processing this incoming data as quickly as possible.

As you can imagine, our data pipeline has gone through an evolution. We started out with a simple solution at the beginning with MySQL, and then we evolved it using Elastic MapReduce and Hive.

But we felt that we wanted to create a data pipeline that’s much faster, so we can bring data to our customers much faster. That’s how we arrived at Vertica. We looked at different solutions and found Vertica a very suitable product for us, and that’s what we're using today.

Gardner: Walk me through the process, Tibor. How does this information come in, how do you gather it, and where does the data go? I understand you're using the HP Vertica platform as a cloud solution in the Amazon Web Services Cloud. Walk me through the process for the data lifecycle, if you will.

Mozes: We use AWS for all of our production infrastructure. Our users, as Jared mentioned, typically download one of our several apps, and after they complete a receipt scan from their grocery purchases, that receipt is immediately uploaded to our back-end infrastructure.

We try to OCR the image of the receipt, and if that fails, we use Amazon Mechanical Turk to make sense of the image and turn it into text. At the end of the day, when an image has been processed, we have a fairly clean version of that receipt in text format.

Next phase

In the next phase, we process the text, attribute the various items on the receipt, and make the data available in our Vertica data warehouse.

Then, our customers, using a business intelligence (BI) platform that we built especially for them, can analyze the data. The BI platform connects to Vertica, so our customers can analyze various metrics of our users and their shopping behavior.

Gardner: Jared, back to you. There's an awful lot of information on a receipt. It's bound to be very complex, given not just the date, the place, and the type of retail organization, but all the different SKUs, every item that could possibly be bought. How do you attack that sort of data problem in terms of schema, cleansing, and extract, transform, load (ETL), and then make it useful?

Schrieber: It's actually a huge challenge for us. It's quite complex, because every retailer's receipt is different: the way they structure the receipt, the level of specificity about the items on it, and the existence of product codes -- whether they're public product codes like the kind you see on a barcode for a soda product, versus an internal product code that a retailer uses as a stock-keeping unit, versus just a short description on the receipt.

One of our challenges as a company is to figure out the algorithmic methods that allow us to identify what each one of those codes and short descriptions actually represents in terms of a real-world product or category, so that we can make sense of that data on behalf of our clients. That's one of the real challenges associated with taking this receipt-based approach and turning it into useful data for our clients on a daily basis.

Gardner: I imagine this would be of interest for a lot of different types of information and data gathering. Not only are pure data and text formats being brought into the mix, as has been the case for many years, but also this image-based, unstructured approach.

Any lessons learned here in the retail space that you think will extend to other industries? Are we going to be seeing more and more of this image-based approach to gathering data for analysis?

Schrieber: We certainly are. As an example, just take Google Maps and Google Street View, where they're driving around in cars, capturing images of house and building numbers, and then associating that to the actual map data. That’s a very simple example.

A lot of the techniques we're trying to apply to make sense of the short descriptions for products on receipts are akin to those used to understand and perform social-media analytics. When somebody makes a tweet, you try to figure out what that tweet is actually about and means, with its abbreviated words and shortened character sets. Very similar types of natural language processing and regular-expression algorithms help us understand what these short product descriptions on a receipt actually mean.
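
As a toy illustration of that idea in SQL, Vertica's regular-expression functions can map terse receipt lines to categories. The table, patterns, and categories here are invented; InfoScout's real matching logic is certainly far more elaborate.

```sql
-- Hypothetical classification of short receipt descriptions.
SELECT receipt_id,
       item_text,
       CASE
           WHEN REGEXP_LIKE(item_text, 'CAT\s*(FD|FOOD)', 'i') THEN 'Cat Food'
           WHEN REGEXP_LIKE(item_text, '(SODA|COLA)', 'i')     THEN 'Soft Drinks'
           ELSE 'Unclassified'
       END AS category
FROM receipt_items;
```
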

Gardner: So we've had some very substantial data-complexity hurdles to overcome. Now we also have the basic blocking and tackling of data transport, warehousing, and the processing platform.

Going back to you, Tibor: once you've applied your algorithms, sliced and diced this information, and made it into something you can apply to a typical data warehouse and BI environment, how did you overcome these issues of volume and complexity, especially now that we're dealing with a cloud infrastructure?

Compression algorithms

Mozes: One of the benefits of Vertica, as we went through the discovery process, was the compression algorithms that Vertica uses. Since we have a large volume of data to deal with and build analytics from, it has turned out to be beneficial for us that Vertica is capable of compressing data extremely well. As a result, some of the core queries behind our BI solution can be optimized to run super fast.
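
For context, Vertica lets you choose a compression encoding per column. A small sketch with invented names follows: RLE suits low-cardinality columns such as a retailer name, while delta encoding suits dense ascending values such as IDs. This is illustrative, not InfoScout's actual schema.

```sql
-- Hypothetical table with per-column encodings.
CREATE TABLE trips (
    trip_id  INT         ENCODING DELTAVAL,
    retailer VARCHAR(64) ENCODING RLE,
    trip_ts  TIMESTAMP,
    total    NUMERIC(10,2)
);
```
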

You also talked about the cloud solution -- why we went into the cloud and what the benefit of doing that is. We really like running our entire data pipeline in AWS because it's super easy to scale it up and down.

It's easy for us to build a new Vertica cluster if we need to evaluate something that's not in production yet, and if the idea doesn't work, we can just tear it down. We can scale Vertica up in the cloud if we need to, without having to deal with any sort of contractual issues.
Schrieber: To put this in context, we're now capturing three times as much data every day as we were six months ago. The queries we're running against it have probably gone up 50X to 100X in that time period as well. So when we talk about needing to scale up quickly, that's a prime example of why.

Gardner: What has happened in just the last six months that's required that ramp-up? Is it just the popularity of your model -- the impact and effectiveness of the mobile-app acquisition model -- or is something else at work here?

Schrieber: It’s twofold. Our mobile apps have gotten more and more popular and we've had more and more consumers adopt them as a way to raise money for their kid’s school or earn money for themselves in a gamified way by submitting pictures of their receipts. So that’s driven massive growth in terms of the data we capture.

Also, our client base has more than tripled in that time period as well. These additional clients have greater demands of how to use and leverage this data. As those increase, our efforts to answer their business questions multiplies the number of queries that we are running against this data.

Gardner: That, to me, is a real proof point of this whole architectural approach. You've been able to grow by a factor of three in your client base in six months, but you haven’t gone back to them and said, "You'll have to wait for six months while we put in a warehouse, test it, and debug it." You've been able to just take that volume and ramp up. That’s very impressive.

Schrieber: I was just going to say, this is a core differentiator for us in the marketplace. The market research industry has to keep up with the pace of marketing, and that pace of marketing has shifted from months of lead time for TV and print advertising down to literally hours of lead time to be able to make a change to a digital advertising campaign, a social media campaign, or a search engine campaign.

So the pace of marketing has changed and the pace of market research has to keep up. Clients aren’t willing to wait for weeks, or even a week, for a data update anymore. They want to know today what happened yesterday in order to make changes on-the-fly.

Reports and visualization

Gardner: We've spoken about your novel approach to acquiring this data. We've talked about the importance of having the right platform and the right cloud architecture to both handle the volume as well as scale to a dynamic rapidly growing marketplace.

Let’s talk now about what you're able to do for your clients in terms of reports, visualization, frequency, and customization. What can you now do with this cloud-based Vertica engine and this incredibly valuable retail data in a near real-time environment for your clients?

Schrieber: A few things on the client side. Traditional market research providers of panel data have to put very tight guardrails on how clients can access and run reports against the data. These queries are very complex. The numerators and denominators for every single record of the reports are different and can be changed on-the-fly.

If, all of a sudden, I want to look at anyone who shopped at Walmart in the last 12 months, who bought cat food in the last month and did so at a store other than Walmart, and I want to see their purchase behavior and how they shop across multiple retailers and categories, on-the-fly, that gets really complex. Traditional data warehousing and BI technologies don't allow general business-analyst users to run those kinds of queries and reports on demand, yet that's exactly what they want.

They want to be able to ask those business questions and get answers. That’s been key to our strategy, which is to allow them to do so themselves, as opposed to coming back to them and saying, "That’s going to be a pretty big project. It will require a few of our engineers. We'll come back to you in a few weeks and see what we can do." Instead, we can hand them the tools directly in a guided workflow to allow them to do that literally on-the-fly and have answers in minutes versus weeks.

Gardner: Tibor, how does that translate into the platform underneath? If you're allowing for a business analyst type of skill set to come in and apply their tools, rather than deep SQL queries or other more complex querying tools, what is it that you need from your platform in order to accommodate that type of report, that type of visualization, and the ability to bring a larger set of individuals into this analysis capability?

Mozes: Our BI platform generates very complex SQL queries. Under the hood, it uses a query engine that runs those queries against Vertica. Because, as Jared mentioned, the questions are so complex, some of the queries we run against Vertica are very different from typical BI use cases. They're very specialized and very specific.

One of the reasons we went with Vertica is its ability to compute very complex queries at a very high speed. We look at Vertica not as simply another SQL database that scales very well and that’s very fast, but we also look at it as a compute engine.

So as part of our query engine, we are running certain queries and certain data transformations that would be very complicated to run outside Vertica.

We take advantage of the fact that you can create and run custom user-defined functions (UDFs) that aren't part of ANSI SQL-99. We also take advantage of some of the special functions built into Vertica that allow data to be sessionized very easily.

Analyzing behavior

Jared can talk about some of the use cases where we analyze a user's entire shopping trip. To do that, we have to stitch together the different points in time at which the user shopped at various locations. Using some of the built-in Vertica functions that aren't standard SQL, we can look at shopping journeys -- we call them trip circuits -- and analyze user behavior along the trip.
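
Vertica's event-based window functions are built for exactly this kind of sessionization. As a rough sketch -- the table and column names are invented for illustration, and the four-hour gap is an arbitrary assumption -- a trip circuit could be derived by starting a new trip whenever too much time passes between a shopper's consecutive receipts:

    -- Assign a trip-circuit ID per user: a new circuit begins whenever
    -- more than 4 hours pass between consecutive receipts.
    -- CONDITIONAL_TRUE_EVENT is a Vertica event-based analytic function.
    SELECT user_id,
           retailer,
           purchase_ts,
           CONDITIONAL_TRUE_EVENT(
               purchase_ts - LAG(purchase_ts) > INTERVAL '4 hours')
           OVER (PARTITION BY user_id ORDER BY purchase_ts) AS trip_circuit_id
    FROM receipts;

Expressing the same grouping in standard SQL would take self-joins or multiple window passes; the event-based form keeps it to a single ordered scan.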

Gardner: Tibor, what other ways can you be using and exploiting the Vertica capabilities in the deliverables for your clients?

Mozes: Another reason we decided to go with Vertica is its ability to optimize very complex queries. As I mentioned, our BI platform is using a query engine under the hood. So if a user asks a very complicated business question, our BI platform turns that question into a very complicated query.

One of the big benefits of using Vertica is being able to optimize these queries on the fly. It's easy to run the database optimizer to build custom projections, which makes queries run much faster than we could before.
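
For context, Vertica physically stores tables as sorted, compressed projections, and projections can be tailored to a workload (Vertica's Database Designer can generate them from sample queries). A hand-written example -- all names illustrative -- might look like:

    -- A query-specific projection, pre-sorted for per-retailer category
    -- rollups and segmented across all nodes. Names are illustrative.
    CREATE PROJECTION receipts_by_retailer
    AS SELECT retailer, category, purchase_ts, user_id, amount
       FROM receipts
       ORDER BY retailer, category, purchase_ts
       SEGMENTED BY HASH(user_id) ALL NODES;

Queries whose filters and grouping match a projection's sort order read far less data, which is where the on-the-fly speed-up comes from.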

Gardner: I always find it more impactful to learn through an example rather than just hear you describe this. Do you have any specific InfoScout retail client use cases where you can describe how they've leveraged your solution and how some of these technical and feature attributes have benefited them -- an example of someone using InfoScout and what it's done for them?

Schrieber: We worked with a major retailer this holiday season to track in real time what was happening for them on Thanksgiving Day and Black Friday. They wanted to understand how their core shoppers, their less loyal shoppers, and non-core shoppers were shopping across retailers on those two days, so that the retailer could respond in near real time to the dynamics happening in the marketplace.

Look at what it takes to do that: we have to get those receipts, process them, get them transcribed, get that data in, run the algorithms to map it to brands and categories, and then calculate all kinds of metrics. The simplest ones are market share; the most complex have to do with what Tibor mentioned: the shopper journey, or the trip circuit.

We tried to understand, when this retailer was the shopper's first stop, what were they most likely to buy at that retailer, how much were they likely to spend, and how is that different than what they ended up buying and spending at other retailers that followed? How does that contrast to situations where that retailer was the second stop or the last stop of the day in that pivotal shopping day that is Black Friday?

For them to be able to understand where they were winning and losing, among which kinds of shoppers, looking for which kinds of products and deals, was an immense advantage -- the likes of which they had never had before.

Decision point

Gardner: This must be a very sizable decision point for them, right? This is going to help them decide where to build new retail outlets, for example, or how to structure the experience of the consumer walking through that particular brick-and-mortar environment.

When we bring this sort of analysis to bear, this isn’t refining at a modest level. This could be a major benefit to them in terms of how they strategize and grow. This could be something that really deeply impacts their bottom line. Is that not the case?

Schrieber: It has implications as to what kinds of categories they feature in their television, display advertising campaigns, and their circulars. It can influence how much space they give in their store to each one of the departments. It has enormous strategic implications, not just tactical day-to-day pricing decisions.

Gardner: Now, that was a retail example. I understand you also have clients that are interested in seeing how a brand works across a variety of outlets or channels. Is there another example you can provide of somebody looking to understand a brand's impact at a wider level -- across a geography, for example?

Schrieber: I'll give you another example that relates to this. A retailer and a brand were working together to understand why the brand's sales were down at this particular retailer during the summer. To make it clear, this is a brand of ice cream. Ice cream sales should go up during the warmer months, and the retailer couldn't understand why its sales of this brand were underperforming during the summer.

To figure this out, we had to piece together the shopper journey over time -- not only in the weeks during the summer months, but year round -- to understand how they were shopping. What we helped the client quickly discover was that during the summer months people eat more ice cream. If they eat more ice cream, they want larger pack sizes when they go to buy it. This particular retailer tended to carry smaller pack sizes.

So when the summer months came around, even though people had been buying their ice cream at this retailer in the winter and spring, they now wanted larger pack sizes, were finding them at other retailers, and switched their spend over to those retailers.

So for the brand, the opportunity was a selling story to the retailer to give the brand more freezer space and to carry an additional assortment of products to help drive greater sales for that brand, but also to help the retailer grow their ice cream category sales as well.

Idea of architecture

Gardner: So just that insight could really help them figure that out. They probably wouldn’t have been able to do it any other way.

We've seen some examples of how impactful this can be and how much a business can benefit from it. But let’s go back to the idea of the architecture. For me, one of my favorite truths in IT is that architecture is destiny. That seems to be the case with you, using the combination of AWS and HP Vertica.

It seems to me that you don't have to suffer the cost of a large capital outlay for your own data center and facilities. You're able to acquire these very advanced capabilities at a price point that's significantly lower, and one that's predictable and adjustable to demand.

Is that something you then can pass along? Tell me a little bit about the economics of how this architectural approach works for you?

Mozes: One of the benefits of using AWS is that it's very easy for us to adjust our infrastructure on demand, as we see fit. Jared referred to some examples of this earlier. We did a major analysis for a large retailer on Black Friday, and we had special promotions running for our mobile app users at that point. Our data volume grew tremendously from one day to the next, and then, once the promotion and the big shopping season were over, our volume came back down.

When you run an infrastructure in the cloud in combination with online data storage and data engine, it's very easy to scale it up and down. It’s very cost efficient to run an operation where you can just add additional computing power as you need, and then when you don’t need that anymore, you can scale it down.

We did this during a period when we had to bring a lot of fresh data online quickly. We could just add more nodes, and we saw very close to linear scalability as we increased our cluster size.
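
Adding Vertica nodes is handled through the administration tools rather than SQL, but the cluster's state is visible through standard system tables. A quick sanity check after scaling out might be:

    -- Confirm that all nodes are up after expanding the cluster.
    -- v_catalog.nodes is a standard Vertica system table.
    SELECT node_name, node_state
    FROM v_catalog.nodes
    ORDER BY node_name;

Near-linear scaling also depends on the data being rebalanced across the enlarged cluster, which Vertica supports natively.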

Schrieber: On the business side, the other advantage is we can manage our cash flows quite nicely. If you think about running a startup, cash is king, and not having to do large capital outlays in advance, but being able to adjust up and down with the fluctuations in our businesses, is also valuable.

Gardner: We're getting close to the end of our time. I wonder if you have any other insights into the business benefits, from an analytics perspective, of doing it this way -- that is, incentivizing consumers, getting better data, being able to move that data and analyze it on on-demand infrastructure, and then delivering queries in whole new ways to a wider audience within your client base.

I guess I'm looking for how this stands up both to the competitive landscape and to the past. How new and how innovative is this in marketing? Then we'll talk about where we go next. Let's try to get a level set as to how new and refreshing this is, given what the technology enables on the cloud basis, the mobility basis, and then the core underlying analytics platform.

Product launch

Schrieber: We have an example that's going on right now around a major new product launch for a very large consumer goods company. They chose us to help monitor this launch because they were tired of waiting six months for any insight into who was buying it, how they were discovering it, how they came to choose it over the competition, what their experience was with the product, and what it meant for their business.

So they chose to work with us for this major new brand launch because we could offer them visibility within days or weeks of launching that new product in the market -- to help them understand who was buying. Was it the target audience they thought it would be, or a different demographic or lifestyle profile than they were expecting? If the latter, they might need to change their positioning, marketing tactics, and targeting accordingly.

How are these people discovering the products? We're able to trigger surveys to them in the moment, right after they've made that purchase, and then flow that data back through to our clients to help them understand how these people are discovering it. Was it a TV advertisement? Was it discovered on the shelf or display in the store? Did a friend tell them about it? Was their social media marketing campaign working?

We're also able to figure out what these people were buying before. Were they new to this category of product, just giving it a try? Were they buying a different brand and have now switched over from that competitor? And if so, how do they like it by comparison, and will they repeat the purchase? Is this brand going to be successful? Is it meeting needs?

These are enormous decisions. Major consumer goods companies often spend hundreds of millions of dollars on new brand launches, so it's critical to get this quick feedback on what's working and what's not, who to target with what kind of messaging, and what the launch is doing in the marketplace in terms of stealing share from competitors.

Whether the launch is driving new people into the product category can influence major investment decisions: Do we need to build a new manufacturing facility? Do we need to change our marketing campaigns? Should we go ahead and invest in that Super Bowl TV ad, because this really has a chance to go big?

These are massive decisions that these companies can now make in a timely manner, based on this new approach of capturing and making use of the data, instead of waiting six months on a new product launch. They're now waiting just weeks and are able to make the same kinds of decisions as a result.

Gardner: So, in a word it’s unprecedented. You really just haven’t been able to do this before.

Schrieber: It’s not been possible before at all, and I think that’s really what’s fueling the growth in our business.

Look to the future

Gardner: Let’s look to the future quickly. We hear a lot about the Internet of Things. We know that mobile is only partway through its evolution. We're going to see more smartphones in more hands doing more types of transactions around the globe. People will be using their phones for more of what we've thought of as traditional commerce. That opens up a lot more information being generated, and therefore a need to gather and then analyze it.

So where do we go next? How does this generate additional novel capabilities, and then where do we go perhaps in terms of verticals? We haven’t even talked about food or groceries, hospitality, or even health care.

So without going too far -- this could be another hour conversation in itself -- maybe we could just tease the listener and the reader with where the potential for this going forward is.

Schrieber: If you think about the Internet of Things as it relates to our business, there are a couple of exciting developments. One is the use of things like beacons inside stores. Now we can know exactly which aisle people have walked down, which shelf they've stood in front of, and which product they've interacted with. That beacon communicates with their smartphone, and that smartphone is tied to a user account with us, so that we can survey these individuals -- trigger surveys to them in the moment -- as they shop.

That’s not something that’s been doable before. It’s something that the Internet of Things, and very specifically beacons linking with smartphones, will allow us to do going forward. That will open up entirely new fields of research and consumer understanding about how people shop and make decisions at the shelf.

The same is true inside the home. We talk about the Internet of Things as it relates to smart refrigerators, smart laundry machines, and so on. Understanding daily lifestyle activities -- how people choose which products to use and how they use them inside their home -- is a field of research that's under-served today and that the Internet of Things is really going to open up in the years to come.

Gardner: Just quickly, what are other retail sectors or vertical industries where this would make a great deal of sense?

Schrieber: I have a friend who runs an amazing business called Wavemark, which is basically an Internet of Things for medical devices and medical consumables inside of hospitals and care facilities, with the ability to track inventory in real time, tying it to patients and procedures, tying it back to billing and consumption.

Making all of that data available to the medical device manufacturers, so that they can understand how and when their products are being used in the real world in practice, is revolutionizing that industry. We're seeing it in healthcare, and I think we're going to see it across every industry.

Engineering perspective

Gardner: Last word to you, Tibor. Given what Jared just told us about the broader applicability, the architecture comes back to mind for me: the cloud, the mobile device, the data, the engine -- the ability to deal with that velocity, volume, and variability at a cost point that's doable and that scales up and down. Any thoughts on this from an engineering perspective, and on where we go next?

Mozes: We see that with all these opportunities bubbling up, the amount of data that we have to process on a daily basis is just going to continually grow at an exponential rate. We continue to get additional information on shopping behavior and more data from external data sources. Our data is just going to grow. We will need to engineer everything to be as scalable as possible.

Gardner: Very good. I'm afraid we will have to leave it there. We've been learning about how InfoScout in San Francisco gleans new levels of accurate insights into consumer behavior by collecting data directly from sales receipts.

In order to better analyze that data and use it, we have seen how they have used an architecture based on the AWS public cloud, the infrastructure as a service and data as a service capability, but built on HP Vertica as the engine for analytics and for delivery of the analysis.

InfoScout is faced with the daunting task of managing and cleansing this data and they've been able to scale very impressively over the past six months using Vertica in the cloud.
Become a member of myVertica
Register now
Gain access to the HP Vertica Community Edition
To learn more, we've been here with our two guests, and I’d really like to thank them. Tibor Mozes, Senior Vice President of Data Engineering at InfoScout. Thank you so much, Tibor.

Mozes: Thank you.

Gardner: And also Jared Schrieber, Co-founder and CEO at InfoScout. Thank you so much, Jared.

Schrieber: Pleasure, Dana. Thank you.

Gardner: And a big thank you as well to our audience for joining us for this special new style of big data discussion.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining, and don’t forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Get the mobile app for iOS or Android. Sponsor: HP.


Transcript of a BriefingsDirect discussion on how a consumer research and data analysis firm gleans rich marketing data from customers' shared sales receipts. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.
