Friday, December 06, 2013

As Big Data Pushes Enterprises into Seeking More Data Types, Standard and Automated Integrations Far Outweigh Coded Connections

Transcript of a BriefingsDirect podcast on how creating big-data capabilities has become a top business imperative in dealing with a flood of data from disparate sources.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the top new business imperatives: Creating big-data capabilities and becoming a data-driven organization.

We’ll examine how business-intelligence (BI) trends are requiring access and automation across data flows from a variety of sources, formats, and from many business applications.

Our discussion focuses on ways that enterprises are effectively harvesting data in all its forms, and creating integration that fosters better use of data throughout the business process lifecycle.

Here now to share their insights into using data strategically by exploiting all of the data from all of the applications across business ecosystems, we’re joined by Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice, based in Austin, Texas. Welcome, Jon.

Jon Petrucelli: Thanks, Dana.

Gardner: We’re also here with Rick Percuoco, Senior Vice President of Research and Development at Trillium Software in Bedford, Mass. Welcome, Rick.

Rick Percuoco: Hi, Dana. Thank you.

Gardner: And we're also joined by Betsy Bilhorn, Vice President of Product Management at Scribe Software in Manchester, NH. Welcome, Betsy. [Disclosure: Scribe Software is a sponsor of BriefingsDirect podcasts.]

Betsy Bilhorn: Thank you, Dana.

Gardner: Betsy, let me start with you. We know that more businesses are trying to leverage and exploit their data, helping them to become more agile, predictive, and efficient. What's been holding them back from gaining access to the most relevant data? What's the roadblock here?

Bilhorn: There are a couple of things. One is the explosion in the different types and kinds of data. Then, you start mixing that with legacy systems, which have always been somewhat difficult to get to. Those are the two biggest ones, bringing it all together and making sense of it, and they have been around for a long, long time.

That problem is getting exponentially harder, given the variety of those data sources, and then all the different ways to get into those. It’s just trying to put all that together. It just gets worse and worse. When most people look at it today, it almost seems somewhat insurmountable. Where do you even start?

Gardner: Jon, how about your customers, at Hitachi? What are you seeing in terms of the struggle that they're facing in getting better data for better intelligence and analytics?

Legacy systems

Petrucelli: We work with a lot of large enterprise, global-type customers. To build on what Betsy said, they have a lot of legacy systems. There's a lot of data captured inside those legacy systems, and those systems were not designed with an open architecture for sharing their data with other systems.

When you’re dealing with modern systems, it's definitely getting easier. When you deal with middleware software like Scribe, especially with Scribe Online, it gets much easier. But the biggest thing that we encounter in the field with these larger companies is just a lack of understanding of the modern middleware and integration and lack of understanding of what the business needs. Does it really need real-time integration?

Some of our customers definitely have a good understanding of what the business wants and what their customers want, but usually the evaluator, decision-maker, or architect doesn’t have a strong background in data integration.

It's really a people issue. It's an educational issue of helping them understand that this isn't as hard as they think it is. Let's scope it down. Let's understand what the business really needs. Usually, that becomes something a lot more realistic, pragmatic, and easier to do than they originally anticipated going into the project.

In the last 5 to 10 years, we've seen data integration get much easier to do, and a lot of people just don’t understand that yet. There's a lack of understanding and a lack of education around data integration and how to exploit this big-data proliferation that’s happening. A lot of users don't quite understand how to do that. It’s the people side of it, and that’s the biggest challenge for us.

Gardner: Rick Percuoco at Trillium, tell us what you are seeing when it comes to the impetus for doing data integration. Perhaps in the past, folks saw this as too daunting and complex or involved skill sets that they didn't have. But it seems now that we have a rationale for wanting to have a much better handle on as much data as possible. What's driving the need for this?

Percuoco: I would definitely agree with what Betsy and Jon said. In dealing with that kind of client base, I can see that a lot of the principles and a lot of the projects are in their infancy, even with some of the senior architects in the business. Certain companies, by their nature, deal with volume data. Telecom providers or credit card companies are being forced into building these large data repositories because the current business needs would support that anyway.

So they’re really at the forefront of most of these. What we have are large data-migration projects. There are disparate sources within the companies, siloed bits of information that they want to put into one big-data repository.

Mostly, it's used from an analytics or BI standpoint, because now you have the capability of using big-data SQL engines to link and join across disparate sources. You can ask questions and mine information in ways that you never could before.

The aspect of extract, transform, load (ETL) will definitely be affected with the large data volumes, as you can't move the data like you used to in the past. Also, governance is becoming a stronger force within companies, because as you load many sources of data into one repository, it’s easier to have some kind of governance capabilities around that.

Higher scales

Gardner: Betsy, it sounds as if the technology has moved in such a way that big-data analytics, the platform for doing analysis, has become much more capable of dealing at higher scales and faster speeds at lower costs. But we still come back to that same problem of getting to the data, putting it in a format that can be used, directing it, managing that flow, automating it, and then, of course, dealing with the compliance, governance, risk, and security issues.

Is that the correct read on this, that we've been able to move quite well in terms of the analytics engine capability, but we're still struggling with getting the fuel to that engine?

Bilhorn: I would absolutely agree with that. When you look at the trends out there, when we talk about big data, big analytics and all of that, that's moved much faster than capturing those data sources and getting them there. Again, it goes back to all of these sources Jon was referring to. Some of these systems that we want to get the data from were never built to be open. So there is a lot of work just to get them out of there.

The other thing a lot of people like to talk about is an application programming interface (API) economy. "We will have an API and we can get through web services at all this great stuff," but what we’ve seen in building a platform ourselves and having that connectivity, is that not all of those APIs are created equal.

The vendors who are supplying this data, or these data services, are kind of shooting themselves in the foot and making it difficult for the customer to consume them, because the APIs are poorly written and very hard to understand, or they simply don’t have the performance to even get the data out of the system.

On top of that, you have other vendors who have certain types of terms of service, where they cut off the service or they may charge you for it. So when they talk about how it's great that they can do all these analytics, in getting the data in there, there are just so many show stoppers on a number of fronts. It's very, very challenging.

Gardner: Let's think about what we are doing in terms of expanding the requirements for business activities and values here. Customer relationship management (CRM), I imagine, paved the way, where we’re trying to get a single view of the customer across many different types of activities and data. But now, we’re pushing the envelope to a single view of the patient across multiple healthcare organizations, or a single view of a process that has a cloud part, an on-premises part, and an ecosystem supply-chain part.

It seems as if we’ve moved into more complexity here. Jon Petrucelli, how are the systems keeping up with these complex demands, expanding concentric circles of inclusion, if you will, when it comes to a single view of an object, individual, or process?

Petrucelli: That’s a huge challenge. Some people might call it data taxonomy, data structuring, or data hygiene, but you have to be able to define a unique identifier for your primary object in the data. That’s what we see. Sometimes, businesses have a hard time deciding on that, but usually it jumps out at you.

The only things that will transact business with you in the world are people or organizations, generally speaking. A dog, a tree, or an asset is not going to actually transact business with you.

Master key

We have specialists on our team who do this taxonomy, architects who help organizations figure out what a master key is, a master global unique identifier for an object. Then, you come up with a schema that allows you either to use an existing identifier or to concatenate a bunch of the data together to create one. That becomes the way you relate all of the objects to each other and sets the foreign key that they hook up to.
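To make that concrete, here is a minimal sketch of the kind of concatenated master key Petrucelli describes. The field names and normalization rules are hypothetical; a real taxonomy is designed per organization.

```python
# Minimal sketch of a concatenated "master key": normalize a few
# identifying fields, join them, and hash the result so every system
# derives the same fixed-width identifier for the same organization.
# Field names and normalization rules here are hypothetical.
import hashlib

def master_key(record: dict) -> str:
    parts = [
        record.get("org_name", "").strip().lower(),
        record.get("email", "").strip().lower(),
        record.get("postal_code", "").strip(),
    ]
    return hashlib.sha1("|".join(parts).encode("utf-8")).hexdigest()

crm_row = {"org_name": "Acme Corp", "email": "jon@acme.com", "postal_code": "78701"}
ticket_row = {"org_name": " ACME corp ", "email": "JON@acme.com", "postal_code": "78701"}

# Both systems resolve to the same key, which can then serve as the
# foreign key that relates their records to each other.
assert master_key(crm_row) == master_key(ticket_row)
```

Hashing is just one way to get a fixed-width key; an organization could equally keep the raw concatenation as the identifier.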

Gardner: I think that helps illustrate how far you can go with this. It seems, though, as if you have to get your own house in order -- your own legacy applications, your own capabilities -- before you can start to expand and gain some of these competitive advantages. It seems that the more data you can bring to bear on your analytics, the more predictive, the more precise, and the more advantageous your business decisions will be.

I think we understand the complexity, but let's take it back inside the organization. Rick, tell us first about what Trillium Software does and how you're seeing organizations take the steps to begin to get the skills, expertise, and culture to make data integration and data lifecycle management happen better.

Percuoco: Trillium Software has always been a data-quality company. We have a fairly mature and diverse platform that you push data through. For analytics, for risk and compliance, or for anything where you need to use your data to calculate risk-quotient ratios or build the models whereby you run your business, the quality of your data is very, very important.

If you’re using that data that comes in from multiple channels to make decisions in your business, then obviously data quality and making that data the most accurate that it can be by matching it against structured sources is a huge difference in terms of whether you'll be making the right decisions or not.

With the advent of big data and the volume of more and varied unstructured data, the problem of data quality is on steroids now. You have a quality issue with your data. If anybody who works in any company is really honest with themselves and with the company, they see that the integrity of the data is a huge issue.

As the sources of data become more varied and they come from unstructured data sources like social media, the quality of the data is even more at risk and in question. There needs to be some kind of platform that can filter out the chatter in social media and the things that aren't important from a business aspect.

Gardner: Betsy Bilhorn, tell us about Scribe Software and how what Trillium and Hitachi Solutions are doing helps data management.

Bilhorn: We look at ourselves as the proverbial PVC pipe, so to speak, to bring data around to various applications and the business processes and analytics. Where folks like Hitachi leverage our platform is in being able to make that process as easy and as painless as possible.

We want people to get value out of their data, increase the pace of their business, and increase the value that they’re getting out of their business. That shouldn’t be a multi-year project. It shouldn’t be something that you’re tearing your hair out over and running screaming off a bridge.

As easy as possible

Our goal here at Scribe is to make that data integration and to get that data where it needs to go, to the right person, at the right time, as easily and simply as possible for companies like Hitachi and their clients.

Working with Trillium, one of the great things about that partnership is that it addresses the problem of garbage in/garbage out. Trillium provides the platform by which not only can you get your data where it needs to go, but you can also have it cleaned and deduplicated. You have better-quality data as it moves around your business. When you look at those three aspects together, that’s where Scribe sits in the middle.
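As a rough illustration of that deduplication step, here is the idea reduced to its simplest form. This is not Trillium's API; real matching engines use far richer fuzzy logic than an exact normalized key.

```python
# Keep the first record seen for each normalized key -- the simplest
# possible dedupe. Real data-quality tools match on many fields with
# fuzzy, weighted logic; the field name here is illustrative.
def dedupe(records, key_field="email"):
    seen, unique = set(), []
    for rec in records:
        key = rec.get(key_field, "").strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

rows = [
    {"name": "Jon", "email": "jon@acme.com"},
    {"name": "Jonathan", "email": " JON@acme.com"},  # same person, messy entry
]
print(dedupe(rows))  # only the first record survives
```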

Petrucelli: We used to do custom software integration. With a lot of our customers, we see a lot of custom .NET code, or other code sets, Java for example, that do the integration. They used to do that, and we still see some bigger organizations that are stuck on that stuff. That’s a way to paint yourself into a corner and make yourself captive to some developer.

We highly recommend that people move away from that and go to a platform-based middleware application like Scribe. Scribe is our preferred platform middleware, because that makes it much more sustainable and changeable as you move forward. Inevitably, in integration, someone is going to want to change something later on.

When you have a custom-code integration, someone has to actually crack open that code, take it offline, make a change, and then re-update the code, and it's all just pure spaghetti code.

With a platform like Scribe, it's very easy to pick up industry-standard training available online. You’re not held hostage anymore. It’s a graphical user interface (GUI), literally drag-and-drop mappings and interlock points. That’s a really amazing capability in their Scribe Online service. Even children can do an integration. It’s like the teaching technique developed at Harvard or MIT of putting puzzle pieces together: if the integration doesn’t work, the puzzle pieces don’t fit.

They’ve done a really amazing job of making integration for the rest of us, not just for developers. We highly recommend that people take a look at that, because it brings the power back to the business and takes it away from just one developer, a small development shop, or an outsourced developer.

That’s one thing. The other thing I want to add is that we see integration as critical to the success of projects, to high levels of adoption, and to return on investment (ROI). Adoption by the users and, ultimately, ROI for the business matter, because integration is like gas in a sports car. Without the gas, it's not going to go.

We want to give them one user experience and one user interface to make users productive -- especially sales reps in the CRM world and customer-service reps. You don’t want them tabbing between a bunch of different systems. So we bring them into one interface, and with a platform like Microsoft CRM, they can use their interface of choice.

They can move from a desktop, to a laptop, to a tablet, to a mobile device, and they’re seeing one version of the truth, because they’re all windows looking into the same realm. And what is tunneled into that realm comes through pipes that are Scribe.

Built-in integration

What we do for a lot of customers is intentionally build integration in using Scribe, because we know we can take them down from five different interfaces to one, where they get a 360-degree view of the customer who is calling them or whom they’re about to call on.

They’re really going to like that. Their adoption is going to be higher, and their productivity is going to be higher. If you can raise the productivity of the users, you can raise the top line of the company when you’re talking about a sales organization. So integration is the key to driving high levels of adoption, ROI, and productivity.

Gardner: Let's talk about some examples of how organizations are using these approaches, tools, methods, and technologies to improve their business and their data value. I know that you can’t always name these organizations, but let's hear a few examples of either named or non-named organizations that are doing this well, doing this correctly, and what it gets for them.

Petrucelli: One that pops to mind, because I just was recently dealing with them, is the Oklahoma City Thunder NBA basketball team. I know that they’re not a humongous enterprise account, but sometimes it's hard for people to understand what's going on inside an enterprise account.

Most people follow and are aware of sports. They have an understanding of buying a ticket, being a season ticket holder, and what those concepts are. So it's a very universal language.

The Thunder had a problem where they were using a ticketing system that would sell the tickets, but they had very little CRM capability. All this ticketing was done to the industry standard for ticketing, and that was great, but there was no way to track, for example, somebody's preferences. You’d have this record of Jon Petrucelli, who buys season tickets and comes to certain games. But that’s it; that’s all you’d have.

They couldn’t track who my favorite player was, how many kids I have, whether I was married, where I live, what my blog is, or what my Facebook profile is. People are very passionate about their sports team. They want to really be associated with them, and they want to be connected with those people. And the sports teams really want to do that, too.

So we had a great, award-winning project. It won a Gartner award and Microsoft awards. We helped the Oklahoma City Thunder leverage this great amount of rich interaction data: the transactional data, the ticketing data about every seat they sat in and every time they bought.

Rich information

That’s a cool record and that might be one line in the database. Around that record, we’re now able to wrap all the rich information from the internet. And that customer, that season ticket holder, wants to share information, so they can have a much more personalized experience.

Without Scribe and without integration, we couldn’t do that. With Scribe, we could easily deploy Microsoft CRM and integrate it into the ticketing system, so all this data was in one spot for the users. It was a true win-win-win, because not only did the Oklahoma City Thunder have a much more productive experience, but their season-ticket account managers could now call on someone and see their preferences. They could see everything they needed to track about them and see all of their ticketing history in one place.

And they could see if they’re attending or not attending, everything about what's going on with that very high-value customer. So that’s a win for them. They can deliver personalized service. On the other end of it, you have the customer, the season-ticket holder, and they’re paying a lot of money. For some of them, it’s a lifelong dream to have these tickets, or their family has passed them down. So this is a strong relationship.

Especially in this day and age, people expect a personalized touch and a personalized experience, and with integration, we were able to deliver that. With Scribe and the integration with the ticketing system, all of that sits in Microsoft CRM, where it's real-time, accessible, and insightful.

It’s not just data anymore; it's real-time insights coming out of the system. They could deliver a much better user experience, or customer experience, and they have been benchmarked against the best customer organizations in the world. The Oklahoma City Thunder are now rated as having the top fan experience in all of professional sports -- and it's directly attributable to the CRM platform and the data being driven into it through integration.

Gardner: Great. You can actually see where there is transformational benefit. It's not just iterative or nice to have; it really changes their business in a major way. Rick Percuoco, any thoughts there at Trillium Software on examples that exemplify why these approaches are so powerful?

Percuoco: I’ve seen a couple of pretty interesting use cases. One of them is with one of our technical partnerships. They also have a data platform, where they use a behavioral account-churn model. It's very interesting in that they take multiple feeds of different data: social media data, call-center data, data that was entered into a blog from a website. As Jon said, they create a one-customer view of all of those disparate sources of data, including social media, and then they map behavioral churn models for different vertical industries.

In other words, before someone churns their account within a particular industry -- like insurance, for example -- what steps do they go through? Do they send an e-mail to someone? Do they call the call center? Do they send social media messages? Then, through statistical analysis, they build these behavioral churn models.

They put transactional data through these models, and when certain accounts fall out at certain points, they match them against the strategic client list and then decide what to do at the different phases of the account-churn model.

I've heard of companies, large companies, saving as much as $100 million in account churn by basically understanding what the clients are doing through these behavioral churn models.
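To illustrate how such a model scores accounts, here is a hedged sketch using a logistic function over a few behavioral signals. The signals, weights, and threshold are invented for illustration; a real model is fit statistically against historical data on accounts that actually churned.

```python
# Hypothetical churn scoring: a logistic function over behavioral
# signals like the ones described above. A real model would learn
# these weights from historical churner data.
import math

WEIGHTS = {"support_calls_30d": 0.8, "negative_posts_30d": 1.1, "logins_30d": -0.3}
BIAS = -2.0

def churn_probability(account: dict) -> float:
    z = BIAS + sum(w * account.get(f, 0) for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

account = {"support_calls_30d": 3, "negative_posts_30d": 2, "logins_30d": 1}
if churn_probability(account) > 0.5:
    print("flag account for retention outreach")  # the "decide what to do" step
```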

Sentiment analysis

Probably the other most prevalent that I've seen with our clients is sentiment analysis. Most people are looking at social media data, seeing what people are saying about them on social media channels, and then using all different creative techniques to try and match those social media personas to client lists within the company to see who is saying what about them.
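A toy version of that persona-matching step might look like the following, using Python's standard-library difflib as a crude stand-in for production identity-resolution techniques. The names and threshold are illustrative.

```python
# Fuzzy-match a social-media persona against a client list. difflib's
# ratio() is a simple string-similarity measure; real matching blends
# many attributes (location, employer, handle history) with tuned weights.
from difflib import SequenceMatcher

clients = ["Jonathan Petrucelli", "Rebecca Chen", "Rick Percuoco"]

def best_client_match(persona: str, threshold: float = 0.7):
    score, name = max(
        (SequenceMatcher(None, persona.lower(), c.lower()).ratio(), c)
        for c in clients
    )
    return name if score >= threshold else None

print(best_client_match("jon petrucelli"))   # -> "Jonathan Petrucelli"
print(best_client_match("random troll 42"))  # -> None
```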

Sentiment analysis is probably the biggest use case that I've seen, but the account churn with the behavioral models was very, very interesting, and the platform was very complex. On top, it had a predictive analytics engine that had about 80 different modeling graphs, and it also had some data-visualization tools. So it was very, very easy to create charts and graphs, and it was actually pretty impressive.

Gardner: Betsy, do you have any examples that also illustrate what we're talking about when it comes to innovation and value around data gathering, analytics, and business innovation?

Bilhorn: I’m going to put a little bit of a twist on that. We had a recent customer, one of the top LED lighting franchisors in the United States, and they had a bit of a different problem. They have about 150 franchises out there, and they are all disconnected.

So, in the central office, I can't see what my individual franchises are doing and I can't do any kind of forecasting or business reporting to be able to look at the health of all my franchises all over the country. That was the problem.

The second problem was that they had decided to standardize on the NetSuite platform, and they wanted all of their franchises to use it. Obviously, for the individual franchise owner, NetSuite was a little too heavy, and they said overwhelmingly that they wanted QuickBooks.

This customer came to us and said, “We have a problem here. We can't find anybody to integrate QuickBooks to our central CRM system and we can't report. We’re just completely flying blind here. What can you do for us?”

Via integration, we were able to satisfy that customer requirement. Their franchises can use QuickBooks, which was easy for them, and with all of that information synchronized back from the franchises into the central CRM, they were able to do all kinds of analytics, reporting, and dashboarding on the health of the whole business.

The other side benefit, which also makes them very competitive, is that they’re able to add franchises very, very quickly. They can have a new franchise's entire IT systems up and running in 30 minutes, all integrated. So the franchisee is ready to go. They have everything there, they can use a system that’s easy for them to use, and this company has them up and is getting their data right away.

Consistency and quality

So that’s a little bit different. It's not social big data, but it’s a problem that a lot of businesses face: how do I even get these systems connected so I can run my business? This rapid, repeatable model for this particular business is pretty new. In the past, we’ve seen a lot of people try to wire things up with custom code, where everything is ad hoc. They’re able to stand up full IT systems in 30 minutes, every single time, over and over again, with a high level of consistency and quality.

Gardner: Well we have to begin to wrap it up, but I wanted to take a gauge of where we are on this. It seems to me that we’re just scratching the surface. It’s the opening innings, if you will.

Will we start getting these data visualizations down to mobile devices, or have people inputting more information about themselves, their devices, or the internet of things? Let's start with you, Jon. Where are we on the trajectory of where this can go?

Petrucelli: We’re working on some projects right now with geolocation, geofencing, and geosensing. When a user on a mobile device comes within range of a certain store, and they have downloaded the app and opted in, the app serves them up special offers to try to pull them into the store, the same way somebody standing outside a store might say, “Hey, Jon.” They know who I am and know my personalization, and when I come within range, the app knows my location.
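As a rough sketch of the geofencing check behind that scenario, the following computes the distance from an opted-in user's device to a store and triggers an offer inside a radius. The coordinates and radius are made up, and a production app would use the mobile platform's geofencing APIs rather than polling like this.

```python
# Haversine distance between the device and a (hypothetical) store;
# fire an offer only for opted-in users inside the geofence radius.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius in meters

STORE = (30.2672, -97.7431)  # hypothetical storefront in Austin
RADIUS_M = 150

def maybe_send_offer(lat, lon, opted_in):
    if opted_in and haversine_m(lat, lon, *STORE) <= RADIUS_M:
        print("push a personalized offer to the user's app")

maybe_send_offer(30.2675, -97.7434, opted_in=True)
```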

This could be somebody who has an affinity card with a certain retailer, or it could be a sports team whose organization knows, during an event at the venue, what a fan's preferences are, and it puts exactly the right offer in front of the right person, at the right time, in the right context, and with the right personalization.

We see some organizations moving to that level of integration. With all of the available technology, with the electronic wallets, now with Google Glass, and with smart watches, there is a lot of space to go. I don’t know if it's really relevant to this, but there is a lot of space now.

We’re more on the business-app side of it, and I don’t see that going away. Integration is really the key to driving high levels of adoption, which drives high levels of productivity, which can drive top-line gains and ultimately a better ROI for the company. That’s how we really look at integration.

Gardner: Where are we on the trajectory here for using these technologies to advance business?

Percuoco: You mentioned specifically location information, and, as Jon mentioned, it is germane to this discussion. There’s the concept of digital marketing, marketing coupons to people in real-time over their smartphones as they’re walking by businesses, and so forth. That’s definitely one of the very prevalent use cases for location objects.

Shopping patterns

There’s also an interesting one that goes on top of that, where you evaluate people's web-traffic shopping patterns using Google location objects. For large-ticket items, you can actually email them competitor coupons in real time: for example, a mile down the street, this one company has the item for $100 or $200 less.

It's another interesting use case kind of intelligent marketing through digital media in the mobile market. I also see the mobile delivery of information being critical as we move forward.

Pretty much all data integration or BI professionals are basically working parents. It’s very, very important to be able to deliver that information, at least in a dashboard format or a summary format on all the mobile devices. You could be at your kid’s Little League game or you could be out to dinner with your wife, but you may have to check things.

The delivery of information through the mobile market is critical, although the user experience has to be different. There needs to be a bunch of work in terms of data visualization, the user experience, and what to deliver. But the modern family aspects of life and people working are forcing the mobile market to come up to speed.

The other thing that I would say is in terms of integration methods and what Jon was talking about. You do have to watch out for custom APIs. Trillium has a connectivity business as does Scribe.

As long as you stick with industry-standard handshaking methods, like XML or JSON or web services and RESTful APIs, then usually you can integrate packages fairly smoothly. You really need to make sure that you're using industry-standard hand-offs for a lot of the integration methods. You have four or five different ways to do that, but it’s pretty much the same four or five.
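A sketch of that kind of industry-standard hand-off: pulling JSON records over a hypothetical RESTful endpoint with only the Python standard library. The URL, path, and token are placeholders, not a real vendor API.

```python
# Fetch records as JSON over HTTP -- the industry-standard hand-off.
# Because the payload is plain JSON over a RESTful call, any consumer
# can parse it without a vendor-specific driver. Endpoint is made up.
import json
import urllib.request

def fetch_accounts(base_url: str, token: str):
    req = urllib.request.Request(
        f"{base_url}/api/v1/accounts?modified_since=2013-11-01",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# accounts = fetch_accounts("https://api.example.com", "SECRET_TOKEN")
```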

Those would be my thoughts on the future. I also see cloud computing, platform as a service (PaaS), and software as a service (SaaS) really taking hold of the market. Even Microsoft platform tools like Office 365 and the email systems in CRM are all cloud-based applications now, and, to be honest, they’re better. The service is better, and there’s no on-premises footprint. I really see the market moving toward PaaS and SaaS in the cloud-computing market.

Gardner: What is Scribe Software's vision, and what are the next big challenges that you will be taking your technology to?

Bilhorn: Ideally, what I would like to see, and what I’m hoping for, is that with mobile and the consumerization of IT, business apps will act more like consumer apps, with more standard APIs, forcing better plug and play. This would be great for business. What we’re trying to do, in the absence of that, is create that plug-and-play environment and, as Jon said, make it so easy a child can do it.

Seamless integration

Our vision for the future is really flattening that out, but also being able to provide a seamless integration experience between these disparate systems, where at some point you wouldn’t even have to buy middleware as an individual business or a consumer.

The cloud vendors and legacy vendors could embed integration and make it truly plug and play, so that the individual user could do integration on their own. That’s where we would really like to get to. That’s the vision and where the platform is going for Scribe.

Gardner: Well, great. I’m afraid we’ll have to leave it there. We've been listening to a sponsored BriefingsDirect podcast discussion on how business intelligence and big-data trends are requiring improved access and automation to data flows from a variety of sources.

We've learned of ways that enterprises are effectively harvesting data in all its forms and creating integrations that foster better use of data throughout the entire lifecycle. The result has been the ability to exploit data strategically among more aspects of enterprise businesses and across more types of applications and processes.

So a huge thanks to our guest Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice. Thanks so much, Jon.

Petrucelli: Thank you, glad to be here.

Gardner: Also Rick Percuoco, Senior Vice President of Research and Development at Trillium Software. Thank you so much, Rick.

Percuoco: You’re welcome, Dana.

Gardner: And Betsy Bilhorn, Vice President of Product Management at Scribe Software. Thank you, Betsy.

Bilhorn: Thank you again, Dana.

Gardner: And also a huge thank you to our audience for joining this insightful discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back and listen next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.

Transcript of a BriefingsDirect podcast on how creating big-data capabilities has become a top business imperative in dealing with a flood of data from disparate sources. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Tuesday, August 06, 2013

HP Vertica General Manager Sets Sights on Next Generation of Anywhere Analytics Platform

Transcript of a BriefingsDirect podcast on how HP Vertica is evolving to meet the needs of enterprises as data continues to grow.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving their business performance through better access, use, and analysis of their data and information. This time, we’re coming to you directly from the HP Vertica Big Data Conference in Boston, and we're delighted to welcome the General Manager of HP Vertica for his debut on BriefingsDirect.

Please join me in welcoming Colin Mahony, General Manager at HP Vertica. Good to have you with us, Colin. [Follow Colin on Twitter.] [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Colin Mahony: Thanks, Dana. It’s great to be here. I appreciate you having me.

Gardner: Well, it's been well over two years since HP acquired Vertica and, as we begin the inaugural 2013 Big Data Conference, how would you best characterize how Vertica has evolved since its founding back in 2005?

Mahony: Oh, wow. We’ve evolved quite a bit. It’s been a busy couple of years here, certainly post-acquisition. But I think, at a high level, we’ve really shifted and expanded from being a very narrowly focused MPP column-store database company into an analytic platform company.

With that comes several developments, obviously on the product side, but also as an organization, going through that maturation in terms of being able to operate at a global scale across the spectrum of what you would expect an analytics provider to offer.

Gardner: And how do you characterize the difference between a store and a platform? Are there many ecosystem players or is this an organic evolution of your capabilities or both?

Mahony: It’s both, the ecosystem and the tools that you interact with. And of course, we support a very rich and vibrant ecosystem of business-intelligence (BI) tools, extract, transform, and load (ETL) tools, and other types of management tools. It's not just the ecosystem around the product, but also what's within our own products.

So it's adding a lot of the capabilities like backup and recovery, additional analytics capabilities beyond just standard SQL with the SDKs that Vertica supports, the ability to run both the procedural and the other types of code within the product, being able to express things like MapReduce beyond what a traditional database system would do.

Since the founding of the company, we've tried to take the best part of the database world and the best parts of the SQL world, but address the most challenging issues that traditional databases have had. So whether it is scalability or it’s being able to run things beyond SQL or it’s just the performance, those are all the things that we have taken into account while we built Vertica, and I think we have always been on the fast track to a platform.

We knew it would be a journey, and we knew that building a product and a platform from the bottom up is not an easy thing. But we also knew that once we got there, once we crossed that chasm, if you will, all those decisions that we made in the beginning about this product, about building an engine from the bottom up, would pay off.

Platform modularity

For probably the last year, that's where we’ve been. Right now, we're seeing that it’s easy to add functionality to the platform because of the modularity of the platform, and we can add that functionality without giving up any of the performance.

For me, it’s probably the most exciting time. Being part of HP offers us so many things that make it a lot easier to become a platform, not only on the development side, but a much greater ecosystem, a global scale, being able to support customers globally 24/7.

Gardner: This is a large conference. I'm pretty impressed with the attendance, but for our audience, this might be an introduction. Tell our listeners and readers a bit more about yourself and your background?

Mahony: I've been with Vertica since the beginning. In fact, long before Vertica, my background was always databases. I've always loved computer science and minored in it in my undergraduate degree. In my first job out of school, I was using a database (it's from one of our competitors now, so I won't name them), working with civilian US Government clients, and getting a lot of information published up to the web in the earliest days of the web.

I had a couple of other roles, but they were always very technology focused. Then I got my MBA on the business side and went into venture capital for seven years. That's where I met Mike Stonebraker, the founder of Vertica.

I just loved the idea. Everything I knew about databases and the challenges of traditional databases, and everything I knew about the new world order of information (at the time, we didn’t even talk about the term big data), seemed to align really well.

So I decided to leave the dark side of venture capital, and I jumped into something that I have been incredibly passionate about. If you look at that lifecycle, even my own background with Vertica and where we’ve come, it’s just been great. The timing was great, and, as always, it takes a lot more than just great technology and great people.

There is definitely a lot of luck and timing, and I had the fortune of stepping into the right market at the right time, being part of a great team, and learning from a lot of great people along the way.

This is our first user conference. It’s ironic that we've never had one before, but this is a testament to the scale I was referring to and what HP can bring. We have wanted a user conference since the beginning. Obviously, it takes some critical mass to get there, which we now have, but it also takes the support of an organization that knows how to do these conferences and understands the value of them.

So it's just wonderful to be here. It’s wonderful to see all of these partners, customers, employees and friends of Vertica and HP here in Boston, of course Vertica’s hometown, so truly exciting.

Gardner: You mentioned the marketplace and the timing. I have to go back to that, because in 2005, while scale and performance were very important, this whole notion of big data being so prevalent in the market really hadn't happened yet. What’s the state of the union, if you will, with this marketplace? Do more and more IT functions and business functions begin and end with big data? It seems to be at the center of so many things.

Exponential growth

Mahony: It is. To go back to the founding of Vertica, I remember when Mike Stonebraker was giving the early presentations on the need for it. He talked a lot about the exponential growth of data and how that was outpacing Moore’s law and other hardware laws. So much information was being created that there was no way just using more parallelized hardware was going to address the issue.

The state of the union back then was, just as you said, there was no such thing as big data, but I think Mike, as a visionary, knew what was going to happen in the industry. And it has happened.

It wasn’t a long time ago, but I remember that I was trying to find our first sample dataset that was over a terabyte and we had a difficult time finding it. When we would talk to the early customers, they looked at us like we were crazy when we were asking about a terabyte.

We have an easy time now finding terabytes of data. The state of the union today is that what's driving so much around big data is that you obviously have the volume, variety, and velocity that we talk about often, but what's really driving those three things is human information, whether it's social media, tweets, or the expressive content that’s just so prevalent right now, as well as machine information.

If you look at the traditional structured database market by any number, it’s a small percentage of the amount of data that’s out there. The strength of Vertica, and really the strength of HP overall, is that we have the best assets for the unstructured human information in Autonomy, as well as the best assets when it comes to machine information and large data.

That has some structure. It’s semi-structured information, but it’s not your traditional transaction system. The power of all of that data comes together when you can have an engine that applies some structure to it and then is able to deliver the analytics that the organization needs. It's both IT as well as line of business, and even this new category we often talk about, which is the data scientist.

One of the great things about this show here is that we’ve got Billy Beane of Moneyball fame as our keynote speaker. The reason that we wanted Billy to come speak here is that Moneyball is exactly what’s happening right now in the world when it comes to big data.

You have the data scientist or the statistician, you have the line-of-business folks, and you have IT. They all have a part to play in the success of how information is used in companies. By bringing them together and by making the software that much easier for them to come together and solve these problems, you can create very real and differentiated value within an organization.

So Moneyball is exactly what’s happening, certainly in corporate America, but also in government and in many other institutions that want to leverage information to be more efficient and create a competitive advantage.

Gardner: Before we delve into the latest and greatest with Vertica, let’s put some context around this. It’s only been a few months since the HP Discover 2013 Conference in Las Vegas where the HAVEn Initiative was announced. This puts Vertica in a very prominent place among other HP properties, technologies, platforms and approaches to solving this big data issue. Recap for us, if you would, what HAVEn is and why Vertica formed such an important pillar for this larger HP initiative?

Big-data lake

Mahony: What companies are looking for is this notion of the big-data lake. To me, it can mean many different things, but at the end of the day, companies want to take all the information assets that they have and they want to put them into a safe place, but a place where access to that information can be used by many different constituencies, whether it's IT, line of business, or data scientist.

So the notion of having a safe place, a harbor, or a port is what we announced as HP HAVEn, which is HP’s big data platform. It is primarily for analytics, but it can be used for just about anything when it comes to information and data.

What's so important about information right now is that there are different constituencies in the companies that want to take the information. First of all they want to capture all the information, not just structured, not just unstructured, but 100 percent of their information.

They want to get it to a place where they can leverage it and use it for a lot of different use cases, but the first part is getting that information into the right place. For us, that is the first of three components of HAVEn: the connectors.

We have over 700 connectors as part of HAVEn, coming from Autonomy and from our Enterprise Security Group and its ArcSight core Logger. Those connectors can carry human information, extreme log information, or traditional structured database information.

Step one is the connectors to get these components. Step two is to put that data into the best engine for that data. Vertica obviously is one component, but you also have the Autonomy IDOL engine, the ArcSight Logger engine, and open-source technologies like Hadoop, which is actually the "H" in HP HAVEn. So we’ve got a place to put the information.

Step three is any N number of applications. What I'm seeing happen in the industry right now is that, just as we went from mainframe to client-server, and from client-server to the web, we're in a period now where new applications are being developed. They're certainly web-based and distributed, but they're also analytical in nature.

They're driven by vast volumes of information, and they close the loop, meaning that the experiences happening within an application, whether you're driving a car or whatever it might be, pass information back, closed loop, to a system that can then optimize the experience. That is creating a new class of applications.

For that new class of applications, you need the platform to be able to drive them. What we're bringing together in HAVEn is Hadoop, Autonomy, Vertica, and Enterprise Security core assets, plus the N number of applications.

At Discover, we announced some of our own internal applications, which are powered by the HAVEn platforms. We announced our HP Analytics offering, which is built using Hadoop, Vertica, Enterprise Security, and Autonomy assets.

About community

We're making some of our own applications, but this is about the community and getting people to build a new set of applications that use these components to really change how people interact with their data.

That’s HAVEn, and I am always careful to point out to people that HAVEn itself is not a product. It's a platform, and a broader one than Vertica, Autonomy, or Enterprise Security alone. It’s a platform where 1+1+1+1+1, instead of equaling 5, should equal 8 or 10 or 12. That's the goal. Of course, it's also a roadmap into areas that each of these components is working on, to bring them closer together. So it’s exciting.

Gardner: Let’s look a bit more specifically at Vertica and try to factor why it’s differentiated in the market, but then also get a sense of where it’s going.

One of the things that strikes me about the market nowadays is that there seems to be a sense of tradeoffs going on when organizations try to pick their data engine or their platform. They have a set of values on one side, opposed by values on the other. They can’t have everything. One size does not fit all.

So how are you at Vertica able to help people deal with these tradeoffs that they're facing when it comes to a next-generation data platform?

Mahony: Before I explain the tradeoffs, I couldn’t agree with you more, Dana. In fact, Vertica was founded on the premise that one size does not fit all. Using a single OLTP transactional database to do everything, including analytics, just doesn't make a lot of sense.

If you think about the areas where people have to trade off, usually it’s scale for performance, or analytics functionality for performance. One of the things that I've spent a lot of time looking at, especially over the last couple of years, is some of the alternative platforms, not just for analytics, but for all of the different data needs.

You can take something like Hadoop as an example. Hadoop really is a distributed file system, with capabilities to run rudimentary analytics and to transform and process data. But I think what people love about Hadoop is that it's really easy to load data into it. You don't have to define the schema or anything.

Instead of schema-on-write at load time, it’s schema-on-read at query time. People like that. They also like at least the perception that it is free, and they like the scalability of it. On the database side, what people love about the database is that you're going to get really good performance, because the data is structured. If you're using a next-generation MPP platform like Vertica, you get both the performance and the scalability.
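To contrast the two loading styles Mahony describes, here is a small sketch: schema on write validates structure when data is loaded, as a database does, while schema on read stores raw records and imposes structure only at query time, as Hadoop does. The field names are illustrative.

```python
# Schema on write: reject malformed rows up front, at load time.
# Schema on read: store anything, apply structure when you query.
import json

def load_schema_on_write(lines, table):
    for line in lines:
        row = json.loads(line)  # raises immediately on bad input
        table.append({"user": str(row["user"]), "amount": float(row["amount"])})

def total_schema_on_read(lines):
    total = 0.0
    for line in lines:
        try:
            total += float(json.loads(line).get("amount", 0))
        except ValueError:
            continue  # junk stays in the file; it just doesn't parse
    return total

raw = ['{"user": "a", "amount": "9.50"}', "not json at all"]
print(total_schema_on_read(raw))  # 9.5 -- the bad line is skipped at read time
```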

So what we’re trying to do and what we've always done a pretty good job of at Vertica is look at the things that would make sense for Vertica to do. We look at expanding the platform in ways that, number one, we have the expertise and the capability to do, not only from the development standpoint, but from the support standpoint. And number two, we have the ability to create something differentiated. If we don't, or it’s not core, then we won’t do it, sticking to the purity of one size doesn’t fit all.

Hadoop-like

We've been doing a lot of work in areas like making it easier to get the data into the platform, doing more with it, making it seem much more like a Hadoop-like environment. You can look at our past releases and see that there's been a lot of work done on that and we continue to make those investments.

One thing has been consistent at Vertica since the beginning. What we focus on is to make it really easy for people to get information onto the platform. Then, we make sure we continue to deliver new capabilities, performance, and functionality within the platform.

We make sure we’re enabling our customers and partners to deploy Vertica anywhere and everywhere, whether it’s cloud, appliances, software, or the like. Those are the three tenets of the company. It’s all around this notion of making data matter and helping people make better decisions that lead to better outcomes with superior information.

There's so much that can be done in this space, but I think the key for us is to focus on the things that we know we do really well. The good news is that it's such a large space with so many demands that we know we can make a huge impact without trying to take on the world. We know we can make a huge impact in what we’re doing.

I think you'll continue to see some interesting developments along the lines of what I'm describing, and it's very much in line with where we've been.

Gardner: While we're at the user conference, there are some great use cases and examples here. One of my favorite points about communication is that it's always better to show than to tell.

Of the various user organizations and use cases here, are there any that jump out at you personally when you think about what Vertica started out as and what it became? Are there ways that some users are putting this to work that really capture, "This is what we intended, and this is what we went through those paces to allow, to encourage, and to now see the fruits of"?

So, from all of the happenings here with the conference, what sort of gets your blood flowing?

Mahony: One thing I've certainly noticed over the years with our customers is that the shiny object of why a customer chooses Vertica may look very different across our customers. For some, it's the price. For some, it's the performance and the scale, massive volumes. For some it's a particular analytic function or several pattern matching capabilities. And for others, it's something entirely different.

But what's so exciting, especially about this conference, is that no matter what on-ramp they take, they tend to find a lot of the other capabilities once they get on. Hopefully, here at the conference, we're going to accelerate some of that just by getting our customers and our partners together in an environment where they can share stories.

Partners and customers

In fact, if you look at the agenda for the conference, it's very light on Vertica presentations. It's very heavy on partner and customer presentations, because this is the time that we want our partners and our customers to learn from each other. We want them to talk about how they are using it.

To answer your question directly, what gets me most jazzed up is when a customer is taking advantage of nearly everything that we do. Again, it's a cycle. It's not something that can happen immediately.

There are so many customers here who have been with us for four or five years and have been great partners for the Vertica organization in terms of the features we're developing and the direction we're taking the product. They tend to be the ones who are using just about every feature in the product. So it gets me really excited.

What gets me excited is a customer that has massive volumes of information, a lot of diversity in the information, and many different line-of-business constituents who are accessing the information: data scientists, DBAs, programmers, different people who are creating applications and keeping the system up through all that change in the organization.

Sometimes it's not only change in the organization, but potentially change in the industry, changing the way that people interact with data, maybe changing healthcare outcomes, or drastically improving the quality of mobile phone service or other types of services.

So there isn't any one customer of whom I'd say, "You have to go see these guys." The reality is that you should see all of our customers and hear what they have to say. For me, that's the most important part of this conference.

It is about the connection between our customers and our partners, so that they can talk to each other. We can just be a fly on the wall and listen to some of the things that they're saying, good, bad, or ugly -- hopefully very good. But we can even hear things that they want us to improve. That's an important part of any company, certainly a software company, and that's what we're hoping to get out of it. For our customers and partners, they're going to get a lot of out of this just by talking to each other.

Gardner: Colin, what about the notion of business transformation? We've been hearing about this for 30 years. It's been a big part of the academic work in business schools. Process re-engineering has evolved into balanced scorecards, and the flavor of the day is about how to change the nature of companies.

But it strikes me that this whole greater-than-the-sum-of-the-parts effect that you alluded to earlier -- where data and analytics are made more readily available across applications inside the company -- means a company can then access more types of information across the boundaries of the organization, into supply chains and ecosystems.

Getting more detailed information in real time about customers and the marketplace probably has as much or more of an opportunity to transform businesses as just about anything else that's happened over the past 20 years, with the possible exception of the Internet itself.

More than technology

So without going too far up the hype curve, the incredible amount of attention paid to big data in the past few years is about more than the technology. It's really about an empirical, data-driven approach -- a cultural shift, if you will -- within businesses. How have you been seeing that manifest itself here at the conference?

Mahony: It's an enormous opportunity for business transformation, and the whole is definitely greater than the sum of the parts. What makes companies really successful with information is not trying to boil the ocean, not trying to do a traditional enterprise data warehouse project that's going to take 24 months if you're lucky -- 36 most likely.

They'll end up with some monolithic, inflexible platform that will probably be outdated by the time it gets deployed. What makes a lot of companies successful is that they find a particular use case, a problem area that they want to drill down on, and they mobilize to do it.

For that, they need a solution that can be deployed quickly, but that also has the capability to become something much larger. Whether it's Vertica, Talend, or any of the other portfolios that we offer, we strive to make sure that somebody can get up and running quickly -- whether it's Autonomy with human information analytics, or Vertica with machine data and other types of structured, transactional data.

The most important thing is that you find that business case, you focus on it, and you prove it out very quickly. There's something we refer to as "Time to Terabyte," which for Vertica is typically less than a month. You get a return on investment (ROI) in less than a month on the investments that you've made. If you prove that out, then everybody in the organization is happy: the line of business, the technology folks in IT, even the statisticians and data scientists.

From there, you start expanding the project, and that's exactly how we win most of our customers. We very rarely go in and say, "Buy an enterprise license for our product across the company." We certainly do those, but more typically we get into a business unit, we find the acute pain, and we solve that problem.

What they're betting on is our ability to expand and their ability to expand on this platform. That's why we are, on the one hand, all about the platform and the integration, but on the other hand, not about to lose the flexibility and the modularity of what we do, because that's also a huge differentiator for HP's portfolio.

I think this is a wonderful time in the world of business transformation, because, unlike what has been talked about for the last 30 years, you now have the data that can back it up and prove it to the organization in real time.

That's the big difference. You gave the balanced scorecard as an example. If you look at the balanced scorecard methodology, you can take that methodology, drill down into a thousand fields of detail, and get that information in real time. That's the opportunity here, and that's why, I think, this market is so huge.

It's not just about faster speeds and feeds. It's about fundamentally stepping back and asking how we're running this business. What assets, especially information assets, do we have that could dramatically boost productivity to the same extent that computers boosted productivity when they were first introduced? That's the goal that everybody is looking for when it comes to information.

Cloud and hybrid

Gardner: For our last item today, I wonder if we could take out our crystal ball apparatus and try to do a little blue-sky thinking. One of the other big trends these days, of course, is cloud computing and hybrid models for the distribution of workloads -- for applications, but also for data. I'm wondering, as we go down this journey over the next year or two, how do big data and cloud computing come together?

There's this notion of an analytics platform as a service (PaaS), deployed for developers, but now maybe more for data scientists and for those doing BI and other analytic chores. How do you foresee this whole greater-than-the-sum-of-the-parts effect extending beyond the technical capabilities into the deployment models, and what does that portend for additional paybacks or payoffs?

Mahony: As I mentioned in terms of the three things we're focused on, number one is to make it easy to get data into the platform. Number two is to do a lot more with the platform, so that there are better analytic capabilities, better pattern matching, and better analytics packs on top of it.

Number three is to make sure you can deploy Vertica everywhere, and in the everywhere-and-anywhere category, the cloud is certainly the first name that comes to mind. That is absolutely the future of computing. In some ways, I guess, it's the past, but it's interesting how the past repeats itself.

We do run Vertica in hosted environments like the Amazon cloud. We're in a private beta on the HP Cloud Service. So there are definitely offerings and developments that have been underway here at Vertica for a while.

We embrace that, and to us, it's not mutually exclusive. What you described is the hybrid environment, where you can run certain things locally and burst up to the cloud for other workloads, especially if you're looking to pull in some quick processing power and storage. That's going to be the future, and that's the way, just like any other utility, that we're going to consume some of these capabilities.

This is one of the strengths of a company with the size and scale of HP. We have these offerings, whether it's software-only, appliance, or cloud. We have the ability to deliver however the customer wants it, and we can provide not only the flexible technologies, but also the flexible business capabilities to make that happen with a lot of ease.

It's an exciting time. If you look at the pillars of HP, we have cloud, mobility, big data, and security. All four of those pillars tie well into one another, because they're all related. Of course, all these activities happening up on the cloud are generating a lot of information, information that will be analyzed, I'm sure, in many different ways.

So it's something that feeds on itself, the same way that mobility does. All of that is a good thing for the analytics space, wherever it is. The final thing I would say is that the most important thing about analytics is that you want it embedded in your various applications -- just like when you're driving a car, you just want the GPS system to tell you where you're going.

Analytics is the same. You want it within the context of whatever it is that you are doing. Given that so many things are going to be served off the cloud, it's natural that that's the place that will host some of the analytics as well.

So it's an incredibly exciting time, and we're looking forward to having many more of these User Conferences and are certainly going to enjoy the rest of the show this week.

Gardner: Well, great. I'm afraid we'll have to leave it there. We've been learning more about the ongoing evolution of the HP Vertica platform and its capabilities, and we've developed a better understanding of Vertica's growing role in making some of the most challenging big-data analytics chores more successful and impactful.

So, join me in extending a huge thank you to our special guest, Colin Mahony, General Manager at HP Vertica. Thanks so much.

Mahony: Thank you, Dana. [Follow Colin on Twitter.]

Gardner: And also thank you to our audience for joining us for this special HP Discover Performance podcast, coming to you from the HP Vertica Big Data Conference in Boston.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions; your host for this ongoing series of HP sponsored discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how HP Vertica is evolving to meet the needs of enterprises as data continues to grow. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in: