Friday, December 06, 2013

As Big Data Pushes Enterprises into Seeking More Data Types, Standard and Automated Integrations Far Outweigh Coded Connections

Transcript of a BriefingsDirect podcast on how creating big-data capabilities has become a top business imperative in dealing with a flood of data from disparate sources.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the top new business imperatives: Creating big-data capabilities and becoming a data-driven organization.

We’ll examine how business-intelligence (BI) trends are requiring access and automation across data flows from a variety of sources, formats, and from many business applications.

Our discussion focuses on ways that enterprises are effectively harvesting data in all its forms, and creating integration that fosters better use of data throughout the business process lifecycle.

Here now to share their insights into using data strategically by exploiting all of the data from all of the applications across business ecosystems, we're joined by Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice, based in Austin, Texas. Welcome, Jon.

Jon Petrucelli: Thanks, Dana.

Gardner: We’re also here with Rick Percuoco, Senior Vice President of Research and Development at Trillium Software in Bedford, Mass. Welcome, Rick.

Rick Percuoco: Hi, Dana. Thank you.

Gardner: And we're also joined by Betsy Bilhorn, Vice President of Product Management at Scribe Software in Manchester, N.H. Welcome, Betsy. [Disclosure: Scribe Software is a sponsor of BriefingsDirect podcasts.]

Betsy Bilhorn: Thank you, Dana.

Gardner: Betsy, let me start with you. We know that more businesses are trying to leverage and exploit their data, helping them to become more agile, predictive, and efficient. What's been holding them back from gaining access to the most relevant data? What's the roadblock here?

Bilhorn: There are a couple of things. One is the explosion in the different types and kinds of data. Then, you start mixing that with legacy systems that have always been somewhat difficult to get to. Bringing those all together and making sense of that are the two biggest ones. Those have been around for a long, long time.

That problem is getting exponentially harder, given the variety of those data sources, and then all the different ways to get into those. It’s just trying to put all that together. It just gets worse and worse. When most people look at it today, it almost seems somewhat insurmountable. Where do you even start?

Gardner: Jon, how about your customers, at Hitachi? What are you seeing in terms of the struggle that they're facing in getting better data for better intelligence and analytics?

Legacy systems

Petrucelli: We work with a lot of large enterprise, global-type customers. To build on what Betsy said, they have a lot of legacy systems. A lot of data is captured inside those legacy systems, and those systems were not designed with open architectures for sharing their data with other systems.

When you're dealing with modern systems, it's definitely getting easier. When you deal with middleware software like Scribe, especially with Scribe Online, it gets much easier. But the biggest thing that we encounter in the field with these larger companies is a lack of understanding of modern middleware and integration, and of what the business needs. Does it really need real-time integration?

Some of our customers definitely have a good understanding of what the business wants and what their customers want, but usually the evaluator, decision-maker, or architect doesn’t have a strong background in data integration.

It's really a people issue. It's an educational issue of helping them understand that this isn't as hard as they think it is. Let's scope it down. Let's understand what the business really needs. Usually, that becomes something a lot more realistic, pragmatic, and easier to do than they originally anticipated going into the project.

In the last 5 to 10 years, we've seen data integration get much easier to do, and a lot of people just don’t understand that yet. That’s the lack of understanding and lack of education around data integration and how to exploit this big-data proliferation that’s happening. A lot of users don't quite understand how to do that, and that’s the biggest challenge. It’s the people side of it. That’s the biggest challenge for us.

Gardner: Rick Percuoco at Trillium, tell us what you are seeing when it comes to the impetus for doing data integration. Perhaps in the past, folks saw this as too daunting and complex or involved skill sets that they didn't have. But it seems now that we have a rationale for wanting to have a much better handle on as much data as possible. What's driving the need for this?

Percuoco: I would definitely agree with what Betsy and Jon said. In dealing with that kind of client base, I can see that a lot of the principles and a lot of the projects are in their infancy, even with some of the senior architects in the business. Certain companies, by their nature, deal with volume data. Telecom providers or credit card companies are being forced into building these large data repositories because the current business needs would support that anyway.

So they’re really at the forefront of most of these. What we have are large data-migration projects. There are disparate sources within the companies, siloed bits of information that they want to put into one big-data repository.

Mostly, it's used from an analytics or BI standpoint, because now you have the capability of using big-data SQL engines to link and join across disparate sources. You can ask questions and mine information that you never could before.

The aspect of extract, transform, load (ETL) will definitely be affected with the large data volumes, as you can't move the data like you used to in the past. Also, governance is becoming a stronger force within companies, because as you load many sources of data into one repository, it’s easier to have some kind of governance capabilities around that.

Higher scales

Gardner: Betsy, it sounds as if the technology has moved in such a way that big-data analytics, the platform for doing analysis, has become much more capable of dealing at higher scale, faster speeds, and lower cost. But we still come back to that same problem of getting to the data, putting it in a format that can be used, directing it, managing that flow, automating it, and then, of course, dealing with the compliance, governance, risk, and security issues.

Is that the correct read on this, that we've been able to move quite well in terms of the analytics engine capability, but we're still struggling with getting the fuel to that engine?

Bilhorn: I would absolutely agree with that. When you look at the trends out there, when we talk about big data, big analytics and all of that, that's moved much faster than capturing those data sources and getting them there. Again, it goes back to all of these sources Jon was referring to. Some of these systems that we want to get the data from were never built to be open. So there is a lot of work just to get them out of there.

The other thing a lot of people like to talk about is an application programming interface (API) economy. "We will have an API and we can get through web services at all this great stuff," but what we’ve seen in building a platform ourselves and having that connectivity, is that not all of those APIs are created equal.

The vendors who are supplying this data, or these data services, are kind of shooting themselves in the foot and making it difficult for the customer to consume them, because the APIs are poorly written and very hard to understand, or they simply don’t have the performance to even get the data out of the system.

On top of that, you have other vendors with certain terms of service, where they cut off the service or may charge you for it. So while it's great to talk about doing all these analytics, when it comes to getting the data in there, there are just so many showstoppers on a number of fronts. It's very, very challenging.

Gardner: Let's think about what we are doing in terms of expanding the requirements for business activities and values here. Customer relationship management (CRM), I imagine, paved the way, where we're trying to get a single view of the customer across many different types of data and activities. But now, we're pushing the envelope to a single view of the patient across multiple healthcare organizations, or a single view of a process that has a cloud part, an on-premises part, and an ecosystem supply-chain part.

It seems as if we’ve moved in more complexity here. Jon Petrucelli, how are the systems keeping up with these complex demands, expanding concentric circles of inclusion, if you will, when it comes to a single view of an object, individual, or process?

Petrucelli: That’s a huge challenge. Some people might call it data taxonomy, data structuring, or data hygiene, but you have to be able to define a unique identifier for your primary object in the data. That’s what we see. Sometimes, businesses have a hard time deciding on that, but usually it jumps out at you.

The only things that will transact business with you in the world are people or organizations, generally speaking. A dog, a tree, or an asset is not going to actually transact business with you.

Master key

We have specialists on our team who do this taxonomy, architects who help organizations figure out what a master key is -- a master globally unique identifier for an object. Then, you come up with a schema that allows you either to use an existing identifier or to concatenate several fields of the data together to create one. That becomes the way you relate all of the objects to each other; it sets the foreign key that they hook up to.
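To make the idea concrete, here is a minimal Python sketch of how such a concatenated master key might be derived. The field names, normalization rules, and hashing step are hypothetical illustrations, not Hitachi's actual schema; a real taxonomy exercise would choose them to fit the business.

```python
# Minimal sketch: derive a master key for a customer record by
# concatenating existing identifying fields, then hashing the result.
# Field names (name, postal_code, dob) are hypothetical placeholders.
import hashlib

def master_key(record: dict) -> str:
    # Normalize the component fields so formatting differences
    # (case, stray whitespace) don't produce different keys.
    parts = [
        record.get("name", "").strip().lower(),
        record.get("postal_code", "").strip(),
        record.get("dob", "").strip(),
    ]
    composite = "|".join(parts)
    # Hash the concatenation into a fixed-length identifier that can
    # serve as the foreign key other objects hook up to.
    return hashlib.sha256(composite.encode("utf-8")).hexdigest()[:16]

crm_row = {"name": "Jon Petrucelli", "postal_code": "78701", "dob": "1970-01-01"}
ticketing_row = {"name": "JON PETRUCELLI ", "postal_code": "78701", "dob": "1970-01-01"}
assert master_key(crm_row) == master_key(ticketing_row)  # same person, same key
```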

Gardner: I think that helps illustrate how far you can go with this. It seems, though, as if you have to get your own house in order -- your own legacy applications, your own capabilities -- before you can start to expand and gain some of these competitive advantages. It seems that the more data you can bring to bear on your analytics, the more predictive, precise, and advantageous your business decisions will be.

I think we understand the complexity, but let's take it back inside the organization. Rick, tell us first about what Trillium Software does and how you're seeing organizations take the steps to begin to get the skills, expertise, and culture to make data integration and data lifecycle management happen better.

Percuoco: Trillium Software has always been a data-quality company. We have a fairly mature and diverse platform for data that you push through. For analytics, for risk and compliance, or for anything where you need to use your data to calculate risk quotients, ratios, or the models by which you run your business, the quality of your data is very, very important.

If you’re using that data that comes in from multiple channels to make decisions in your business, then obviously data quality and making that data the most accurate that it can be by matching it against structured sources is a huge difference in terms of whether you'll be making the right decisions or not.

With the advent of big data and the volume of more and varied unstructured data, the problem of data quality is on steroids now. You have a quality issue with your data. If anybody who works in any company is really honest with themselves and with the company, they see that the integrity of the data is a huge issue.

As the sources of data become more varied and they come from unstructured data sources like social media, the quality of the data is even more at risk and in question. There needs to be some kind of platform that can filter out the chatter in social media and the things that aren't important from a business aspect.

Gardner: Betsy Bilhorn, tell us about Scribe Software and how what Trillium and Hitachi Solutions are doing helps data management.

Bilhorn: We look at ourselves as the proverbial PVC pipe, so to speak, to bring data around to various applications and the business processes and analytics. Where folks like Hitachi leverage our platform is in being able to make that process as easy and as painless as possible.

We want people to get value out of their data, increase the pace of their business, and increase the value that they’re getting out of their business. That shouldn’t be a multi-year project. It shouldn’t be something that you’re tearing your hair out over and running screaming off a bridge.

As easy as possible

Our goal here at Scribe is to make that data integration and to get that data where it needs to go, to the right person, at the right time, as easily and simply as possible for companies like Hitachi and their clients.

Working with Trillium, one of the great things with that partnership is obviously that there is the problem of garbage in/garbage out. Trillium provides that platform by which not only can you get your data where you need it to go, but you can also have it clean and you can have it deduped. You can have a better quality of data as it's moving around in your business. When you look at those three aspects together, that’s where Scribe sits in the middle.

Petrucelli: We used to do custom software integration. With a lot of our customers, we see a lot of custom .NET code or other codesets, Java for example, that do the integration. They used to do that, and we still see some bigger organizations that are stuck on that stuff. That's a way to paint yourself into a corner and make yourself captive to some developer.

We highly recommend that people move away from that and go to a platform-based middleware application like Scribe. Scribe is our preferred platform middleware, because that makes it much more sustainable and changeable as you move forward. Inevitably, in integration, someone is going to want to change something later on.

When you have a custom-code integration, someone has to actually crack open that code, take it offline, make a change, and then redeploy the code -- and it's all just pure spaghetti code.

With a platform like Scribe, it's very easy to pick up the industry-standard training available online. You're not held hostage anymore. It's a graphical user interface (GUI). It's literally drag-and-drop mappings and interlock points. That's a really amazing capability in their Scribe Online service. Even children can do an integration. It's like a teaching technique developed at Harvard or MIT for learning by putting puzzle pieces together. If it doesn't work, the puzzle pieces don't fit.

They've done a really amazing job of making integration for the rest of us, not just for developers. We highly recommend that people take a look at that, because it brings the power back to the business and takes it away from a single developer, a small development shop, or an outsourced developer.

That's one thing. The other thing I want to add is that we see integration as critical to the success of projects in reaching high levels of adoption and return on investment (ROI). Adoption by the users and, ultimately, ROI for the business are important, because integration is like gas in a sports car. Without the gas, it's not going to go.

We want to give them one user experience or one user interface to keep users productive -- especially sales reps in the CRM world and customer service reps. You don't want them tabbing between a bunch of different systems. So we bring them into one interface, and with a platform like Microsoft CRM, they can use their interface of choice.

They can move from a desktop, to a laptop, to a tablet, to a mobile device, and they're seeing one version of the truth, because they're all looking through windows into the same realm. And what is tunneled into that realm comes through pipes that are Scribe.

Built-in integration

What we do for a lot of customers is intentionally build integration in using Scribe, because we know that if we can take them down from five different interfaces to one, they get a 360-degree view of the customer who's calling them or whom they're about to call on.

They're really going to like that. Their adoption is going to be higher, and their productivity is going to be higher. If you can raise the productivity of the users, you can raise the top line of the company when you're talking about a sales organization. So, integration is the key to driving high levels of adoption, ROI, and productivity.

Gardner: Let's talk about some examples of how organizations are using these approaches, tools, methods, and technologies to improve their business and their data value. I know that you can’t always name these organizations, but let's hear a few examples of either named or non-named organizations that are doing this well, doing this correctly, and what it gets for them.

Petrucelli: One that pops to mind, because I just was recently dealing with them, is the Oklahoma City Thunder NBA basketball team. I know that they’re not a humongous enterprise account, but sometimes it's hard for people to understand what's going on inside an enterprise account.

Most people follow and are aware of sports. They have an understanding of buying a ticket, being a season ticket holder, and what those concepts are. So it's a very universal language.

The Thunder had a problem: they were using a ticketing system that would sell the tickets, but it had very little CRM capability. All this ticketing was done with the industry-standard system for ticketing, and that was great, but there was no way to track, for example, somebody's preferences. You'd have this record of Jon Petrucelli, who buys season tickets and comes to certain games. But that's it; that's all you'd have.

They couldn't track who my favorite player was, how many kids I have, if I was married, where I live, what my blog is, what my Facebook profile is. People are very passionate about their sports team. They want to really be associated with them, and they want to be connected with those people. And the sports teams really want to do that, too.

So we had a great, award-winning project. It won a Gartner award and Microsoft awards. We helped the Oklahoma City Thunder leverage this great amount of rich interaction data, this transactional data, the ticketing data about every seat they sat in and every time they bought.

Rich information

That’s a cool record and that might be one line in the database. Around that record, we’re now able to wrap all the rich information from the internet. And that customer, that season ticket holder, wants to share information, so they can have a much more personalized experience.

Without Scribe and without integration, we couldn't have done that. With them, we could easily deploy Microsoft CRM and integrate it with the ticketing system, so all this data was in one spot for the users. It was a true win-win-win, because not only did the Oklahoma City Thunder have a much more productive experience, but their season ticket account managers could now call on someone and see their preferences. They could see everything they needed to track about them, and all of their ticketing history, in one place.

And they could see whether they're attending or not attending -- everything about what's going on with that very high-value customer. So that's a win for them. They can deliver personalized service. On the other end of it, you have the customer, the season ticket holder, and they're paying a lot of money. For some of them, it's a lifelong dream to have these tickets, or their family has passed them down. So this is a strong relationship.

Especially in this day and age, people expect a personalized touch and a personalized experience, and with integration, we were able to deliver that. With Scribe and the integration with the ticketing system, all of that sits in Microsoft CRM, where it's real-time, accessible, and insightful.

It's not just data anymore. It's real-time insights coming out of the system. They could deliver a much better user experience or customer experience, and they have been benchmarked against the best customer organizations in the world. The Oklahoma City Thunder are now rated as having the top professional sports fan experience. Of all professional sports, they have the top fan experience -- and it's directly attributable to the CRM platform and the data being driven into it through integration.

Gardner: Great. You can actually see where there is transformational benefit. They’re not just iterative or nice to have. It really changes their business in a major way. Rick Percuoco, any thoughts there at Trillium Software of some examples that exemplify why these approaches are so powerful?

Percuoco: I've seen a couple of pretty interesting use cases. One of them is with one of our technical partnerships. They also have a data platform, where they use a behavioral account-churn model. It's very interesting in that they take multiple feeds of different data, like social media data, call-center data, and data that was entered into a blog from a website. As Jon said, they create a one-customer view of all of those disparate sources of data, including social media, and then they map behavioral churn models for different vertical industries.

In other words, before someone churns their account or gets rid of their account within a particular industry -- like insurance, for example -- what steps do they go through before they churn their account? Do they send an e-mail to someone? Do they call the call center? Do they send social media messages? Then, through statistical analysis, they build these behavioral churn models.

They run transactional data through these models, and when certain accounts fall out at certain points, they match them against the strategic client list and then decide what to do at the different phases of the account-churn model.
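As a rough illustration of the mechanics Rick describes, the toy Python sketch below scores accounts against weighted behavioral warning signs and matches high scorers against a strategic client list. The event names, weights, and threshold are all invented; a real churn model would be fit statistically per vertical industry.

```python
# Toy illustration of a behavioral churn model: score recent account
# events against weighted warning signs, then flag strategic accounts
# that cross a threshold for follow-up. Event names, weights, and the
# threshold are invented for illustration.
CHURN_SIGNALS = {
    "complaint_email": 0.3,
    "call_center_contact": 0.2,
    "negative_social_post": 0.4,
    "rate_page_visit": 0.1,
}

def churn_score(events: list[str]) -> float:
    # Cap the cumulative score at 1.0 so it reads as a risk fraction.
    return min(1.0, sum(CHURN_SIGNALS.get(e, 0.0) for e in events))

strategic_clients = {"ACCT-1001", "ACCT-2002"}

def accounts_to_rescue(account_events: dict[str, list[str]], threshold: float = 0.5):
    # Match high-risk accounts against the strategic client list,
    # as in the account-churn workflow described above.
    for acct, events in account_events.items():
        if acct in strategic_clients and churn_score(events) >= threshold:
            yield acct

feeds = {"ACCT-1001": ["complaint_email", "negative_social_post"],
         "ACCT-3003": ["rate_page_visit"]}
print(list(accounts_to_rescue(feeds)))  # ['ACCT-1001']
```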

I've heard of companies, large companies, saving as much as $100 million in account churn by basically understanding what the clients are doing through these behavioral churn models.

Sentiment analysis

Probably the other most prevalent use case that I've seen with our clients is sentiment analysis. Most people are looking at social media data, seeing what people are saying about them on social media channels, and then using all sorts of creative techniques to try to match those social media personas to client lists within the company, to see who is saying what about them.
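One simple flavor of such a matching technique is fuzzy name similarity, sketched below in Python with made-up names. Real systems would combine many more signals, such as location, email addresses, and social handles.

```python
# Rough sketch of one "creative technique" for matching a social media
# persona to a client list: fuzzy string similarity on names.
from difflib import SequenceMatcher

clients = ["Jonathan Petrucelli", "Elizabeth Bilhorn", "Rick Percuoco"]

def best_client_match(persona_name: str, min_ratio: float = 0.75):
    # Score the persona name against every client; keep the best match
    # only if it clears a minimum similarity threshold.
    scored = ((SequenceMatcher(None, persona_name.lower(), c.lower()).ratio(), c)
              for c in clients)
    ratio, client = max(scored)
    return client if ratio >= min_ratio else None

print(best_client_match("jon petrucelli"))  # Jonathan Petrucelli
```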

Sentiment analysis is probably the biggest use case that I've seen, but the account churn with the behavioral models was very, very interesting, and the platform was very complex. On top, it had a predictive analytics engine with about 80 different modeling graphs, and it also had some data-visualization tools. So it was very easy to create charts and graphs, and it was actually pretty impressive.

Gardner: Betsy, do you have any examples that also illustrate what we're talking about when it comes to innovation and value around data gathering, analytics, and business innovation?

Bilhorn: I'm going to put a little bit of a twist on that problem. We have a recent customer, one of the top LED lighting franchisors in the United States, and they had a bit of a different problem. They have about 150 franchises out there, and they are all disconnected.

So, in the central office, I can't see what my individual franchises are doing and I can't do any kind of forecasting or business reporting to be able to look at the health of all my franchises all over the country. That was the problem.

The second problem was that they had decided to standardize on the NetSuite platform, and they wanted all of their franchises to use it. For the individual franchise owners, NetSuite was obviously a little too heavy, and they said overwhelmingly that they wanted QuickBooks.

This customer came to us and said, “We have a problem here. We can't find anybody to integrate QuickBooks to our central CRM system and we can't report. We’re just completely flying blind here. What can you do for us?”

Via integration, we were able to satisfy that customer requirement. Their franchises can use QuickBooks, which is easy for them, and then, with all of that information synchronized back from the franchises into the central CRM, they were able to do all kinds of analytics, reporting, and dashboarding on the health of the whole business.

The other side benefit, which also makes them very competitive, is that they're able to add franchises very, very quickly. They can have their entire IT systems up and running in 30 minutes, and it's all integrated. So the franchisee is ready to go. They have everything there. They can use a system that's easy for them to use, and this company has them up and is getting their data right away.

Consistency and quality

So that's a little bit different. It's not big data or social, but it's a problem that a lot of businesses face. How do I even get these systems connected so I can run my business? This rapid, repeatable model for this particular business is pretty new. In the past, we've seen a lot of people try to wire things up with custom code, or everything is ad hoc. They're able to stand up full IT systems in 30 minutes, every single time, over and over again, with a high level of consistency and quality.

Gardner: Well, we have to begin to wrap it up, but I wanted to take a gauge of where we are on this. It seems to me that we're just scratching the surface. It's the opening innings, if you will.

Will we start getting these data visualizations down to mobile devices, or have people inputting more information about themselves, their devices, or the internet of things? Let's start with you, Jon. Where are we on the trajectory of where this can go?

Petrucelli: We're working on some projects right now with geolocation, geofencing, and geosensing, where, when a user on a mobile device comes within range of a certain store, it will serve that user up special offers -- if they have downloaded the app on their smartphone and opted in -- to try to pull them into the store. It's the same way in which, if you're walking by a store, somebody might say, "Hey, Jon." They know who I am and know my preferences, and when I come within range, it knows my location.
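As a hedged sketch of the geofencing check behind such an offer, the Python below computes the great-circle (haversine) distance between a phone and a store and triggers an offer inside a radius. The coordinates, the offer text, and the 200-meter radius are invented for illustration.

```python
# Sketch of a geofence check: trigger a personalized offer when an
# opted-in user's device comes within a radius of a store location.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

STORE = (30.2672, -97.7431)  # hypothetical Austin storefront

def maybe_offer(user_lat, user_lon, opted_in: bool, radius_m: float = 200.0):
    # Only users who downloaded the app and opted in get the offer.
    if opted_in and haversine_m(user_lat, user_lon, *STORE) <= radius_m:
        return "Hey, Jon -- here's a personalized offer for the store you're near."
    return None

print(maybe_offer(30.2675, -97.7434, opted_in=True))  # inside the fence
```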

This could be somebody who has an affinity card with a certain retailer, or it could be a sports team whose organization knows, during an event at the venue, what a fan's preferences are, and it puts exactly the right offer in front of the right person, at the right time, in the right context, and with the right personalization.

We see some organizations moving to that level of integration. With all of the available technology, with the electronic wallets, now with Google Glass, and with smart watches, there is a lot of space to go. I don’t know if it's really relevant to this, but there is a lot of space now.

We're more on the business-app side of it, and I don't see that going away. Integration is really the key to driving high levels of adoption, which drives high levels of productivity, which can drive top-line gains and ultimately a better ROI for the company. That's how we really look at integration.

Gardner: Where are we on the trajectory here for using these technologies to advance business?

Percuoco: You mentioned specifically location information, and, as Jon mentioned, it is germane to this discussion. There’s the concept of digital marketing, marketing coupons to people in real-time over their smartphones as they’re walking by businesses, and so forth. That’s definitely one of the very prevalent use cases for location objects.

Shopping patterns

There's also an interesting one that kind of goes on top of that, where you evaluate people's web-traffic shopping patterns using Google location objects. For big-ticket items, you can actually email them competitor coupons in real time -- for example, a mile down the street, another company has the same thing for $100 or $200 less.

It's another interesting use case, a kind of intelligent marketing through digital media in the mobile market. I also see the mobile delivery of information being critical as we move forward.

Pretty much all data-integration and BI professionals are basically working parents. It's very, very important to be able to deliver that information, at least in a dashboard or summary format, on all the mobile devices. You could be at your kid's Little League game or out to dinner with your wife, but you may have to check things.

The delivery of information through the mobile market is critical, although the user experience has to be different. There needs to be a bunch of work in terms of data visualization, the user experience, and what to deliver. But the modern family aspects of life and people working are forcing the mobile market to come up to speed.

The other thing I would say is about integration methods and what Jon was talking about: you do have to watch out for custom APIs. Trillium has a connectivity business, as does Scribe.

As long as you stick with industry-standard handshaking methods, like XML or JSON or web services and RESTful APIs, then usually you can integrate packages fairly smoothly. You really need to make sure that you're using industry-standard hand-offs for a lot of the integration methods. You have four or five different ways to do that, but it’s pretty much the same four or five.
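As a minimal illustration of that kind of industry-standard hand-off, the Python below posts a JSON payload to a RESTful endpoint using only the standard library. The URL and payload shape are hypothetical; the point is that any client that speaks HTTP plus JSON can integrate without custom API plumbing.

```python
# Minimal example of an industry-standard hand-off: a RESTful endpoint
# exchanging JSON over HTTP. URL and payload fields are hypothetical.
import json
import urllib.request

payload = json.dumps({"accountId": "ACCT-1001", "status": "active"}).encode("utf-8")
req = urllib.request.Request(
    "https://integration.example.com/api/accounts",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # parse the JSON response body
```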

Those would be my thoughts on the future. I also see cloud computing, platform as a service (PaaS), and software as a service (SaaS) really taking hold of the market. Even Microsoft platform tools like Office 365 and the email systems in CRM are all cloud-based applications now, and to be honest, they're better. The service is better, and there's no on-premises footprint. I really see the market moving toward PaaS and SaaS in the cloud-computing market.

Gardner: What is Scribe Software's vision, and what are the next big challenges that you will be taking your technology to?

Bilhorn: Ideally, what I would like to see, and what I'm hoping for, is that with mobile and the consumerization of IT, you're beginning to see business apps act more like consumer apps, having more standard APIs and forcing better plug and play. This would be great for business. What we're trying to do, in the absence of that, is create that plug-and-play environment to, as Jon said, make it so easy a child can do it.

Seamless integration

Our vision for the future is really flattening that out, but also being able to provide a seamless integration experience between these disparate systems, where at some point you wouldn't even have to buy middleware as an individual business or a consumer.

The cloud vendors and legacy vendors could embed integration and then have a true plug-and-play environment, so that individual users could do integration on their own. That's where we would really like to get to. That's the vision and where the platform is going for Scribe.

Gardner: Well, great. I’m afraid we’ll have to leave it there. We've been listening to a sponsored BriefingsDirect podcast discussion on how business intelligence and big-data trends are requiring improved access and automation to data flows from a variety of sources.

We've learned of ways that enterprises are effectively harvesting data in all its forms and creating integrations that foster better use of data throughout the entire lifecycle. The result has been the ability to exploit data strategically across more aspects of enterprise businesses and across more types of applications and processes.

So a huge thanks to our guests: Jon Petrucelli, Senior Director of the Hitachi Solutions Dynamics CRM and Marketing Practice. Thanks so much, Jon.

Petrucelli: Thank you, glad to be here.

Gardner: Also Rick Percuoco, Senior Vice President of Research and Development at Trillium Software. Thank you so much, Rick.

Percuoco: You’re welcome, Dana.

Gardner: And Betsy Bilhorn, Vice President of Product Management at Scribe Software. Thank you, Betsy.

Bilhorn: Thank you again, Dana.

Gardner: And also a huge thank you to our audience for joining this insightful discussion. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back and listen next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Scribe Software.

Transcript of a BriefingsDirect podcast on how creating big-data capabilities has become a top business imperative in dealing with a flood of data from disparate sources. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, December 05, 2013

Service Virtualization Solves Bottlenecks Amid Complex Billing Process for German Telco

Transcript of a BriefingsDirect podcast on how a large telco in Germany has optimized the testing and development procedure with advanced service virtualization tools from HP.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion of IT innovation and applications transformation.

Once again, we're focusing on how software testing improvements and advanced service virtualization solutions are enabling IT leaders to deliver better experiences for businesses and end users alike.

Today, we're here to learn how German telco EWE TEL has solved performance complexity across an extended enterprise billing process by using service virtualization. In doing so, EWE has significantly improved application performance and quality for its end users, while also gaining predictive insights into the behavior of its composite application services.

Here to explain how EWE is leveraging service virtualization technologies and techniques for composite applications, we're joined by Bernd Schindelasch, Leader for Quality Management and Testing at EWE TEL, based in Oldenburg, Germany. Bernd will be presenting on this use case next week at the HP Discover conference in Barcelona. Welcome to BriefingsDirect, Bernd.

Bernd Schindelasch: Hi, Dana. Thank you for having me. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Gardner: First, tell us a little bit about EWE TEL, what it does and what you do there.

Schindelasch: EWE TEL is a telecommunications company. We operate the network for EWE, and we provide a large range of telecommunications services. We invest a lot of money in infrastructure, and we supply the region with high-speed Internet access. EWE TEL was founded in 1996, is a fully owned subsidiary of EWE, and has about 1,400 employees.

Gardner: Your software and IT systems are obviously so important. This is how you interact with your end-users. So these applications must be kept performing.

Schindelasch: Yes, indeed. Our IT systems are very important for fulfilling our customers' needs. We have about 40 applications involved in serving a customer, from the customer self-service application, through the activation component, to the billing system. It's a quite complex infrastructure, and it's all based on our IT systems.

Gardner: What have you done over the past several years to put together a team or a process through which you can make sure that your applications are performing and continue to perform time and time again?

Schindelasch: We have a special situation here. Because the telecommunications business is very specialized, we need very customized IT solutions. Often, the effort to customize standard software is so high that we decided to develop a lot of our applications on our own.

Developed in house

Nearly half of our applications are developed in house -- for example, the customer self-service portal I just mentioned, our customer care system, and our Activation Manager.

We had to find a way to test them. So we created a team to test all those systems we developed on our own. We recruited personnel from the operating departments and added IT staff, and we started to certify them all as testers. We created a whole new team with a common foundation, and that made it very easy for us to agree on roles, tasks, processes, and so on, concerning our tests.

Gardner: Today, we're interested in hearing how you adopted service virtualization as a technology and a process. Tell me about the problem that led you to discover service virtualization as a solution.

Schindelasch: When we created this new team, we faced the problem of testing the systems end to end. When you have 40 applications and have to test an end-to-end process over all of those applications, all the contributing applications have to be available and have to have a certain level of quality to be useful.

What we encountered was that the order interface of another service provider was often unavailable and responses from that system were faulty. So we hadn’t been able to test our processes end to end.

We once tried to do a load test and, because of the bottleneck at that other interface, it failed and we weren't able to test our own systems. That's the reason we needed a solution to bypass this problem with the other interface. That was the initial initiative.

Gardner: I think you’re representative of many more companies that are dealing with extended enterprise applications and services, ones they can’t fully control, can’t access, and can't get to, but they have to continue to be responsible for the quality of the end process. It can be a quite difficult problem to solve.

Why weren’t traditional testing or scripting technologies able to help you in this regard?

Schindelasch: We did try that. We developed diverse simulations based on traditional mockup scripts. These are very useful for developers doing unit testing, but they weren't configurable enough for testers to create the right situations for positive and negative tests.

Additionally, it was a big effort to create these mockups, and sometimes the effort to create the mockup would have been bigger than the real development effort. That was the problem we had.

Complex and costly

Gardner: So any simulations you were approaching were going to be very complex and very costly. It didn't really seem to make sense. So what did you do then?

Schindelasch: We constantly analyzed the market and searched for products that might be able to help us with our problem. In 2012, we found such solutions and finally made a proof of concept (POC) with HP Service Virtualization.

We found that it supported all the protocols we needed, with a rule set to predict the responses. During the POC, we found benefits for both developers and testers. Even our architects found it to be a good solution. So, in the end, we decided to purchase the software this year.

Gardner: Tell us how you’ve implemented HP Service Virtualization and how this pilot project has proceeded.

Schindelasch: We implemented service virtualization in a pilot project, and we virtualized that very order interface we talked about. We had to integrate service virtualization as a proxy between our customer care system and the order system. The actual steps you take vary with the protocols used, but you put it in between the systems and let it work as a proxy. Then, you have the ability to let it learn.

It sits in the middle, between your systems, and records all messages and their responses. Afterward, you can just replay these message responses, or you can improve the rules manually. For example, you can add data tables to configure the system to work with the actual test data you're using for your test cases, to be able to support positive and negative tests.
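The record-and-replay idea can be sketched in a few lines of Python. This is not HP Service Virtualization itself, just a stripped-down illustration of the proxy pattern Bernd describes: in learn mode it passes requests through to the real system and records the pairs; in simulate mode it replays them, or falls back to a configured default rule.

```python
# Stripped-down illustration of a record/replay virtual service (not
# HP's implementation). In "learn" mode the proxy forwards requests to
# the real order system and records each request/response pair; in
# "simulate" mode it replays the recorded response, so tests can run
# even when the real interface is unavailable.
recordings: dict[str, str] = {}

def call_real_order_system(request: str) -> str:
    # Placeholder for the real downstream order interface.
    return f"OK:{request}"

def virtual_service(request: str, mode: str = "learn") -> str:
    if mode == "learn":
        response = call_real_order_system(request)
        recordings[request] = response  # record the pair for later replay
        return response
    # Simulate: replay what was learned, or a configured default rule.
    return recordings.get(request, "SIMULATED-DEFAULT")

virtual_service("create-order:42", mode="learn")            # passes through, records
print(virtual_service("create-order:42", mode="simulate"))  # replays "OK:create-order:42"
print(virtual_service("create-order:99", mode="simulate"))  # falls back to default rule
```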

Gardner: For those folks that aren’t familiar with HP Service Virtualization for composite applications, how has this developed in terms of its speed and its cost? What are some of the attributes of it that appeal to you?

Schindelasch: Our main objective was to find a way to optimize our end-to-end testing, but we gained more benefits by using service virtualization. We've reduced the effort to create simulations by 80 percent, which is a huge amount, and have been able to virtualize services that were still under development.

So we have been able to uncouple the tests of the self-service application from a new technical feasibility check. Therefore, we've been able to test earlier in our processes. That reduced our efforts and costs in development and testing, and it's the basis for further test automation at low testing cost.

In the end, we’ve improved quality. It’s even better for our customers, because we’re able to deliver fast and have a better time to market for new products. 

Future attributes

Gardner: Are there other attributes that you’d like to see in future products, perhaps with network-virtualization attributes? I know that you’ve been doing this with certain middleware, messaging, and workflow technology. What would you like to see next?

Schindelasch: One important thing is that development is shifting to agile more and more. Therefore, the people using the software have changed. So we have to have better integration with development tools.

From a virtualization perspective, there will be new protocols, more complex rules to address every situation you can think of without complicated scripting or anything like that. I think that’s what’s coming in the future.

Gardner: And, Bernd, has the use of HP Service Virtualization allowed you to proceed toward more agile development and to start to benefit from DevOps -- a tighter association and integration between development, deployment, and operations?

Schindelasch: We already put it together with our development. I think it's very crucial for development and testing to cooperate, because there wouldn't be a real benefit to virtualizing a service after development has already mocked it up in an old-fashioned way.

We brought them together. We held training for a lot of developers. They started to see the benefits and started to use service virtualization the way the testers already did.

We’re working together more closely and earlier in the process. What’s coming in the future is that the developers will start to use service virtualization for their continuous integration, because service virtualization has the potential to change the performance model, so you can let your application answer slower or faster.

If you put it into fast mode, then you use it in continuous integration. That’s a really big benefit for the developers, because their continuous integration will be faster and therefore they will be able to deploy faster. So for our development, it’s a real benefit.
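As a toy sketch of that performance-model idea, the Python below lets the same virtual service answer with a recorded latency for realistic tests or near-instantly in fast mode for continuous integration. The timing value is invented for illustration.

```python
# Toy sketch of a virtual service's performance model: the same stub
# can answer with recorded latency for load tests or near-instantly
# ("fast mode") for continuous integration. Timings are invented.
import time

RECORDED_LATENCY_S = 0.8  # hypothetical latency learned from the real system

def virtual_response(request: str, fast_mode: bool = False) -> str:
    if not fast_mode:
        time.sleep(RECORDED_LATENCY_S)  # mimic the real system's timing
    return f"SIMULATED:{request}"

start = time.perf_counter()
virtual_response("check-feasibility", fast_mode=True)
print(f"fast mode took {time.perf_counter() - start:.3f}s")  # near-instant
```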

Gardner: I should think that for an organization like yours, where you’re a services provider, being able to meet your service-level agreements (SLAs) is important to you. This could probably have a very positive impact on that.

Schindelasch: Yes, definitely.

Lessons learned

Gardner: Before we end our discussion, I wonder if you could maybe offer some insights to those who are considering the use of service virtualization with composite applications now that you have been doing it. Are there any lessons learned? Are there any suggestions that you would make for others as they begin to explore new service virtualization in the testing phase?

Schindelasch: One thing I've already mentioned is that it's important for development and testing to work together. To gain maximum benefit from HP Service Virtualization, you have to design your future solutions. Which services do you want to virtualize, which protocols will you use, and where are the best places to intercept? Do you want to replace real systems or virtualize the whole environment? In which ways do you want to use the performance model, and so on?

It’s very important to really understand what your needs are before you start using the tools and just virtualize everything. It’s easy to virtualize, but there is no real benefit if you virtualize a lot of things you didn’t really want. As always, it’s important to think first, design your future solutions, and then start to do it.

Gardner: I am afraid we’ll have to leave it there. We’ve been learning how German telco EWE has solved performance complexity across an extended enterprise billing process using HP Service Virtualization.

We have heard how EWE has significantly improved application performance and quality for its end users, while also gaining predictive insights into the behavior of its composite application services, even back into the development phases. Bernd will be presenting on this use case next week at the HP Discover conference in Barcelona.

I'd like to thank our supporter for the series, HP Software, and remind our audience to carry on the dialogue through the IT Strategy & Performance group on LinkedIn. You can always access this and other episodes in our HP Discover podcast series on iTunes under BriefingsDirect.

And so a big thanks to our guest. We’ve been joined by Bernd Schindelasch, Leader for Quality Management and Testing at EWE TEL based in Oldenburg, Germany. Thank you so much, Bernd.

Schindelasch: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a large telco in Germany has optimized the testing and development procedure with advanced service virtualization tools from HP. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Wednesday, December 04, 2013

Identity and Access Management as a Service Gets Boost with SailPoint's IdentityNow Cloud Service

Transcript of a BriefingsDirect podcast on the need for and innovation in improved identity and access management.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SailPoint Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the changing needs for, and heightened value around, improved identity and access management (IAM). We'll examine how business trends are now forcing organizations to safely allow access to all kinds of applications and myriad resources anytime, anywhere, and from any device.

According to research firm MarketsandMarkets, the demand for IAM is therefore estimated to grow from more than $5 billion this year to over $10 billion in 2018. What's driving the doubling of the market in five years? Well, as with much of the current IT space, it's about cloud, mobile, bring your own device (BYOD), consumerization of IT, and broader security concerns.

But the explosive growth also factors in the move to more pervasive use of identity and access management as a service (IDaaS).

So join us now as we explore how new IDaaS offerings are helping companies far better protect and secure their informational assets. Here to share insights into this future of identity management is Paul Trulove, Vice President of Product Marketing at SailPoint Technologies in Austin, Texas. Welcome, Paul. [Disclosure: SailPoint is a sponsor of BriefingsDirect podcasts.]

Paul Trulove: Thanks, Dana. Glad to be here.

Gardner: The word "control" comes up so often when I talk to people about security and IT management issues, and companies seem to feel that they are losing control, especially with such trends as BYOD. How do companies regain that control? Or do we need to think about this differently -- is it no longer an issue of control?

Trulove: The reality in today's market is that a certain level of control will always be required. But look at the rapid adoption of new corporate enterprise resources -- things like cloud-based applications, or mobile devices from which you can access corporate information anywhere in the world, at any time. The reality is that we have to put a base level of controls in place that allows organizations to protect their most sensitive assets, but you also have to provide ready access to the data, so that organizations can move at the pace the business demands today.

Gardner: The expectations of users have changed. When they can sign up for a software-as-a-service (SaaS) application or access cloud services, they're used to having more freedom. How do we balance that -- allowing them to seize the opportunity and the productivity benefits, while keeping the enterprise's risk as low as possible?

Trulove: Each organization has to find the right balance for its particular business, one that meets internal demands and external regulatory requirements and really meets the expectations of its customer base. While the productivity aspect can't be ignored, taking a blind approach to allowing an individual end user to migrate structured data out of something like SAP or another enterprise resource planning (ERP) system up to a personal Box.com account is something most organizations are just not going to allow.

Each organization has to step back, redefine the different types of policies they're trying to put in place, and then put in the right kinds of controls that mitigate the risk of inappropriate access to critical enterprise resources and data, but also allow the end user a little bit more control and a little bit more freedom to do the things that make them most productive.

Uptake in SaaS

Gardner: We've seen a significant uptake in SaaS, certainly in terms of the number of apps, communications, and email, but it seems as if some of the infrastructure services around IAM are lagging. Is there a maturity issue here, or is it just the natural way markets evolve? Why have the applications moved fast, while we're only now embarking on IDaaS?

Trulove: If you look back over time, we see a common trend in IT, where a lot of the front-end business applications were the first to move to a new paradigm. Things like ERP and service resource management (SRM)-type applications have all migrated fairly quickly.

Over the last decade, we've seen a lot of the sales management applications, like Salesforce and NetSuite, come on in full force. Now, things like Workday and even some of the workforce management applications are becoming very popular. However, the infrastructure has generally lagged for a variety of reasons.

In the IAM space, this is a critical aspect of enterprise security and risk management as it relates to guarding the critical assets of the organization. Security practitioners are going to look at new technology very thoroughly before they begin to move things like IAM out to a new delivery paradigm such as SaaS.

The other thing is that organizations right now are still fundamentally protecting internal applications. So there's less of a need to move your infrastructure out into the cloud until you begin to change the overall delivery paradigm for your internal applications.

What we're seeing in the market, and definitely from a customer perspective, is that as customers implement more and more of their software out in the cloud, that's a good time for them to begin to explore IDaaS.

Look at some of the statistics being thrown around. In some cases, we've seen that 80 percent of new software purchases are being pushed to a SaaS model. Those kinds of companies are much more likely to embrace moving infrastructure to support that large cloud investment with fewer applications to be managed back in the data center.

Gardner: As you mentioned, SaaS has been around for 10 years, but the notion of mobile-first applications has picked up in just the last two or three years. I have to imagine that's another accelerant to looking at IAM differently when you add these devices.

We've talked a little bit about SaaS, with IDaaS coming on as a follow-up. How does the mobile side of things impact this?

Trulove: Mobile plays a huge part in organizations looking at IDaaS, and the reason is that you're moving the device that's interacting with the identity management service outside the bounds of the firewall and the network. Having a point of presence in the cloud gives you a very easy way to deliver all of the content out to devices being operated outside the traditional bounds of the IT organization, which generally reached only the PCs, laptops, and other machines on the network itself.

Moving to IDaaS

Gardner: I'd like to get into what hurdles organizations need to overcome to move in to IDaaS, but let's define this a little better for folks that might not be that familiar with it. How does SailPoint define IDaaS? What are we really talking about?

Trulove: SailPoint looks at IDaaS as a set of capabilities across compliance and governance, access request and provisioning, password management, single sign-on (SSO), and Web access management that allow for an organization to do fundamentally the same types of business processes and activities that they do with an internal IAM systems, but delivered from the cloud.

We also believe that it's critical, when you talk about IDaaS, to talk not only about the cloud applications being managed by that service but, as importantly, about the internal applications behind the firewall that still have to be part of that IAM program.

Gardner: So, this is not just green field. You have to work with what's already in place, and it has to work pretty much right the first time.

Trulove: Yes, it does. We really caution organizations against looking at cloud applications in a siloed manner from all the things that they're traditionally managing in the data center. Bringing up a secondary IAM system to only focus on your cloud apps, while leaving everything that is legacy in place, is a very dangerous situation. You lose visibility, transparency, and that global perspective that most organizations have struggled to get with the current IAM approaches across all of those areas that I talked about.

Gardner: So, we recognize that these large trends are forcing a change: users want their freedom, more mobile devices, and more services from more places, with security as important as ever, if not more so. What is holding organizations back from moving toward IDaaS, given that it can help accommodate this very complex set of requirements?

Trulove: It can. The number one area, and it's really made up of several different things, is the data security, data privacy, and data export concerns. Obviously, the level at which each of those interplay with one another, in terms of creating concern within a particular organization, has a lot to do with where the company is physically located. So, we see a little bit less of the data export concerns with companies here in the US, but it's a much bigger concern for companies in Europe and Asia in particular.

Data security and privacy are the two that are very common and are probably at the top of every IT security professional’s list of reasons why they're not looking at IDaaS.

Gardner: It would seem that just three or four years ago, when we were talking about the advent of cloud services, quite a few people thought that cloud was less secure. But I’ve certainly been mindful of increased and improved security as a result of cloud, particularly when the cloud organization is much more comprehensive in how they view security.

They're able to implement patches with regularity. In fact, many of them have better processes than individual enterprises could ever maintain. So, is that the case here as well? Are we dealing with perceptions? Is there a case to be made for IDaaS being, in fact, a much better solution overall?

IAM as secure

Trulove: Much like organizations have come to recognize the other categories of SaaS as being secure, the same thing is happening within the context of IAM. Even a lot of the cloud storage services, like Box.com, are now signing up large organizations that have significant data security and privacy concerns. But they're able to provide the service in a way where that assurance is in place that they have control over the environment.

And so, I think the same thing will happen with identity. It's one of the areas where SailPoint is very focused on delivering capabilities and assurances to the customers that are looking at IDaaS, so that they feel comfortable putting that kind of information in the cloud and operating the different types of IAM components there, and can get over that fear of the unknown.

Gardner: Before we get into some of the details about how you're approaching this, and what your services can provide, I'm curious about what companies can expect to get when they pursue the full cloud and services panoply of possibilities across apps, data, IT management, and other services. What are some of the business drivers? What do you get if you do this right and make the leap to the services stratum?

Trulove: One of the biggest benefits of moving from a traditional IAM approach to something that is delivered as IDaaS is the rapid time to value. It's also one of the biggest changes that the organization has to be prepared to make, much like they would have as they move from a Siebel- to a Salesforce-type model back in the day.

IAM delivered as a service needs to be much more about configuration, versus that customized solution where you attempt to map the product and technology directly back to existing business processes.

One of the biggest changes from a business perspective is that the business has to be ready to make investments in business process management, and the changes that go along with that, so that they can accommodate the reality of something that's being delivered as a service, versus completely tailoring a solution to every aspect of their business.

The benefit that they get out of that is a much lower total cost of ownership (TCO), especially around the deployment aspects of IDaaS.

Gardner: It's interesting that you mentioned business process and business process management. It seems to me that by elevating to the cloud for a number of services and then having the access and management controls follow that path, you’re able to get a great deal of flexibility and agility in how you define who it is you’re working with, for how long, for when.

It seems to me that you can use policies and create rules that can be extended far beyond your organization’s boundaries, defining workgroups, defining access to assets, and creating and spinning up virtualized companies, and then shutting them down when you need to. So, is there a new level of consideration about a boundaryless organization here as well?

Trulove: There is. One of the things that is going to be very interesting is the opportunity to essentially bring up multiple IDaaS environments for different constituents. As an organization, I may have two or three fundamentally distinct user bases for my IAM services.

Separate systems

I may have an internal population that is made up of employees and contractors who essentially work for the organization and need access to a certain set of systems. So I may bring up a particular environment to manage those employees, with specific policies, workflows, and controls. Then, I may bring up a separate system that allows business partners or individual customers to have access to very different environments, within the context of either cloud or on-prem IT resources.

The advantage is that I can deploy these services uniquely across those populations. I can vary the services that are deployed. Maybe I provide only SSO and basic provisioning services for my external user populations. But for those internal employees, I not only do that, but I add access certifications and segregation-of-duties (SOD) policy management. I need much better controls over my internal accounts, because they really do guard the keys to the kingdom in terms of data and application access.
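To make that concrete, here is a minimal sketch, in Python, of how two IDaaS environments with different service bundles for different populations might be expressed. The names and structure are purely illustrative assumptions, not SailPoint's actual configuration model.

# Illustrative only: a toy model of per-population IDaaS environments.
ENVIRONMENTS = {
    "internal": {
        "populations": ["employees", "contractors"],
        "services": ["sso", "provisioning", "password_management",
                     "access_certification", "sod_policy"],
    },
    "external": {
        "populations": ["partners", "customers"],
        "services": ["sso", "basic_provisioning"],
    },
}

def services_for(population):
    """Return the IDaaS services enabled for a given user population."""
    for env in ENVIRONMENTS.values():
        if population in env["populations"]:
            return env["services"]
    return []

For example, services_for("partners") would return only the slimmed-down external bundle, while employees would get the full set of controls.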

Gardner: We began this conversation talking about balance. It certainly seems to me that that level of agility, and those new types of business benefits, far outweigh some of the issues around risk and security that organizations are bound to have to solve one way or the other. So, it strikes me as a very compelling and interesting set of benefits to pursue.

Let's look now, Paul, at your products. You've delivered the SailPoint IdentityNow suite. You've got a series of capabilities, and there are more to come. As you were defining and building out this set of services, what were some of the major requirements that you had, that you needed to check off before you brought this to market?

Trulove: The number one capability that we really talk to a lot of customers about is an integrated set of IAM services that spans everything from compliance and governance, to access request and provisioning and password management, all the way to access management and SSO.

One of the things that we found as a critical driver for the success of these types of initiatives within organizations is that they don't become siloed, and that as you implement a single service, you get to take advantage of a lot of the work that you've done as you bring on the second, third, or fourth services.

The other big thing is that it needs to be ready immediately. Unlike a traditional IAM solution, where you might have deployment environments to buy and implement, and software to purchase, deploy, and configure, customers really expect IDaaS to be ready for them to start implementing the day that they buy.

It's a quick time-to-value, where the organization deploying it can start immediately. They can get value out of it, not necessarily on day one, but within weeks, as opposed to months. Those things were very critical in deploying the service.

The third thing is that it is ready for enterprise-level requirements. It needs to meet the use cases that a large enterprise would have across those different capabilities but, just as important, the data security, privacy, and export concerns a large enterprise would have about beginning to move infrastructure out to the cloud.

Even as a cloud service, it needs a very secure way to get back into the enterprise and still manage the on-prem resources that aren’t going away anytime soon. On one hand, we would talk to customers about managing things like Google Apps, Salesforce, and Workday. In the same breath, they also talk about still needing to manage the mainframe and the on-premises enterprise ERP system that they have in place.

So, being able to span both of those environments to provide that secure connectivity from the cloud back into the enterprise apps was really a key design consideration for us as we brought this product to market.

Hybrid model

Gardner: It sounds as if it's a hybrid model from the get-go. We hear about public cloud, private cloud, and then hybrid. It sounds as if hybrid is really a starting point and an end point for you right away.

Trulove: It's hybrid only in that it's designed to manage both cloud and on-prem applications. The service itself all runs in the cloud. All of the functionality, the data repositories, all of those things are 100 percent deployed as a service within the cloud. The hybrid nature of it is more around the application that it's designed to manage.

Gardner: You support a hybrid environment, but I see, given what you've just said, that all the stock-in-trade benefits of an as-a-service offering are there: no hardware or software, a move from a CAPEX to an OPEX model, and probably far lower cost over time, all built in.

Trulove: Exactly. The deployment model is very much that classic SaaS, a multitenant application where we basically run a single version of the service across all of the different customers that are utilizing it.

Obviously, we've put a lot of time, energy, and focus on data protection, so that everybody’s data is protected uniquely for their organization. But we get the benefits of that SaaS deployment model, where we can push a single version of the application out for everybody to use when we add a new service or add new capabilities to existing services. We take care of upgrade processes and really give the customers that are subscribing to the services the option of when and how they want to turn new things on.

Gardner: Let's just take a moment and look at the SailPoint IdentityNow suite. Tell me what it consists of, and how this provides a benefit and on-ramp to a better way of doing IT as a service and business as a service.

Trulove: The IdentityNow suite is made up of multiple individual services that can be deployed distinctly from one another, but all leverage a common back-end governance foundation and common data repository.

The first service is SSO, and it very much empowers users to sign on to cloud, mobile, and web applications from a single application platform. It provides central visibility for end users into all the different application environments that they may be interacting with on a daily basis, from a launch-pad type of environment, where I can go to a single dashboard and sign on to any application that I'm authorized to use.

Or I may be using back-end Integrated Windows Authentication, where as soon as I sign into my desktop at work in the morning, I'm automatically signed into all my applications as I used them during the day, and I don’t have to do anything else.
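As a rough illustration of the launch-pad pattern, the sketch below builds the browser redirect for a standard OpenID Connect authorization-code sign-on. The endpoint URL and parameters are assumptions for illustration; SailPoint's actual service may use SAML or other federation protocols under the hood.

from urllib.parse import urlencode
import secrets

# Hypothetical identity-service endpoint, not SailPoint's actual URL.
IDP_AUTHORIZE_URL = "https://idaas.example.com/oauth/authorize"

def build_sso_redirect(client_id, redirect_uri):
    """Build the URL a launch-pad dashboard sends the browser to for sign-on."""
    params = {
        "response_type": "code",             # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "state": secrets.token_urlsafe(16),  # anti-CSRF value checked on return
    }
    return IDP_AUTHORIZE_URL + "?" + urlencode(params)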

The second service is around password management. This enables that end-user self-service capability. When end users need to change their passwords or, more commonly, reset them because they’ve forgotten them over a long weekend, they don’t have to call the help desk.

Strong authentication

They can go through a process of authenticating through challenge questions or other mechanisms and then gain access to reset that password. They can even use strong authentication mechanisms, like one-time password tokens, which are issued to allow the user to get in and then change the password to something that they will use on an ongoing basis.
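A self-service reset flow of that general shape might look like this hedged sketch: verify the challenge answers, then issue a short-lived one-time token before accepting a new password. The hashing and token details are illustrative assumptions, not how SailPoint implements it.

import hashlib, hmac, secrets, time

def verify_answers(stored_hashes, given_answers):
    """Compare hashed challenge answers in constant time."""
    return all(
        hmac.compare_digest(
            stored_hashes[question],
            hashlib.sha256(answer.strip().lower().encode()).hexdigest(),
        )
        for question, answer in given_answers.items()
    )

def issue_reset_token(ttl_seconds=600):
    """Return a one-time reset token and the timestamp at which it expires."""
    return secrets.token_urlsafe(32), time.time() + ttl_seconds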

The third service is around access certifications, and this automates that process of allowing organizations to put in place controls through which managers or other users within the organization are reviewing who has access to what on a regular basis. It's a very business-driven process today, where an application owner or business manager is going to go in, look at the series of accounts and entitlements that a user has, and fundamentally make a decision whether that access is correct at a point in time.

One of the key things that we're providing as part of the access certification service is the ability to automatically revoke those application accounts that are no longer required. So there's a direct tie into the provisioning capabilities: being able to say, Paul doesn’t need access to this particular Active Directory group or this particular capability within the ERP system, and I'm going to revoke it. Then, the system will automatically connect to that application and terminate or disable that account, so the user no longer has access.
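In code, that tie-in might look something like the sketch below, where a reviewer's "revoke" decision drives a deprovisioning call. The connector interface is hypothetical; real connectors would target Active Directory, the ERP system, and so on.

def apply_certification_decisions(decisions, connector):
    """Act on reviewer decisions; revoke access marked as no longer required."""
    for d in decisions:
        if d["decision"] == "revoke":
            # e.g., drop an Active Directory group membership or an
            # ERP entitlement through the application's connector
            connector.remove_entitlement(d["user"], d["entitlement"])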

The final two services are around access request and provisioning, and advanced policy and analytics. On the access request and provisioning side, this is all about streamlining how users get access. It can be the automated birthright provisioning of user accounts when a new employee or contractor joins the organization, reconciling what a user should or should not have when they move to a new role, or terminating access on the back end when a user leaves the organization.

All of those capabilities are provided in an automated provisioning model. Then we have that self-service access request, where a user can come in on an ad-hoc basis and say, "I'm starting a new project on Monday and I need some access to support that. I'm going to go in, search for that access. I'm going to request it." Then, it can go through a flexible approval model before it actually gets provisioned out into the infrastructure.
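A minimal sketch of that joiner/mover/leaver logic, against a hypothetical provisioning API, might look like this; the birthright table is an illustrative stand-in for an organization's actual role model.

# Illustrative birthright access by worker type.
BIRTHRIGHT_ACCESS = {
    "employee":   ["email", "intranet", "hr_portal"],
    "contractor": ["email", "intranet"],
}

def on_lifecycle_event(event, user, provisioner):
    """Handle joiner/mover/leaver events with automated provisioning."""
    if event == "joiner":
        for app in BIRTHRIGHT_ACCESS.get(user["type"], []):
            provisioner.grant(user["id"], app)
    elif event == "mover":
        provisioner.reconcile(user["id"], user["new_role"])
    elif event == "leaver":
        provisioner.revoke_all(user["id"])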

The final service around advanced policy and analytics is a set of deeper capabilities around identifying where risks lie within the organization, where people might have inappropriate access around a segregation of duty violation.

It's putting an extra level of control in place, of a detective nature, in terms of what the actual environment is and which conflicting accounts people already have. More importantly, it's putting preventive controls in place, so that you can attach that check to an access request or provisioning event and determine whether a policy violation would exist before a provisioning action is actually taken.
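A preventive SOD check of that kind reduces, in sketch form, to testing whether a requested entitlement would complete a conflicting pair. The rule format below is an assumption for illustration, not SailPoint's policy model.

# Illustrative conflicting-entitlement pairs.
SOD_RULES = [
    {"create_vendor", "approve_payment"},
    {"submit_expense", "approve_expense"},
]

def violates_sod(current_entitlements, requested):
    """True if granting `requested` would complete a conflicting pair."""
    proposed = set(current_entitlements) | {requested}
    return any(rule <= proposed for rule in SOD_RULES)

Run before the provisioning action, a True result can block the grant or route it for additional approval.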

Gardner: You've delivered quite a bit in this suite's offering this year. Before we hear more about the roadmap and future capabilities, what are your customers finding they gain as a result of moving to IDaaS, as well as from specific services within the suite? What do you get when you do this right?

Trulove: What most customers see as they begin to deploy IDaaS is the ability to get value very quickly. Most of our customers are starting with a single service and using it as a launching pad into a broader deployment over time.

So you could take SSO as a distinct project. We have customers that are implementing that SSO capability to get rapid time to value that is very distinct and very visible to the business and the end users within their organization.

Password management

Once they have that deployed and up and running, they're leveraging that to go back in and add something like password management or access certification or any combination thereof.

We’re not stipulating how a customer starts. We're giving them a lot of flexibility to start with very small, distinct projects, get the system up and running quickly, show demonstrable value to the business, and then continue to build out over time both the breadth of capabilities that they're using and the depth of functionality within each capability.

Gardner: Do you have any instances, Paul, where folks are saying, "We wanted to go mobile, but we were being held back. Now that we've taken the plunge, this has really opened up a whole new way for us to deliver data and applications to different devices," whether it’s in a campus setting or a road-warrior setting? Any thoughts about how this is, in particular, aiding and abetting mobile?

Trulove: Mobile is driving a significant increase in why customers are looking at IDaaS. The main reason is that mobile devices operate outside the corporate network in most cases. If you're on a smartphone on a 3G, 4G, or LTE network, you have to have a very secure way to get back into those enterprise resources to perform particular operations or access certain kinds of data.

One of the benefits that an IDaaS service gives you is a point of presence in the cloud that is very accessible to mobile devices from wherever they are. Then, there is a direct and very secure connection back into those on-prem enterprise resources, as well as out to the other cloud applications that you're managing.

The reality in a lot of cases is that, as organizations adopt BYOD-type policies and the number of mobile devices trying to access corporate data increases significantly, providing an IAM infrastructure delivered from the cloud is a very convenient way to help bring a lot of those mobile devices under control across your compliance, governance, provisioning, and access-request activities.

The other big thing we're seeing in addition to mobile devices is just the adoption of cloud applications. As organizations go out and acquire multiple cloud applications, having a point of presence to manage those in the cloud makes a big difference.

In fact, we've seen several deployment projects of something like Workday actually gated by needing to put the identity infrastructure in place before the business would allow its end users to begin to use that service. So the combination of mobile and cloud adoption is driving a renewed focus on IDaaS.

Gardner: I know you can't actually pre-announce, and I'm not asking you to, but as we consider what you can now do with these capabilities, perhaps you can paint a little bit of a vision for us as to where you think your offerings, and therefore the market and the opportunity for improvement in user organizations, are headed.

Trulove: If you look at the road map that we have for the IdentityNow product, the first three services are available today, and that’s SSO, password management, and access certification. Those are the key services that we're seeing businesses drive into the cloud as early adopters. Behind that, we'll be deploying the access request and provisioning service and the advanced policy and analytic services in the first half of 2014.

Continued maturation

Beyond that, what we're really looking at is continued maturation of the individual services to address a lot of the emerging requirements that we're seeing from customers, not only across the cloud and mobile application environments but, as importantly, as they begin to deploy the cloud services and link back to their on-prem identity and access management infrastructure, as well as the applications that they continue to run and manage from the data center.

Gardner: So, more inclusive, and therefore more powerful, in terms of the agility, when you can consider all the different aspects of what falls under the umbrella of IAM.

Trulove: We're also looking at new and innovative ways to reduce deployment timeframes by building a lot of capabilities that are defined out of the box. These are things like business processes, where there will be a catalog of the best practices that we see a majority of customers implement. That becomes a drop-down for an admin to go in and pick from, as they're configuring the application.

We'll be investing very heavily in areas like that, where we can take the learning as we deploy and build it back in as a set of best-practice defaults, to reduce the time required to set up the application and get it deployed in a particular environment.

Gardner: Well, great. I'm afraid we'll have to leave it there. You've been listening to a sponsored BriefingsDirect podcast discussion on the changing needs for, and heightened value around, improved IAM, and we've seen how expected explosive growth and change are forcing a move to a more pervasive use of identity and access management as a service, or IDaaS.

And, of course, we've learned more about SailPoint Technologies and how it's delivering the means for organizations to safely allow access to all kinds of applications and resources anytime, anywhere, and from any device.

With that, I'd like to thank our guest, Paul Trulove, Vice President of Product Marketing at SailPoint Technologies. Thanks, Paul.

Trulove: Thank you, Dana. I appreciate the time.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. A big thank you also to our audience for joining us, and a reminder to come back and join us again next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SailPoint Technologies.

Transcript of a BriefingsDirect podcast on the need for and innovation in improved identity and access management. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
