Thursday, September 26, 2013

Application Development Efficiencies Drive Agile Payoffs for Healthcare Tech Provider TriZetto

Transcript of a BriefingsDirect podcast on how a major healthcare software provider is using HP tools to move from waterfall to agile.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how IT leaders are improving their services' performance to deliver better experiences and payoffs for businesses and end users alike, and this time we're coming to you directly from the HP Discover 2013 Conference in Las Vegas.

We’re here the week of June 10 to explore some award-winning case studies from leading enterprises. And we’ll see how a series of innovative solutions and IT transformation approaches to better development, test, and deployment of applications is benefiting these companies.

Our next innovation case study interview highlights how TriZetto has been improving and speeding its application development process, bringing better tools to its internal developers, and supporting a lifecycle approach to software.

To learn more about how TriZetto is modernizing its development and deployment capabilities, please join me in welcoming Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Rubina Ansari: Thank you, Dana.

Gardner: We hear a lot about improving software capabilities, and Agile of course is an important part of that. Tell me where you are in terms of moving to Agile processes, and then we’ll get more into how you're enabling that through tools and products.

Ansari: TriZetto currently is going through an evolution. We’re moving from a structured waterfall to a scaled-Agile methodology. As you mentioned, that's one of the innovative ways we're looking at getting our releases out faster, with better quality, and being able to respond to our customers. We realize that Agile, as a methodology, is the way to go when it comes to all three of those things.

We're currently in the midst of evolving how we work. We’re going through a major transformation within our development centers throughout the country.

Gardner: And software is very important to your company. Tell us why, and then a little bit about what TriZetto does.

Ansari: TriZetto is a healthcare software provider. We have software for all areas of healthcare. Our mission is to integrate different healthcare systems so that our customers have seamless information. Over 50 percent of the American insured population goes through our software for claims processing. So we have a big market, and we want to stay there.

Leaner and faster

Our software is very important to us, just as it is to our customers. We're always looking for ways of making sure we’re leaner, faster, and keeping up with our quality in order to keep up with all the healthcare changes that are happening.

Gardner: You've been working with HP Software and Application Lifecycle Management (ALM) products for some time. Tell us a little bit about what you have in place, and then let's learn a bit more about the Agile Manager capabilities that you're pioneering.

Ansari: We've been using HP tools for our testing area, such as the QTP products, Performance Center, and Quality Center. We recently went ahead with ALM 11.5, which has a lot of cross-project abilities. As for agile, we're now using HP Agile Manager.

This has helped us move forward fairly quickly into scaled agile using HP Agile Manager, while integrating with our current HP tools. We wanted to make sure that our tools were integrated and that we didn’t lose that traceability and the effectiveness of having a single vendor to get all our data.

HP Agile Manager is very important to us. It's a software-as-a-service (SaaS) model, and it was very easy for us to implement within our company. There was no concept of installing, and the response that we get from HP has been very fast, as this is the first experience we’ve had with a SaaS deliverable from HP.

They're following agile, so we get releases every three months. Actually, every few weeks, we get enhancements and fixes for defects we may find within their product. It's worked out very well. It's very lightweight, it's web-based SaaS, and it integrates with their current tool suite, which was vital to us.

Gardner: And how large of an organization are you in terms of developers, and how many of them are actively using these products?

Ansari: We have between 500 and 1,000 individuals on development teams throughout the United States. For Agile Manager, the last time we checked, it was approximately 400. We're hoping to get up to 1,000 by the end of this year, so that everyone is using Agile Manager for their agile/scrum teams, their backlogs, and development.

Gardner: Tell us a bit also about how paybacks are manifesting themselves. Do you have any sense of how much faster you're able to develop? What are the paybacks in terms of quality, traceability, and tracking defects? What's the payback from doing this in the way you have?

Working together

Ansari: We’ve seen some, but I think the most is yet to come as we roll this out. One of the things that Agile Manager promotes is collaboration and working together in a scrum team. Because the software is built around agile processes, Agile Manager makes it very easy for us to roll out an agile methodology.

This has helped us collaborate better between testers and developers, and we're finding defects earlier. We’ll have more hard metrics around this as we roll it out further. One of the major reasons we went with HP Agile Manager is that it has very good integration with the development tools we use.

It integrates with several development tools, allowing our testers to see what changes occurred and which piece of code changed for each defect or enhancement the tester is testing. That tight integration with other development tools was a pivotal factor in our decision to go forward with HP Agile Manager.

Gardner: So Rubina, not only are you progressing from waterfall to agile and adopting more up-to-date tools, but you’ve made the leap to SaaS-based delivery for this. If that's working out well, as you’ve said, do you think this will lead to doing more with other SaaS tools, tests, and capabilities, and maybe even looking at cloud platform-as-a-service opportunities?

Ansari: Absolutely. This was our first experience, and it is going very well. Of course, there were some learning curves and some growing pains. Being able to get these changes so quickly, without having to do it ourselves, was a mind-shift for us. We're reaping the benefits from it, obviously, but we did need a few more scheduled conversations, release notes, and documentation about changes from HP.

We're not new to SaaS. We're also looking at offering some of our products in a SaaS model, so we realize what's involved. It was great to be on the receiving end of a SaaS product, knowing that TriZetto is playing in that space as well.

Gardner: Tell us what the future holds. Are you going to be adding any additional lifecycle elements moving on this journey, as you've described it? What's next?

Ansari: There's always so much more to improve. What we’re looking for is how to quickly respond to our customers. That means also integrating HP Service Manager and any other tools that may be part of this software testing lifecycle or part of our ability to release or offer something to our clients.

We'll continue doing this until there is no more room for efficiency gains, but there are always places where we can be even more effective.

Mobile development

Gardner: How about mobile development? Is that something that’s on your radar and that you’ll be doing more of, given that devices are becoming more popular? I imagine that’s true of your customers too?

Ansari: We've talked about it, but it's really not on our roadmap right now. It hasn't been one of our main priorities.

Gardner: I suppose that you're in a good position to be able to move in that direction should you decide to.

Ansari: Absolutely. There's no doubt. The technologies that we’re advancing toward will allow us to easily move into the mobile space once we plan and decide to do that.

Gardner: Well great. I'm afraid we’ll leave it there. We’ve been learning about how TriZetto has been moving to a more agile methodology for its development and using a variety of HP software products for Application Lifecycle Management.

So please join me in thanking our guest. We’ve been here with Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto. Thank you.

Ansari: Thank you. The pleasure was all mine.

Gardner: And I'd also like to thank our audience as well for joining us for this special HP Discover Performance Podcast coming to you from the HP Discover 2013 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how a major healthcare software provider is using HP tools to move from waterfall to agile. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Monday, September 23, 2013

Navicure Gains IT Capacity Optimization and Performance Monitoring Using VMware vCenter Operations Manager

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed advanced virtualization to meet the demands of an ever-growing business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2013 VMworld Conference in San Francisco. We're here the week of August 26 to explore the latest in cloud-computing and virtualization infrastructure developments.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout the series of VMware-sponsored BriefingsDirect discussions.

Our next innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. We're going to hear how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're using dashboards and other analysis to tame IT complexity, and gain better return on their IT investments, please join me in welcoming Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia. Welcome, Donald. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Donald Wilkins: Glad to be here.

Gardner: Tell us a little bit about why your organization is focused on taming complexity. Is this a focus that's a result of cost, or is it taming complexity, or both?

Wilkins: At Navicure, we've been focused on scaling a fast-growing business, and if you build very complex infrastructure, it becomes more difficult to scale. So we're focused on technologies that are simple to implement yet have a lot of headroom for growth across the storage, the infrastructure, and the software we use. We do that in order to scale at the rate we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how it is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 timeframe. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system, and we're growing at a rate of 8,000 to 10,000 physicians a year. It’s healthy growth. We don't want to grow too fast, so as not to water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: As I understand it, you're an Internet-based medical claims clearinghouse. Tell us what that boils down to. What is that you do?

Revenue cycle management

Wilkins: Claim clearinghouses have been around for a couple of decades now. We've evolved from that claim-clearinghouse model to what we refer to as revenue cycle management. We pioneered that term early as we started the company.

We take transactions from physicians and send them to the insurance companies. That’s the clearinghouse model. But on top of that product, we added a lot of value-added services and a lot of analytics around those transactions to help providers generate more revenue: they get paid faster, and they get paid the first time through the system.

It was very costly for transactions to be delayed weeks because of poorly submitted transactions to the insurance company or denials because they coded something wrong.

We try to catch all of that, so that they get paid the first time through. That’s the return on investment (ROI) our customers are looking for when they look at our products: lower accounts-receivable (AR) days and increased revenue at the bottom line.

Gardner: Tell us a little bit about your IT environment. What do you have in your data center? Then, we'll get to how you've been able to better manage it.

Wilkins: The first thing we did at Navicure, when we started the company, was decide that we didn't want to be in the data-center business. We wanted to use a colocation provider that does that work at a much higher level than we could ever do. We wanted to focus on our product and let the colo focus on what they do.

They serve us from an infrastructure standpoint, and then we can focus on our products and build a good product. With that, we adopted, very early on, the grid approach, or the rack approach. This means that we wanted to build a foundational structure that we could keep building on as we grew the business and the transaction volume.

That terminology has changed over the years, and it can be referred to as software-defined infrastructure today, but back then we wanted to build infrastructure with a grid approach, so we could plug in more modules and components to scale out as we scale up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.
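To put that scaling arithmetic in concrete terms: if transaction volume doubles roughly every two years, a grid-style architecture has to add capacity modules on the same exponential curve. Here is a minimal sketch of that projection; the doubling period, starting volume, and per-module capacity are illustrative assumptions, not Navicure's actual figures:

```python
import math

def modules_needed(initial_volume, years, doubling_years=2.0,
                   module_capacity=1_000_000):
    """Project annual transaction volume under exponential growth and
    size a scale-out grid in fixed-capacity modules (figures hypothetical)."""
    projected = initial_volume * 2 ** (years / doubling_years)
    return projected, math.ceil(projected / module_capacity)

# Starting at 1M transactions/year, volume quadruples in four years,
# so a grid built from 1M-transaction modules grows from 1 module to 4.
volume, modules = modules_needed(1_000_000, years=4)
```

The appeal of the grid approach is that meeting that curve means plugging in more modules, not re-architecting.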

Gardner: And how did you begin your path to virtualization, and how did that progress into this more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we really had enough headroom in our infrastructure that it wasn't a big issue. But as we got four years in, we started realizing that we were going to hit a point where we would have to start ramping up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we had to start adding infrastructure. Virtualization came along at such a time that we could add capacity virtually faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, and it allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. And getting better management means further optimization, which further reduces costs. That also, of course, maintains their performance requirements. How did you then focus in on managing and optimizing this over time?

Wilkins: Well, one of the things we tried to look at, when we look at products and services, was to keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching and/or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate. We look at all these different tools to make that happen. This has led us to products like VMware's, as they have also tried to simplify their offerings with their new products.

Gardner: Which products are you using? Maybe you could be more specific about what's working best for you.

Wilkins: For years, we've been doing monitoring with other tools that were network-based monitoring tools. Those drive only so much value. They give us things like up-time alerting and responsiveness, but only after issues happen. We want to evolve to be more proactive in our approach to monitoring.

It’s not so much about how we can fix a problem when there is one. It’s more about keeping the problem from happening in the first place. That's why we've looked at products in that area. Recently, we actually implemented vCenter Operations Manager.

That product gives us a different twist than other SNMP monitoring tools do. It's a history of what's going on, but also a forward-looking analysis of that history and how it will change, based on our historical trends.

New line-up

Gardner: Of course, here at VMworld, we're hearing about vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is of particular interest to you, and have you had a chance to look it over?

Wilkins: I haven’t had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanisms inside the product that drive the most value, making sure we can budget a year in advance for the expanding infrastructure we need to meet demand.

Gardner: What sort of paybacks are there? Do you have any sense of them on a metrics or ROI basis? What have you been able to gain, through virtualization generally, and then through the improved operation of those workloads over time?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it’s hard to pin a number on it. We're growing so fast that we're trying to keep up with demand, and we've been meeting and exceeding that.

Really, the ROI is that our customers aren’t experiencing major troubles with our infrastructure not expanding fast enough. That's our goal, to drive high availability for infrastructure and low downtime, and we can do that with VMware and with their products and service.

Gardner: How about looking to the future, Donald? Do you have any sense of whether things like disaster recovery or mobile support, perhaps even hybrid cloud services, will be something you would be interested in as you grow further?

Wilkins: We're a current customer of Site Recovery Manager. That's a staple in our virtual infrastructure and has been since 2008. We've been using that product for many years. It drives all of the planning and the testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product and services for years, and we couldn’t do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

As a small business, the value of that becomes a little harder to prove from a dollar standpoint. Some of those features like remote working come into play as office space continues to be expensive. It's something we will be looking at to expand our operations, especially as we have more remote employees working. Desktop virtualization is going to be a critical component for that.

Gardner: How about some 20/20 hindsight. If there were other folks that were ramping up on virtualization, or getting to the point where complexity was becoming an issue for them, do you have any thoughts on getting started or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may come to light.

I had a very trusted partner when I started this in 2005-2006. They actually just sat with me and worked with me, with no compensation whatsoever, to help work through virtualization. They made it such an easy value that it just became, "I've got to do this, because there's no way I can sustain this level of operational expense and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner. It has to be a partner that has your best interest in mind, and not so much a new product to sell. It’s going to be somebody that brings a lot to the table, but, at the same time, helps you help yourself and lets you learn these products, so that you can actually implement it and research it on your own to see what value you can bring into the company.

It’s easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, and you realize you have to do this and will do whatever it takes to get it in place, because it will make your life easier.

Gardner: How about specific advice for mid-market organizations, not too large? Is there something about dashboards, a single pane of glass, or the ease of getting a sense, as the head of IT, of all the systems? Is there anything in particular that helps on that visualization basis that you would recommend others consider?

Wilkins: Well, vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

That product, while it allows you to do a lot of research into various problems and services, drilling down from the cluster level into the virtual machine level to find out where your problems and pain points are, actually allows you to more quickly isolate the issue. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute, or network resources.

That's where we're seeing value out of the product, because it allows me to go into budget cycles and say that, looking at our infrastructure and our current growth, we will be out of resources by this time, and we need to add this much, barring additional new products and services we may come up with. We're growing at this pace, and here are the numbers to prove it.
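The budget-cycle argument described here amounts to fitting a trend to historical resource consumption and projecting when it crosses capacity. A simplified sketch of that kind of projection, using a plain least-squares line on invented monthly figures (vCenter Operations Manager's actual analytics are more sophisticated than this):

```python
def months_until_exhausted(usage_history, capacity):
    """Fit a straight line to monthly usage samples and estimate months
    until usage reaches capacity. Numbers and method are illustrative."""
    n = len(usage_history)
    mean_x = (n - 1) / 2
    mean_y = sum(usage_history) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(usage_history))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den  # average growth per month
    if slope <= 0:
        return None  # flat or shrinking usage: no projected exhaustion
    return (capacity - usage_history[-1]) / slope

# Storage growing ~50 GB/month with 200 GB of headroom left: ~4 months out.
months = months_until_exhausted([500, 550, 600, 650, 700], capacity=900)
```

Having that number per resource, for storage, compute, and network alike, is what turns a monitoring dashboard into a budgeting document.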

When you have that information in front of you, you can build a business case around it that educates the CFO and the finance people about what you have to deal with day to day to operate the business.

Gardner: It must feel good to have some sense of being future-proof; no matter what comes down the road, you're going to be prepared for it.

Wilkins: Most definitely.

Gardner: Well, great. We'll have to leave it there. We've been talking about how an organization is gaining better control and optimization over its IT infrastructure, and we've heard how Navicure has been employing a comprehensive monitoring and operational management approach.

So a big thank you to our guest. We've been here with Donald Wilkins, Director of IT at Navicure. Thank you, Donald.

Wilkins: My pleasure. Thank you.

Gardner: And thanks to our audience for joining this special podcast coming to you from the recent 2013 VMworld Conference in San Francisco.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout the series of VMware-sponsored BriefingsDirect discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how claims clearinghouse Navicure has harnessed virtualization to meet the demands of an ever-growing business. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, September 19, 2013

MZI HealthCare Identifies Big Data Patient Productivity Gems Using HP Vertica

Transcript of a BriefingsDirect podcast on how a healthcare services provider has harnessed data analytics to help its users better understand complex trends and outcomes.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we’re focusing on how IT leaders are improving their business performance for better access, use and analysis of their data and information. This time we’re coming to you directly from the recent HP Vertica Big Data Conference in Boston.

Our next innovation case study highlights how a healthcare solutions provider leverages big-data capabilities. We'll see how they've deployed the HP Vertica Analytics Platform to help their customers better understand population healthcare trends and identify how well healthcare processes are working.

To learn more about how high performance and cost-effective big data processing forms a foundational element to improving overall healthcare quality and efficiency, please join me now in welcoming our guest, Greg Gootee, Product Manager at MZI Healthcare, based in Orlando. Welcome, Greg. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Greg Gootee: Hi. Thank you, Dana.

Gardner: Tell me a little bit about how important big data is turning out to be for how healthcare is administered. It seems like there is a lot of change going on in terms of how compensation will take place, and information analysis seems perhaps more important than ever.

Gootee: Absolutely. When you talk about change, change in healthcare is really dramatic, maybe more dramatic than in any other industry ever. Other industries have been able to spread that change over time; in healthcare, it's being rapidly accelerated.

In the past, data had been stored in multiple systems and multiple areas on given patients. It's been difficult for providers and organizations to make informed decisions about that patient and their healthcare. So we see a lot of change in being able to bring that data together and understand it better.

Gardner: Tell us about MZI, what you do, who your customers are, and where you're going to be taking this big data ability in the future.

Gootee: MZI Healthcare has predominantly been working on the payer side. We have a product that's been on the market for over 25 years, helping with benefits administration for payers, independent physician associations (IPAs), and third-party administrators (TPAs).

Our customers have always had a very tough time bringing in data from different sources. A little over two years ago, MZI decided to look at how we could leverage that data to help our customers better understand their risk and their patients, and ultimately change the outcomes for those patients.

Predictive analysis

Gardner: I think that's how the newer regulatory environment is lining up in terms of compensation. This is about outcomes, rather than procedures. Tell us about your requirements for big data in order to start doing more of that predictive analysis.

Gootee: If you think about how data has been stored in the past for patients across their continuum of care, as they went from facility to facility and physician to physician, it's really been spread apart. It's been difficult to understand even how treatments are affecting a patient.

I've talked a lot about my aunt in previous interviews. Last year, she went into a coma, not because the doctors weren't doing the right thing, but because they were unable to understand what the other doctors were doing.

She went to many specialists and took medication from each one to help with her given problem, but there was an interaction between the medications. They didn't even know if she’d come out of the coma.

These things happen every day. Doctors make informed decisions from their experience and the data that they have. So it's critical that they can actually see all the information that's available to them.

When we look at healthcare and how it's changing, for example the Affordable Care Act, one of the main focuses is obviously cost. We all know that healthcare is growing at a rate that's just unsustainable, and while that's the main focus, it's different this time.

We've done that before. During the Clinton Administration, we had the HMO model, and it really made a dramatic difference on cost. It was working, but it didn't give people a choice. There was no basis in outcomes, and the quality of care wasn't there.

This time around, that's probably the major difference. Not only are we trying to reduce cost, but we are trying to increase the care that's given to those patients. That's really vital to making the healthcare system a better system throughout the United States.

Gardner: Given the size of the data and its disparate nature, more and more human data will be brought to bear. What were your technical requirements, and what was the journey you took in finding the right infrastructure?

Gootee: We had a couple of requirements that were critical. When we work with small- and medium-sized organizations (SMBs), they really don't have the funds to put in a large system themselves. So our goal was to do something similar to what Apple has done with the iPhone: take multiple things, put them into one place, and reduce the price point for our customers.

One of the critical things that we wanted to look at was overall price point. That included how we manage those systems and, when we looked at Vertica, one of the things that we found very appealing was that the management of that system is minimal.

High-end analytics

The other critical thing was speed -- being able to deliver high-end analytics at the point of care, instead of two or three months later -- and Vertica really produced. In fact, we did a proof of concept with them, and the speed at which some of those queries ran and returned data to us was almost unbelievable.

You hear things like that throughout the conference, and it holds up no matter what volume you may have. It's very good. Those were some of our requirements, and we were able to put that in the cloud. We run in the Amazon cloud, and we were able to deliver that content to the people who need it, at the right time, at a really low price point.
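The kind of cross-provider insight Gootee describes, pooling prescriptions that no individual doctor could see together, can be sketched in a few lines. The records, field layout, and interaction catalog below are hypothetical, invented for illustration; in a system like MZI's, this aggregation would run as analytics against Vertica rather than in application code.

```python
from collections import defaultdict

def medication_interaction_risks(prescriptions, interactions):
    """Flag patients whose active prescriptions, pooled across all of
    their providers, contain a known drug-drug interaction pair."""
    drugs_by_patient = defaultdict(set)
    for patient_id, _provider, drug in prescriptions:
        drugs_by_patient[patient_id].add(drug)
    flagged = {}
    for patient_id, drugs in drugs_by_patient.items():
        hits = [(a, b) for a, b in interactions if a in drugs and b in drugs]
        if hits:
            flagged[patient_id] = hits
    return flagged

# Hypothetical data: (patient, prescribing provider, drug)
rx = [("p1", "cardiologist", "warfarin"),
      ("p1", "gp", "aspirin"),
      ("p2", "gp", "lisinopril")]
known_interactions = [("warfarin", "aspirin")]
print(medication_interaction_risks(rx, known_interactions))
# {'p1': [('warfarin', 'aspirin')]}
```

Each prescription looks safe on its own; only the pooled view exposes the interaction, which is exactly why aggregating data across the continuum of care matters.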

Gardner: Let me also understand the requirement for concurrency. If you have this hosted on Amazon Web Services, you're opening it up to many different organizations and many different users running queries. Is there an issue with the volume of queries happening simultaneously, with concurrency? Has that been something you've been able to work through?

Gootee: Absolutely. That's another value-add that we get. The ability to expand and scale the Vertica system, along with the scalability that we get with Amazon's services, allows us to deliver that information. No matter what type of queries we're getting, we can expand automatically. We can grow to meet that need, and it really makes a large difference in how we can be competitive in the marketplace.
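The elasticity described here can be reduced to a simple sizing rule. The queries-per-node capacity, floor, and ceiling below are made-up numbers for illustration; a real deployment would drive Vertica cluster changes through Amazon's provisioning APIs rather than a function like this.

```python
import math

def nodes_needed(concurrent_queries, queries_per_node=25,
                 min_nodes=3, max_nodes=64):
    """Choose a cluster size for the current query load, clamped to a
    floor (for availability) and a ceiling (for cost control)."""
    raw = math.ceil(concurrent_queries / queries_per_node)
    return max(min_nodes, min(raw, max_nodes))

print(nodes_needed(10))   # light load: clamped up to the 3-node floor
print(nodes_needed(400))  # heavy load: 16 nodes
```

Scaling back down when queries subside is the same rule in reverse, which is what keeps the price point predictable.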

Gardner: I suppose another dynamic on the economic side is the predictability of your cost: x data volume, x queries. I can predict, perhaps even linearly, what my cost will be. Is that the case for you? I know that in the past, many organizations didn't know what the costs were going to be until they got in, and by then it was too late.

Gootee: If you look at the traditional ways we've delivered software or content before, you always over-buy, because you don't know what demand is going to be. Then, at some point, you don't have enough resources to deliver. Cloud services take some of that unknown away. They let you scale as you need it and scale back if you don't need it.

So it's the flexibility for us. We're not a large company, and what's exciting about this is that these technologies help us do the same thing that the big guys do. It really lets our small company compete in a larger marketplace.

Gardner: Going back to the population health equation and the types of data and information, we heard a presentation this morning and we saw some examples of HP HAVEn, bringing together Hadoop, Autonomy, Vertica, Enterprise Security, and then creating applications on top of that. Is this something that's of interest to you? How important is this ability to get at all the information in all the different formats as you move forward?

Gootee: That's very critical for us. The way we interact in America and around the world has changed a lot. The HAVEn platform provides us with some opportunities to improve on what we have, given healthcare's big security concerns and the issue of data mobility. Getting data anywhere is critical to us, as is better understanding how that data is changing.

We've heard from a lot of companies here that are really driving that user experience. More and more companies are going to be competing on how they deliver things to users in the way that they like. That's critical to us, and that platform really gives us the ability to do that.

Gardner: Well great. I'm afraid we'll have to leave it there. We've been learning how a healthcare solutions provider has been leveraging big-data capabilities, and we've seen how they've deployed the HP Vertica Analytics platform to help customers better understand population healthcare trends, and also to identify how well healthcare processes are working.

So a big thank you to our guest, Greg Gootee, Product Manager at MZI Healthcare. Thanks, Greg.

Gootee: Thank you, Dana.

Gardner: And thanks also to our audience for joining us for this special HP Discover Performance Podcast coming to you directly from the recent HP Vertica Big Data Conference in Boston.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP Sponsored Discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.
Transcript of a BriefingsDirect podcast on how a healthcare services provider has harnessed data analytics to help its users better understand complex trends and outcomes. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in:

Wednesday, September 18, 2013

Synthetic APIs Approach Improves Fragmented Data Acquisition for Thomson Reuters’ Content Sharing Platform

Transcript of a BriefingsDirect podcast on how Kapow Software helps a worldwide data company manage data acquisition in a cost-effective and consistent way.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Kapow Software, a Kofax company.

Dana Gardner: Hello, and welcome to a special BriefingsDirect discussion series on how innovative companies are dodging data complexity through the use of Synthetic APIs.

Gardner
Dana Gardner, Principal Analyst at Interarbor Solutions, is your host throughout this series of Kapow Software-sponsored BriefingsDirect use case discussions.

We'll see how from across many different industries and regions of the globe, inventive companies are able to get the best information delivered to those who can act on it with speed and at massive scale. The next innovator interview examines the improved data use benefits at Thomson Reuters in London.

Here to explain how improved information integration and delivery can be made into business success, we're joined by Pedro Saraiva, product manager for Content Shared Platforms and Rapid Sourcing at Thomson Reuters. Glad to have you with us.

Pedro Saraiva: Thank you very much. Pleased to meet you.

Gardner: Pedro, you first launched Thomson Reuters content-sharing platform over four years ago, I'm told, after joining the company in 1996. And the platform there now enables agile delivery of automated content-acquisition solutions across a range of content areas.

Saraiva: That's right.

Gardner: Tell me what that really means. What are you delivering and to whom?

Saraiva: It's actually very simple. We're a business that requires a lot of information, a lot of data, because our business is information -- intelligence information -- and we need to acquire it in a cost-efficient manner. Part of that requires us to have the best technology. When we started four years ago, one of the most obvious patterns we found was a lot of fragmentation in our content-acquisition processes: where they were based, who was doing them, and, more importantly, what processes they were following or not following.

Saraiva
The opportunity that we immediately saw was to consolidate it all, not just around the central capability, but into an optimal capability, with real experts around it making it work and effectively creating a platform as a service (PaaS) for our internal experts in each content area to perform their tasks just as usual, but faster, better, more reliably, and more consistently.

Fundamentally, we are a platform for web-content acquisition. And that is part of our content-shared platform because it's all part of a bigger picture, where we take content from so many sources and many different kinds of sources, and not just web.

Gardner: So, your customers are essentially other organizations within Thomson Reuters. Is that correct?

Content management

Saraiva: That's right. I don't know the exact percentage, but I would guess that about half of what we do is content management, rather than site technology, per se. And a lot of those content management tasks are highly specialized because that's the only way we're going to add value. We're going to understand the content, where it comes from, what it means, and we are going to present it and structure it in the best possible way for our customers.

So, the needs of our internal groups and internal content teams are huge, very demanding, and very specialized. But they all have certain things in common. We found many of them were using Excel macros or some other technologies to perform their activities.

We tried to capture what was common, in spite of all that diversity, to leverage the best possible value from the technology that we have, but also from our know-how, expertise, and best practices around how to source content and how to comply with the required rules. By producing consistent, high-quality data, we could claim to our customers that they could trust our content, because we know exactly what happened to it from beginning to end.

Gardner: Just for the benefit of our listeners, Thomson Reuters is a large company. Tell us how large, and tell us some numbers around the number of different units within the company that you are providing this data to.

Saraiva: We are a large organization. We have about 50,000 employees worldwide in the majority of countries. For example, our news operations have reporters on the ground throughout the world.

We have all languages represented, both internally and in terms of our customers, and the content that we provide to our customers. We're a truly diverse organization.

We have a huge number of individual groups organized around the types of customers that we serve. Are they global? Are they regional? Are they local? Are they large organizations? Are they small organizations? Are they hedge funds? Are they fund managers? Are they investment banks? Are they analysts? We have a variety of customers that we serve within each of our customer organizations around the world.

And that degree of specialty that I mentioned earlier, at some point, has to take shape. It takes shape in the vast number of different teams we have specializing in one kind of content. It may be, perhaps, just a language, French or Chinese. It may be fundamentals, versus real-time data. We have to have the expertise and the centers of excellence for each of those areas, so that we really understand the content.

Gardner: You had massive redundancy in how people would go about this task of getting information from the web. It probably was costly. When you decided that you wanted to create a platform and have a centralized approach to doing this, what were the decisions that you made around technology? What were some of the hurdles that you had to overcome?

Saraiva:  We were looking for a platform that we would be able to support and manage in a cost-effective manner. We were looking for something that we could trust and rely on. We were looking for something that our users could make sense of and actually be productive with. So, that was relatively simple.

The biggest challenge from the start, in my opinion, was that it's very hard to take a big organization with an inherently fragmented set of operating units and try to change it by introducing a single, central capability. It sounds great on paper, but when you start trying to persuade your users that there's value to them in migrating their current processes, they'll be concerned that the change is not in their interest.

Demonstrating value

And there is a degree of psychology at work, not only in dealing with the reluctance that all businesses have to face, but also in influencing it positively and demonstrating that the value to our end users was far in excess of the threat that they perceived.

Gardner: I've heard someone refer to that as having insanely good products. That's going to change people's behavior. Is that what you've been able to accomplish?

Saraiva: Absolutely. I can think of examples that are truly amazing, in my opinion. One is about the agility that we've gained through the introduction of technology such as this one, and not just the user of that technology, but the optimal use of it. Some time ago, before RSA was used in some departments, we had important customers who had an urgent, desperate need for a piece of information that we happened not to have, for whatever reason. It happens all the time.

We tried to politely explain that it might take us a while, because it would have to go through a development team that traditionally builds C++ components. They were a small team, they were very busy, and they had other priorities. Ultimately, that little request was a small part of everything we were trying to do. For that customer, it was the most important thing.

The conversation to explain why it was going to take so long, and why we were not giving them the importance that they deserved, was a difficult one to have. We wanted to be better than that. Today, you can build a robot quickly, plug it into the architecture that we have, and the customer can very quickly see it appearing almost in real time in their product. That's an amazing change.

Gardner: So, how did the Kapow platform come to your attention? What was the story behind your adoption of this?

Saraiva: We spent some time looking at the technologies available. We spoke with a number of other customers and other people we knew. We did our own research, including a little bit of the shotgun kind of research that you tend to do on the Internet, trying to find what's available. Very quickly, we had a short list of five technologies or so.

All of them promised to be great, but ultimately they had to pass the acid test: evaluation by our technical operations experts. Is this something that we are able to run? And also evaluation against the capabilities we were expecting, which were quite demanding, because we had a variety of users to cater to.

But ultimately, most importantly, we needed the confidence that we could get our job done. If we are going to invest in a given technology, we want to know that it can be used to solve a given kind of problem without too much fuss, complexity, or delay, because if that doesn't happen, you have a problem. You have only partially achieved the promise, and you will forever be chasing alternatives to fill that gap.

Kapow absolutely gives us that kind of confidence. Our developers, who at first had a little bit of skepticism about the ability of a tool to be so amazing, tried it. After the first robot, typically, their reaction was "Wow." They love it, because they know they can do their job. And that's what we all want. We want to be able to do our jobs. Our customers want to use our products to do their jobs. We're all in the same kind of game. We just need to be very, very good at what we do. Kapow gave us that.

Gardner: Approximately how long have you been using Kapow? Do you have any metrics that might give an indication of what benefits are there? Maybe it's reduced number of developer hours or rapid use for creating robots that can get you the information you want. Any sense of the benefits?

Critically important

Saraiva: Perhaps the most interesting examples are those about web sources that were critically important to us and that, until we were able to leverage Kapow, we just couldn't automate sensibly.

It was not even a matter of it taking a long time. We were not able to do it. With Kapow, it was a straightforward process. We just click, follow the process that really mirrors a complex workflow in the flow chart that we designed, and the job is done.

In terms of the rapid development of the solutions, it was at least a reduction from several months to weeks. And this is typical. You have cases where it's much faster. You have cases where it's slower, because there are complex, high-risk automation processes that we need to take some time to test. But the development process is shortened dramatically.

Gardner: We were recently at the Kapow User Summit. We've been hearing about newer versions, the Kapow platform 9.2. Is there anything in particular that you've heard here so far that has piqued your interest? Something you might be able to apply to some of these problems right away?

Saraiva: A lot of what we've been doing and focusing on over the last four years was around a pattern whereby we have data flowing into the company, being processed and transformed. We're adding our value, and it's flowing out to our customers. There is, however, another type of web sourcing and acquisition that we're now beginning to work with, which is more interactive. It's more about the unpredictable, unplanned need for information on demand.

There, interestingly, we have the problem of integrating the button that triggers that fetch for data into the end-user workflows. That was something that was not possible, or not straightforward, with previous versions of Kapow. We would have had to build our own interfaces, our own queues, and our own API to interface with the RoboServer.

Now, with Kapplets, it all looks very straightforward, because we can easily see that we could have an arbitrary, optimized workflow solution or tool for some of our users that embeds a Kapplet, allowing a user to perform research on demand -- perhaps on a customer, perhaps on a company -- for the kind of data that we wouldn't traditionally acquire on a constant, fixed basis.
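On-demand robot invocation of this kind usually comes down to a parameterized HTTP call. The sketch below only builds the request URL; the host name, path layout, and parameter names are invented for illustration and are not Kapow's actual REST contract.

```python
from urllib.parse import urlencode

def robot_request(base_url, project, robot, params):
    """Build the URL for an on-demand robot run; sorting the
    parameters keeps the URL stable for logging and caching."""
    query = urlencode(sorted(params.items()))
    return f"{base_url}/run/{project}/{robot}?{query}"

url = robot_request("https://roboserver.example.com", "research",
                    "company-profile", {"companyName": "Acme Corp"})
print(url)
# https://roboserver.example.com/run/research/company-profile?companyName=Acme+Corp
```

An embedded research button in an end-user tool would fire a request like this when clicked, instead of requiring bespoke queues and interface code.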

Gardner: Looking to the future of deployments, we heard about the possibility of a cloud version of Kapow. How would you prefer to move forward on deployments? It sounds as if the direction of bridging organizational boundaries continues for you, maybe delivering this to mobile devices. Specifically, having a cloud-based set of Kapow platform services would make sense.

Saraiva: Over time, things keep changing. Although we currently run a relatively standard, low-scale infrastructure, it's always a cost, an overhead, and an extra worry to have to configure networks.

Security

And you have to worry about security. You have to ensure that things are being monitored and that you respond to alarms and so on. In theory, if we were able to get exactly the same service that we now have internally based in the cloud, we could scale it much more transparently without much planning. That would definitely give us an advantage.

So, right now, I'm beginning to think about that precise question. For the next few years, are we going to have infrastructure hosted at our own premises, or are we going to begin leveraging the cloud properly? Because then we can focus on what we really want, which is to get value out of robots.
 
Gardner: I'm afraid we're about out of time, but quickly, now that you've been doing this for some time, do you have any advice to offer others who are grappling with similar issues: multiple data sources, not being able to use APIs, needing a Synthetic API approach? What lessons have you learned that you might share?

Saraiva: I suppose the most important message I would want to share is about confidence in technology. When I started this, I had worked for years in technology, many of those years in web technology, some complex web technology. And yet, when I started thinking about web content acquisition, I didn't really think it could be done very well.

I thought it was going to be a challenge, which is partly the reason why I was interested in it. And I've been amazed at what is possible with technologies such as Kapow. So, my message would be: don't worry that technology such as Kapow won't be able to do the job for you. Don't assume that you would be better off with your own bespoke C++-based solution. Go for it, because it really works. Go for it and make the most of it, because with so much data, especially on the Internet, you will need it. You have to have that.

Gardner: I’m afraid we’ll have to leave it there. We've been talking about how Thomson Reuters in London has improved information integration and delivery using Kapow technology and a Synthetic APIs approach to gain significant business benefits.

Please join me in thanking our guest, Pedro Saraiva, product manager for Content Shared Platforms and Rapid Sourcing at Thomson Reuters. Thanks for being on BriefingsDirect.

And thanks to our audience for joining this special discussion, coming to you from the recent 2013 Kapow.wow user conference in Redwood Shores, California.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Kapow Software-sponsored BriefingsDirect discussions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Kapow Software, a Kofax company.

Transcript of a BriefingsDirect podcast on how Kapow Software helps a worldwide data company manage data acquisition in a cost-effective and consistent way. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in:

Tuesday, September 17, 2013

When Real-Time is No Longer Good Enough, the Predictive Business Emerges

Transcript of a BriefingsDirect podcast on how a predictive business strategy enables staying competitive, and not just surviving -- but thriving -- in fast-paced and dynamic markets.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Cloud.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion examining a momentous shift in business strategy. Join us as we explore the impact that big data, cloud computing, and mobility are having on how businesses must act and react in their markets.

Gardner
We'll explore how the agility goal of real-time responses is no longer good enough. What’s apparent across more business ecosystems is that businesses must do even better to become so data-driven that they extend their knowledge and ability to react well into the future. In other words, we're now all entering the era of the predictive business.

To learn more about how heightened competition amid a data revolution requires businesses and IT leaders to adjust their thinking to the next, and the next, and the next move on their respective chess boards, join me with our guest for today, Tim Minahan, the Chief Marketing Officer for SAP Cloud. Welcome, Tim. [Disclosure: SAP Cloud is a sponsor of BriefingsDirect podcasts.]

Tim Minahan: Thanks for having me, Dana.

Gardner: It’s hard to believe that the pace of business agility continues to accelerate. Tim, what’s driving this time-crunch? What are some of the changes afoot that require this need for -- and also enabling the capabilities to deliver on -- this notion of predictive business? We're in some sort of a rapid cycle of cause and effect, and it’s rather complicated.

Minahan: This is certainly not your father's business environment. Big is no longer a guarantee of success. If you just look at the past 10 years, 40 percent of the Fortune 500 has been replaced. So the business techniques and principles that worked 10, five, or even three years ago are no longer relevant. In fact, they may be a detriment to your business.

Minahan
Just ask companies like Tower Records, Borders Bookstore, or any of the dozens more goliaths that were unable or unwilling to adapt to this new empowered customer or to adapt new business models that threatened long-held market structures and beliefs.

The world, as you just said, is changing so unbelievably fast that the only constant is change. And to survive, businesses must constantly innovate and adapt. Just think about it. The customer today is now more connected and more empowered and more demanding.

You have one billion people in social networks that are talking about your brand. In fact, I was just reading a recent study that showed Fortune 100 companies were mentioned on social channels like Facebook, Twitter, and LinkedIn a total of 10.5 million times in one month. These comments are really shaping your brand image. They're influencing your customer’s views and buying decisions, and really empowering that next competitor.

But the consumer, as you know, is also mobile. There are more than 15 billion mobile devices, which is scary. There are twice as many smartphones and tablets in use as there are people on the planet. It's changing how we share information, how we shop, and the levels of service that customers expect today.

It’s also created, as you stated, a heck of a lot of data. More data was created in the last 18 months than had been created since the dawn of mankind. That’s a frightening fact, and the amount of data on your company, on your consumer preferences, on buying trends, and on you will double again in the next 18 months.

Changing consumer

The consumer is also changing. We're seeing an emerging middle class of five billion consumers sprouting up in emerging markets around the world. Guess what? They're all unwired and connected in a mobile environment.

What's challenging for your business is that you have a whole new class of millennials entering the workforce. In fact, by next year, nearly half of the workforce will have been born after 1980 -- making me feel old. These workers just grew up with the web. They are constantly mobile.

These are workers that shun traditional business structures of command-and-control. They feel that information should be free. They want to collaborate with each other, with their peers and partners, and even competitors. And this is uncomfortable for many businesses.

For this always on, always changing world, as you said, real time just isn’t enough anymore. Knowing in real time that your manufacturing plant went down and you won’t be able to make the holiday shipping season -- it’s just knowing that far too late. Or knowing that your top customer just defected to your chief competitor in real time is knowing that far too late. Even learning that your new SVP of sales, who looks so great on paper, is an awful fit with your corporate culture or your go-to-market strategy is just knowing that far too late.

But to your point, what disrupts can also be the new advantage. So technology, cloud, social, big data, and mobile are all changing the face of business. The need is to exploit them and not to be disrupted by them.

Gardner: What’s interesting, Tim, is that the tools and best practices that you might use to better understand your external world, your market, your supply chain, and your ecosystem of partners is also being applied internally.

These issues affect how companies operate internally and organizationally -- how they manage their staff and employees, and how those employees work differently -- but at the same time, there's this outward-facing need, too. Another dimension of complexity is that you not only have to change the way you operate vis-à-vis your employees, but your customers, partners, supply chain, and so on as well. How does the predictive business create a whole greater than the sum of the parts, when we think about this total shift, internal and external?

Minahan: Too often, we get enamored with the technology side of the story, but the biggest change that's going to occur in business is going to be the culture change. There's the need to adapt to this new millennial workforce and this new empowered customer, and the need to reach this new emerging middle class around the world.

In today’s fast-paced business world, companies really need to be able to predict the future with confidence, assess the right response, and then have the agility organizationally and systems-wise to quickly adapt their business processes to capitalize on these market dynamics and stay ahead of the competition.

They need to be able to harness the insights of disruptive technologies of our day, technologies like social, business networks, mobility, and cloud to become this predictive business.

Not enough

I want to be clear here that the predictive business isn't just about advanced analytics. It’s not just about big data. That’s certainly a part of it, but just knowing something is going to happen, just knowing about a market opportunity or a pending risk just isn’t enough.

You have to have that capacity and insight to assess a myriad of scenarios to detect the right course of action, and then have the agility in your business processes, your organizational structures, and your systems to be able to adapt to capitalize on these changes.

Gardner: Tim, you and I have been talking for several years now about the impact of cloud. We were also trying to be predictive ourselves and to extrapolate and figure out where this is going. I think it turns out that it’s been even more impactful than we thought.

How about the notion of having a cloud partner, maybe not only your own cloud, but a hybrid or public cloud environment, where the agility comes from working with a new type of delivery mode or even a whole new type of IT model?

Minahan: We've seen the impact of cloud. You've probably heard a lot about it. You've been tracking it for years. The original discussion was all about total cost of ownership (TCO). It was all about the cost benefits of the cloud. While the cloud certainly offers a cost advantage, the real benefit the cloud brings to business is in two flavors -- innovation and agility.

You're seeing rapid innovation cycles, albeit incremental innovation updates, several times per year that are much more digestible for a company. They can see something coming, be able to request an innovation update, and have their technology partner several times a year adapt and deliver new functionality that’s immediately available to everyone.

Then there's now the agility at the business level to configure new business processes without costly IT or consulting engagements. With some of the more advanced cloud platforms, they can even create their own process extensions to meet the unique needs of their industry and their business.

Gardner: It’s also, I think, a little bit of return to the past, when we think about the culture and the process. It’s been 20 years plus since people started talking about reengineering their business processes to accommodate technology. Now the technology has accelerated to such a degree that we can start really making progress on productivity and quality vis-à-vis these new approaches.

So without going too far into the past, how is the notion of business transformation being delivered now, when we think about being predictive, using these technology tools, different delivery models, and even different cultures?

Minahan: You're already seeing examples of the predictive business in action across industries today. Leading companies are turning that combination of insight, the big data analytics, and these agile computing models and organizational structures into entirely new business models and competitive advantage.

Strategic marketing

Let's just look at some of the examples before us. Take Cisco, whose strategic marketing organization not only mines historical data on what prompted people to buy, what they bought, and what their profiles were. They marry that with real-time social media mentions to ferret out customers who reveal a high propensity and readiness to buy.

They then arm their sales team by pushing these signals out to the sales force and recommending the right offer that is likely to convert that customer. That had a massive impact. They saw a sales uplift of more than $4 billion by bringing all of those activities together.

It's not just in the high-tech sector. I know we talk about that a lot, but we see it in other industries like healthcare. Mount Sinai Hospital in New York examined historical treatment approaches, survival rates, and lengths of stay to determine the right treatments to optimize care and patient throughput.

It constantly runs and adapts simulations to optimize its patients' first 8-12 hours in the hospital. With improved utilization based on those insights and the ability to adapt how it handles patients, the hospital not only improved patient health and survival rates, but also achieved the financial effect of adding hundreds of new beds without physically adding one.

In fact, if you look at it, the whole medical industry is built on predictive business models using the symptoms of millions of patients to diagnose new patients and to determine the right courses of action.

Closer to home for you, Dana, there's another example of the predictive business. I don't know if you've read Nate Silver's phenomenal book, "The Signal and the Noise," but he talks about going beyond Moneyball, and how the Boston Red Sox used predictive systems that have changed how baseball teams draft rookie players.

The difference between Moneyball and rookies is that rookies don't have a record in the pros. There's no basis from which to determine what their on-base percentage will be or how they will perform. But this predictive model goes beyond standard statistics and looks at the attributes of similar professional players to determine which candidates a team should be recruiting, projecting what their performance might be based on a composite of other players with like attributes.

Their first example of this on the Red Sox was with Dustin Pedroia, who no one wanted to recruit. They said he was too short, too slow, and not the right candidate to play second base. But using this new model, the Red Sox modeled him against previous players and found out some of the best second basemen in the world actually have similar attributes.

So they took him early in the draft. He won the Rookie of the Year title in 2007 and helped the Red Sox win the World Series for only the second time since 1918. He went on to win the MVP the following year, and he's been a top All-Star performer ever since.

So all around us, businesses are beginning to adapt and take advantage of these predictive business models.

Change in thinking

Gardner: It's curious that when you take a data-driven approach, you have to give up some of the older approaches built on intuition, gut instinct, or metrics that used to be important. That really requires you to change your thinking: rather than going with the highest-paid person's opinion when you need to make a decision, it becomes more of a science.

So what do you get, Tim, when you do this correctly? The Red Sox got to win a World Series, and they've fielded a strong club much more consistently than in the past. But what do businesses get when they become more data-driven, when they adjust their culture, take advantage of some of the new tools, and recognize the shift in consumer behavior? How impactful can this be?

Minahan: It can be tremendously impactful. We truly believe that you get a whole new world of business. You get a business model and organizational and systems infrastructure that has the ability to adapt to all the massive transformation and the rapid changes that we discussed earlier. We believe the predictive business will transform every function within the enterprise and across the value chain.

Just think of sales and marketing. Sales and marketing professionals, as we just talked about with Cisco, will now be empowered to engage customers like never before by tapping into social activity, buying activity on business networks, and geolocation insights to identify prospects, develop optimal offers, and engage and influence prospective customers right at the point of purchase.

I think of pushing offers and coupons to the mobile devices of prospective buyers based on their social fingerprint and their actual physical location. Or take service organizations. We talk about this Internet of Things, and we haven't even scratched the surface on it, but they can massively drive customer satisfaction and loyalty to new levels by predicting and proactively resolving potential product or service disruptions even before they happen.

Think about your device being able to send a signal and demonstrate a propensity to break down in the future. It may be possible to send a firmware update to fix it without your even knowing.

That’s the power that we’ve already seen with this type of thing in the supply chain. Procurement, logistics and supply chain teams are now being alerted to potential future risks in their sub-tier supply chains and being guided to alternative suppliers based on optimal resolutions and community-generated ratings and buying patterns of like buyers on a business network. We've talked about that in the past.

We really believe that the future of business is the predictive business. The predictive business is not going to be an option going forward. It's not a luxury. It will be what's required not only to win, but eventually, to survive. Your customers are demanding it, your employees are requiring it, and your livelihood is going to depend on it.

The need to adapt

Gardner: I suppose we're seeing instances where newer companies, upstarts, easily enter the market. Because they're not encumbered by past practices, using some of these newer tools can actually make a tremendous amount of headway for them.

Tesla Motors comes to mind as one reflection of that. Netflix is another that shows how disruptive a company can be in a short time. But we're not just talking about greenfield companies. We're talking about the need for entrenched global enterprises to be able to move quickly to change and adapt to some of these opportunities for transformation.

Any thoughts before we begin to close out, Tim, on how big companies that have so much complexity, so many moving parts, can start to evolve to be predictive and to be ready for some serious competition?

Minahan: Number one is that you can't have the fear of change. You need to set that aside. At the outset of this discussion, we talked about changes all around us, whether it's externally, with the new empowered consumer who is more informed and connected than ever before, or internally with a new millennial workforce that’s eager to look at new organizational structures and processes and collaborate more, not just with other employees but their peers, and even competitors, in new ways.

That's number one, and probably the hardest thing. On top of that, this isn't just a single-technology play. You need to be able to embrace a lot of the new technologies out there. When we look at the attributes of an enabling platform for the predictive business, it really comes down to a few key areas.

You need the convenience and the agility of the cloud to improve IT resources and use basically everything as a service -- apps, infrastructure, and platform. You can dial up the capabilities, processing power, or resources you need, and quickly configure and adapt your business processes at the business level without massive IT or consulting engagements. Then, you have the agility to use some of these new-age cloud platforms to create your own differentiated business processes and applications.

The second thing is that it's critically important to gather new insights and productivity not just from social networks but from business networks. These are rich new data sources, ranging from real-time market and customer sentiment gathered through social listening and analytics to the countless bits and histories of transactional and relationship data available on robust business networks.

Then, you have to manage all of this, and you also need to advance your analytical capabilities. You need the power and speed of big data and in-memory analytics platforms, exploiting new architectures like Hadoop to aggregate, correlate, and assess the countless bits of information available today, a volume that is doubling every 18 months.

You have to assess multiple scenarios and determine the best course of action faster than ever before. Then, ultimately, one of the major transformational shifts, which is also a big opportunity, is that you need to be able to assess and deliver all of this information with ease to mobile devices.

This is true whether it's your employees who can engage in a process and get insights where they are in the field or whether it's your customer you need to reach, either across the street or halfway around the globe. So the whole here is greater than the sum of the parts. Big data alone is not enough. Cloud alone is not enough. You need all of these enabling technologies working together and leveraging each other. The next-generation business architecture must marry all of these capabilities to really drive this predictive business.

Next generation

Gardner: So clearly at SAP Cloud you've been giving this a lot of thought. I think you appreciate the large dimension of this, but also the daunting complexity faced in many companies. I hope in our next discussion, Tim, we can talk a little bit about your ideas on what the next generation of business-services platform and agility capability, the one that gets you into that predictive mode, would look like. Maybe you could just give us a quick sense now of the direction and role that an organization like SAP Cloud would play?

Minahan: SAP, as you know, has had a history of helping business continually innovate and drive this next wave of productivity and unlock new value and advantage for the business. The company is certainly building to be this enabling platform and partner for this next wave of business. It's making the right moves both organically and otherwise to enable the predictive business.

If you think about the foundation we just went through and then marry it up against, where SAP is invested and innovated, it's now the leading cloud provider for businesses. More business professionals are using cloud solutions from SAP than from any other vendor.

It's leapt far ahead in the world of analytics and performance with its next-generation in-memory platform, HANA. It's the leader in mobile business solutions and social business collaboration with Jam, and as we discussed right here on your show, it now owns the world's largest and most global business network with the acquisition of Ariba.

That's more than 1.2 million connected companies transacting over half a trillion dollars' worth of commerce, with a new company joining every two minutes to engage, connect, and get more informed to better collaborate. We're very, very excited about the promise of the predictive business and SAP's ability to deliver and innovate on the platform that enables it.

Gardner: Well, great. I'm afraid we'll have to leave it there, but I do expect we’ll be revisiting this topic of the predictive business for quite some time. You've been listening to a sponsored BriefingsDirect podcast discussion on this momentous shift in business strategy to the agility required to become a predictive business.

And we've heard how the business goal of real-time responses is really no longer good enough. We can begin now to plot a course to a better evidence-based insights capability that will allow companies to proactively shape their business goals and align their resources to gain huge advantages first and foremost in their industry. So with that, please join me in thanking our guest, Tim Minahan, Chief Marketing Officer for SAP Cloud. Thanks so much, Tim.

Minahan: Thanks, Dana, it's been great to be here.

Gardner: I would like to also thank our audience for joining us. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for coming, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Cloud.

Transcript of a BriefingsDirect podcast on how a predictive business strategy enables staying competitive, and not just surviving -- but thriving -- in fast-paced and dynamic markets. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
