
Thursday, February 04, 2010

Part 4 of 4: Real-Time Web Data Services in Action at Deutsche Börse

Transcript of a sponsored BriefingsDirect podcast on an intriguing example of web data services in action, one of a series of presentations on web data services with Kapow Technologies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.


Dana Gardner: Hello and welcome to a special BriefingsDirect dual webinar and podcast presentation, "Real-Time Web Data Services in Action at Deutsche Börse." I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

As the culmination of a four-part series on web data services (WDS), we're here to examine a fascinating use-case for data services with Deutsche Börse Group in Frankfurt, Germany. An innovative information service recently created there highlights how real-time content and data assembled from various online sources scattered across the Web provides a valuable analysis service.

The offering supports energy traders seeking to track global fluctuations and micro trends in oil and other related markets. But, the need for real-time and precise data affects more than energy traders and financial professionals. More than ever, all sorts of businesses need to know what's going on in and what's being said about their respective markets, products, and services.

In this series with Kapow Technologies, we've examined the need for WDS and ways that WDS and related tools can be used broadly to solve these problems. Now, we're going to learn the full story of how Deutsche Börse took web data resources and efficiently assembled knowledge using automated robots, cleansing tools, and analytics management, and then built those capabilities into a high-value, focused WDS offering in its own right.

Thanks for joining us, as we take an in-depth look at how the market for WDS has shaped up, quickly recap the major findings from our series so far, and then hear directly from the leader of the Deutsche Börse project, as well as from a key supplier that supported them in accomplishing their web services goal.

Access the full series of podcasts on web data services:
So to learn more about WDS as a business, please join me in welcoming our first guest, Mario Schultz, director of Energy Facts at Deutsche Börse Group.

Mario Schultz: Hi. I'm happy to be here and looking forward to the session today.

Gardner: Stefan Andreasen is also with us. He is the CTO at Kapow Technologies in Palo Alto, California. Welcome back, Stefan.

Stefan Andreasen: Thank you Dana. It's a pleasure to be here.

Gardner: First, let me try to set the stage for how WDS becomes the grist for new analysis mills. We've been through quite a transition in the past 10 or 15 years. We have moved quickly as a result of the Web. We started not too long ago with very proprietary content, often bound in books and distributed by trucks, and it was perhaps six months or a year outdated, in terms of the facts and figures, by the time it was fully distributed.

Chaotic content

The Web really helped accelerate the time, but was still chaotic in terms of the types of content. It was really loosely coupled information and not very well structured or organized and wasn’t necessarily of a business-critical nature.

We quickly saw, during the late '90s and into the 2000s, that the use of middleware and objects and standards, like SQL, and use of relational databases started to cross over into what became more considered general content, not necessarily data or content, but what people used to do in business processes.

Now, we've moved along through how organizations manage their applications and data together through use of XML, web services, and service oriented architecture (SOA), to the point where we are now, at the level of WDS. We're beginning now to manage that much better and bring automation, low risk, and security to those uses.

It's interesting to me that we've moved beyond a level of static information to dynamic information and yet we still haven’t taken full advantage of everything that’s being developed and created across the Web.

But today’s market turbulence demands that we do that. We have to move into an era where we can take quality data and provide agility into how we can consume and distribute it. We're dealing with more diverse data sources. That means we need to have completeness and we need to be comprehensive, in order to accomplish the business information challenges each business faces.

The need now is for flexible, agile, and mixed sourcing of services and data together. The content is often portable. That means it's ubiquitous across mobile devices and social networks in such a way that real-time analytics becomes extremely important. This cuts across many different verticals, from retail, to trading and finance, healthcare, defense, and government.

The use of data as a business is now coming to the fore. We're beginning to see value, not from just the assimilation of data for use internally, but as more and more businesses are starting to take advantage of the data that they create and have access to. They share that with their partners, create ecosystems of value, and then even perhaps sell outright the information, as well as insights and analysis from that information.

According to Forrester Research, WDS describes the end-to-end analytic information pipelining process, a stream of liquid intelligence that's palatable and consumable. I've also looked at the Wikipedia definition, and it seems to me that we have gone well into the ability to mash up and reuse information. It's really about the technologies around discovery and extraction, moving into consolidation and access, and then external sourcing and distribution.

To me, WDS really means the lifecycle of content use and reuse across the Web, not in a chaotic fashion, but in a managed fashion, with security permissions, access control, and the ability to bring it into play with other analytic applications and business intelligence (BI) processes.

I want to go now to Mario. When you think of WDS, how has this definition really impacted you and your business?

At the beginning

Schultz: I began by working on the exchange of information that we have in our own systems. We were proceeding with our ideas of enhancing our services and designing new products and services. We were then looking into the Web and trying to get more information from the data that we gather from websites -- or somewhere else on the global Web -- and to integrate this with our own company's internal information.

Everything we do focuses on the real-time aspect. Our WDS are always focusing on the real-time aspects of this.

Gardner: Before we get into the fuller Deutsche Börse story, I'd like to revisit our podcast series so far. In our first podcast we talked with Howard Dresner, a real leader and thought developer in BI. He told us quite a bit about the need for bringing more sources, just as Mario pointed out, both internal and external, into an analytic process.

The idea of extended data sources forms strong components of forecast and analytic activities that are now underway, according to Howard, and BI needs to be not constrained or limited by the need for timely and relevant information from any web source. Howard really reinforced the notion for me that the Web has become where structured data was 10 or 15 years ago and is important for enterprises doing analytic activity.

In the second podcast in our series, Forrester's Jim Kobielus talked about the need to know what's going on, and how important it is for organizations to have a sense of what people inside and outside the organization -- across the spectrum of their supply chain, distribution networks, and actual end users -- are doing and saying.

We've really seen an increase in networking, social networks, and social media. There's all this buzz going on about business activities, products, and services, all of which can be extremely valuable. You can think of it as a massive real-time focus group, but only if you can access the information that's relevant. People are willing to tell you what they think, if you're able to scoop it up. And, it was about this ability to scoop up the data and information and inference that Jim Kobielus really honed in on.

He told us a lot about the identity gathering, cleansing, and the ability to then exercise the content in some sort of meaningful way. He also emphasized the need to manage this in terms of marts and warehouses. A lot of infrastructure has been put in place. But, again, the value of the infrastructure is only as good as the value of the actual content that's involved.

In the third part of the series -- we are now in the fourth and last part -- Seth Grimes, another thought leader in web analytics and text analytics, talked about the need to analyze in real-time. He emphasized that structured data is important, but that real-time data is the next big thing, moving us into the era of advanced analytics. We're not just telling what happened before in the pipeline or supply chain, but what's going to happen next. This, I think, bears quite a bit on what Mario is going to discuss.

So, let’s move along now to Deutsche Börse. Mario, I want to hear more about this organization for our listeners in North America. Tell us a little bit about your company, your organization, and what you do.

Several business lines

Schultz: Deutsche Börse is the German stock exchange in Frankfurt, Germany, and we offer all kinds of products and services around on-exchange trading and the adjacent processes. That means we have several business lines at Deutsche Börse.

We have something that's called Xetra, our electronic trading system for cash products. We have Eurex, our derivatives business line, which is well-known worldwide, and where you can trade derivatives on that platform.

We have a subsidiary that’s called Clearstream doing all the custody and clearing services after you have done your trade. And, we have the Market Data & Analytics (MD&A) business line, where I've been working for 10 years. The MD&A business line is responsible for the real-time delivery of information to the world outside.

We have a main system called CEF. It is our backbone IT solution for delivering data in real-time with milliseconds optimization. The data is mainly coming from our internal IT systems, like Xetra and Eurex, and we deliver this data to the outside world.


In addition, we calculate all the relevant indices, like the DAX, the flagship index for the German markets with 30 instruments, and more than 2,000 -- or nearly 3,000 -- indices that are distributed over the well-known data vendors, for example, Bloomberg or Reuters. They are our main distribution networks, where we are delivering all our information.

For several years now, I've been responsible for developing new products and services around information for on-exchange or off-exchange trading. This is why we've invented and developed the Energy Facts service that is part of our discussion today.

Gardner: When you were thinking about the challenges around this opportunity, it strikes me you had many different sources of information you had to bring together. What were the challenges that you encountered as you started to pipeline these information sources together?

Schultz: One-and-a-half years ago, the idea was to develop new products and services where we could transform our know-how and this real-time connection, aggregation, and dissemination of data to other business lines where we were not currently working. This is why we looked into the energy trading sector, mainly focused on the power trading here in Europe.

Energy markets really got liberalized over the last years. It started with the Nordic area, Sweden and Norway. Ten or 12 years ago, they started with liberalizing the energy trading markets, and Germany is the next country that followed this trend. Germany is currently the most important market for energy and power trading in the middle of Europe.

We started to analyze the information needs in this sector, and recognized that it's a fundamentals-driven market. Traders are looking into the fundamental factors that affect the price of the energy or the power that you trade, whether it’s oil or whatever. That’s how we started with power trading.

You have wind and other weather factors. You have temperature. You have the availability of power plants. So, you try to categorize and summarize these factors into what's called the supply side and the demand side of energy trading.

Fundamental data models

By talking to well-known players in the market, we quickly recognized what they were doing on their trading and analytic side, and that we could build up a very powerful fundamental data model. You have to collect all the relevant information to get an overview and an estimate of the price -- in this case, of where power could develop and in which direction.

The main issue and main task in the beginning was to collect the relevant data. Quite quickly, we were able to set up a big list of all the relevant datasets and sources, especially for Germany and some adjacent countries. We came up with something around 70, 80, or even 100 different sources on the Web to grab information from. So, the main issue was how to collect and grab all this data in a manageable way into one database. That was the first step.

In the second step, Kapow came into play. We recognized that it's really important to have a one-stop-shopping inbound channel that collects all the information from these sources, so that you don't have to have several IT systems, or your own programs, JavaScript, or whatever to get the information.

I wanted to have a responsible product manager for this project, for this new product. From the beginning, I had to have a good technology in place that would be able to handle all these kinds of sources from the Web.

Gardner: Let me go to Stefan now at Kapow. When you heard about Deutsche Börse and some of these issues that they were facing and the challenges that they were trying to solve, what came to your mind in terms of how Kapow might apply?

Andreasen: It came to mind that, if these data sources exist somewhere on the Web, we can actually grab them where they are. What you traditionally do with information gathering is that you call every company or every entity that has data and ask them, "Will you please provide the data in this or this format?" But, with Kapow Web Data Services, you can just grab the data, wherever it is on the Web, and assemble this valuable data source much easier and much faster.

Gardner: Let’s go back to Mario. Tell us, as you progressed through the solution, what was the experience?

Schultz: Just to go back one step. We recognized that there are so many different data formats that we had to grab. There are all these different providers of information in Germany and other European countries. They have their own websites. Some give the data in HTML format. Others use XLS, CSV, or even PDFs.

Kapow gave us a manageable way to get this information from these quite different sources and formats, with a process-driven, graphical user interface (GUI) driven tool that reduces the personnel and manpower effort needed to collect and grab the data.

At our starting point, one-and-a-half years ago, a lot was underway here in Germany and the other European countries, with the Copenhagen Conference, the carbon-emission discussion, and liberalization. There were discussions about whether the big players with the transmission nets and power plants had to split up these things. So there really are a lot of changes. It's not as if, once you know a source or website, you can just take it, program the script, and then leave it alone. We have to check it constantly, because they keep changing the structure.

Recognize change

New companies are founded, and some transmission lines change hands. So, other companies are building up new websites. A lot is underway. With 70, 80, or even 100 sources, you always have to recognize change and then check whether you have to rework your robots.
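The constant checking Mario describes, detecting when a source website's structure has changed so a robot can be reworked, can be sketched as a structural fingerprint comparison. This is an illustrative Python sketch, not Kapow's actual mechanism; the column headers are hypothetical:

```python
import hashlib

def table_fingerprint(headers):
    """Hash a source table's column headers. If the hash changes
    between runs, the site's structure changed and the robot
    that scrapes it likely needs rework."""
    joined = "|".join(h.strip().lower() for h in headers)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

# Fingerprint recorded when the robot was first built (hypothetical headers).
known = table_fingerprint(["Plant", "Fuel", "Available MW"])

def structure_changed(current_headers):
    """True if the source no longer matches the recorded structure."""
    return table_fingerprint(current_headers) != known
```

A scheduler could run this check on every scrape and alert an operator only when the fingerprint differs, rather than requiring someone to eyeball 100 sources by hand.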

I started to work with an internal solution that I thought could handle all that. After a few weeks of developing and discussing, we recognized that our internal solution was not appropriate and not capable of doing all of that. We quickly came across Kapow and evaluated its capabilities. We decided, nearly from the beginning, just a few weeks into the project, that we had to use the Kapow tool to collect all this data from the websites.

Gardner: As I understand, you were involved with programming some robots and setting them up, and then you were able to adjust them dynamically to whatever the needs were of the analysis intent.

Schultz: The main focus in the beginning was to handle all these different formats -- even, for example, to go into a PDF and describe the relevant data that we want to grab, not as text, but as a figure that we needed for our further processing.

There are even some interesting JavaScript or Java-based websites where you have to click on a switch and then, with a right-click of the mouse, get the dataset. We were able to do all these kinds of things with the Kapow tool, and the robots within Kapow grab this kind of data automatically.

Gardner: What have been some of the results? What business-development activities have you had? What's been the value add?

Schultz: The value add was to grab all this data into one common data format, one database, so we would be able to deliver this data to the vendors via web tool, web terminal, or even our existing CEF data feeds. A lot of the players in the market are trying to collect this data by themselves, or even manually, to get an overview of where the power price would develop over the next day, hours, weeks, months, whatever.


There are other providers in the market focusing on real-time delivery of data. In the general on-exchange or off-exchange business, we're talking millisecond optimization. That's not the timing we have here, but it means going from a once-a-day PDF analyst commentary via email in the morning, to a real-time terminal, or even to a Bloomberg or Reuters screen where you get our Energy Facts data as an on-time, real-time information set for trading.

Gardner: I'm really intrigued by your ability to manage so many different sources in real time and, as you say, coming from all different sources, interfaces, and application formats. Can you give us a little demonstration and show us the application in action?

Schultz: Okay. You should see on your screen our Energy Facts web terminal. This is one of our delivery possibilities to bring this data in real-time to the end users.

In the first phase, we're focusing on the German market, plus Belgium, France, and the Netherlands. We decided to start with four European countries. I don't want to go through all the pages. I've just picked two or maybe three of them to give a view of what's going on in this Energy Facts terminal.

Not only websites

Currently, we have 70 or 80 sources that we're grabbing. It's not only websites, but we have some third-party providers that are delivering information, for example, weather, temperature, and things like that. We have providers giving data via FTP service, and we even use Kapow for grabbing data from these third-party players. As I said, it's a one-stop shopping solution to get everything via one channel.

For example, an interesting thing in the energy trading space is availability. When a company is looking into the future, they want to know the availability of different power plants. You can see on the right hand side there will be a summary of nuclear power, for example, and lignite, hard coal, and water.

There are various sources in Germany giving all this information in different formats. We grab everything into one database, do quality checks, and then compile the information to the front-end that you can see down there with a graphical presentation. We have a table with all the figures and we even do some kind of analytic enrichments, so we have a deviation from what has been published the day before.

You can see, for example, that we have some changes in the hard coal availability for the next 30 days. We're taking those sources, collecting the information, doing quality checks and quality assurance, aggregating everything into one database, one data format, and then presenting it on the screen.
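The analytic enrichment Mario mentions, a deviation against the figures published the day before, amounts to a per-category difference over the aggregated availability data. A minimal Python sketch, with hypothetical fuel categories and MW figures:

```python
def availability_deviation(today, yesterday):
    """Per-fuel deviation (in MW) between today's published plant
    availability and yesterday's, as displayed alongside the
    availability table. Fuels missing yesterday count from zero."""
    return {fuel: today[fuel] - yesterday.get(fuel, 0.0)
            for fuel in today}

# Hypothetical figures: hard coal availability dropped 500 MW overnight.
change = availability_deviation(
    {"hard coal": 9500.0, "lignite": 12000.0},
    {"hard coal": 10000.0, "lignite": 12000.0},
)
```

The real service layers quality checks on top of this, but the enrichment itself is just arithmetic once all sources land in one consistent database format.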

Gardner: If I'm a consumer of this, if I am trader that subscribes to your service, and I encountered some other form of information that I wanted to bring in the mix, do I have the option of approaching you and asking for you to bring that in, or is that out of the question?

Schultz: No. This is just our starting point. As I said, this is something where we tried to create a complete new business in the energy sector. We started with these four countries and datasets and we will enhance it to other countries. If you ask us to add other kinds of data, we can integrate it quite quickly into our service. No problem.

Just two other examples. One, for example, is something that in Germany is called Urgent Market Messages. In Germany, we have four big power plant providers or transmission-system operators. The power plant providers push out, in real time, as fast as possible, Urgent Market Messages, when a power plant has to go into maintenance mode or has an accident and they have to repair something.

We grab all these different kinds of sources from all those power plant providers and then aggregate all the Urgent Market Messages in the table that you can see down there. If you go to other pages on our screen, you can see them on the left-hand side, where you always get the latest Urgent Market Messages. If, for example, a nuclear power plant goes off the grid unexpectedly, this could dramatically change the power price on the market. This is another example of collecting data, for Urgent Market Messages.

I don’t want to stretch this too much, but the last point is cross border. Germany is somewhat in the middle of all this trading in Europe, so we have a lot of connection points to the other countries. We have Denmark, Sweden, Poland, France, Belgium, and Luxembourg. So, we have so many grid lines going over the border to the other countries.

You always have to collect the data from these different transmission lines to the other countries, because they are auctioning the capacity to transport power, for example, from France to Germany, or the other way around. You have to get all this information to have a better understanding of pricing.

Power allocation

For example, in this case, it's Germany to France at one connection point. Down there, you can see how much power has been allocated for a specific hour of the day. The red line is the price for the transportation in this case. In addition, you could show the price difference, for example, between Paris and Leipzig, the two exchanges for energy. Everything is collected and then put into one view, where you show the interesting figures on one screen.

Gardner: Suffice it to say that there is an awful lot going on behind this little red line. It's not that easy to put this together. This is reflecting an awful lot of information and processing.

Schultz: This slide is one of our pages, used for one provider, for Germany to France. Now, I'll go to this button and show you the other ones, like the connection back to Germany, then the Germany-Netherlands connection.

These are the four countries we're currently covering, and you can see all the connection points for them. Later on, we'll go on with Denmark and the others. This is really the power of having all this data in one tool, where the aggregation, quality checks, and everything come into play.

Gardner: Mario, I have to imagine that there are external forces that can come to bear on this, perhaps a massive snowstorm or some other disruption in the price of a major commodity, and that’s something that you can bring into this picture almost immediately, right?

Schultz: Yes. For example, in what I just showed, if we go to this weather page, you see temperature. This is very interesting. Generally, in Germany, as you see on the yellow curve, we have between 1 and 3 degrees Celsius as the typical temperature in winter. The forecast temperature is around -5. Some time ago, it was even -7 degrees. That's really significant. It's normally an indication of higher power prices, because people will demand more power to heat their buildings and offices. So, this has really changed. This weather data is updated every six hours within our service.

Gardner: If these traders also wanted to try to find out why they were seeing certain effects in these analytic graphs, is there a way for them to then quickly go out and look at the news feeds or other information, so that they could determine what’s behind the curves?

Schultz: Currently, it's not part of our service, and we didn’t do that because there are other providers for this information. Generally, you have the on-exchange and off-exchange prices that are normally available from the existing data vendors. For example, they use Bloomberg or other service providers. Energy Facts focuses on the fundamental data collected in real-time, and aggregated into one service, the place where we saw it as the missing piece in Europe. If you want to go to the news site, traders have other providers for the news on their desks.

Gardner: I see. So, this is really focused on numeric, algorithmic, programmable types of information and data.

Schultz: This is what we call the fundamental data sets, what is fundamentally behind driving the power price -- the demand and supply side factors behind the price. The analysts or traders can get this information in real-time in one service to do better estimation of the pricing elements.

Gardner: That’s really impressive, I appreciate your walking us through it. I wonder if we can go back now to Stefan and talk a little bit about what Kapow and its values and services brought to the table to help support this really impressive application and service.

Impressive service

Andreasen: Sure, Dana. This is an extremely impressive service that Mario just showed us here, and I'm sure, if you're dealing with buying and selling energy, this is a must for you to be sure you made the right decision.

If we go back to what I talked about earlier, businesses are relying more and more on data to make the right decision, and their focus is on quality, completeness, and agility. Let's be more practical here and ask how you actually get this data.

There is a term, data integration, which is about accessing the data and providing it through a standard API, so that you can actually leverage it in your business applications.

Energy Facts is accessing this data from the 70-80 different data sources, as Mario said, and providing it as a feed whose frequency depends on the volatility of the different data sources. Some of the data is delivered every minute, and some every four hours, etc., based on how quickly the data source changes. WDS is all about getting access to this data where it resides.

There are really two different kinds of data sources. One is a real-time data source. Let's say you go to a patent directory, and there are probably millions of patents. In that case, you would use the Kapow Data Server to wrap that data source in a service layer, and then you could query it in real time and get results back immediately. So, that's real-time access, where you have a vast amount of information.

The other scenario, and I think that's more what we see in the Energy Facts example here, is where you have a more limited data source, and you are actually trying to do a consolidation of the data into a database, and then you use that database to serve different customers or different applications.

With Kapow, you can actually go in and access the data, if you can see them on your browser. That's one thing. The other thing you need to do to make this data available to your business application is to transform and enrich the data, so that it actually matches the format that you want.

For example, a website might give the date as "2 hours ago" or "3 minutes ago" and so on. That's really not useful. What you really want is a timestamp with the year, the month, the day, the hour, the minute, and the second, so you can actually start comparing these. So, data cleansing is an extremely important part of data extraction and access.
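The kind of cleansing Stefan describes, turning a relative date like "2 hours ago" into a comparable absolute timestamp, might look like this in Python. This is an illustrative sketch, not Kapow's API; the function name and supported units are assumptions:

```python
import re
from datetime import datetime, timedelta

def normalize_relative(text, now):
    """Convert a scraped relative date such as '2 hours ago' or
    '3 minutes ago' into an absolute timestamp, measured back
    from a reference time `now`."""
    m = re.match(r"(\d+)\s+(minute|hour|day)s?\s+ago", text.strip().lower())
    if not m:
        raise ValueError(f"unrecognized relative date: {text!r}")
    value, unit = int(m.group(1)), m.group(2)
    delta = {"minute": timedelta(minutes=value),
             "hour": timedelta(hours=value),
             "day": timedelta(days=value)}[unit]
    return now - delta
```

Once every source's dates are normalized this way, records scraped from HTML, CSV, and PDF sources can be sorted and compared in one database.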

The last thing, of course, is serving the data in the format you need. That can be a database, if you're doing consolidation, or it can be as an API, if you are doing more of a federated access to data, and leaving the data where it is.

Actually, all styles exist, but there is a tendency for many companies to actually access the data where it is, rather than trying to consolidate it to a new place.

Urgent messages

Schultz: Dana, I have a very good example of this. I talked about Urgent Market Messages, where the power plant providers send out, as quickly as possible after an incident occurs, an Urgent Market Message regarding changes in power plant availability. This is something we can aggregate well using Kapow, because we can schedule all these robots in a very precise way.

Currently, we're checking these Urgent Market Message sources every minute. At all aggregation levels, we can always state whether a message is valid or invalid. I didn't focus on this in my presentation.


If we find a message on the website, we put it in our service. Maybe in the next minute the message disappears from the website. We still have it in our service, but then we flag the message as invalid. The user knows that this message had been on the source website, but has now disappeared. We still have the information, and we can distinguish between the two statuses, valid and invalid Urgent Market Messages.

This is accomplished by accessing the source, enriching it into the database, doing some scheduling, and then giving feedback and checking the website again. By doing these three steps, we're able to offer this part of our Urgent Market Message presentation layer.
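The valid/invalid logic Mario describes, keeping a message once it has been seen but flagging it when it disappears from the source, can be sketched as a per-minute reconciliation step. A hypothetical Python sketch of that idea (the function and status names are illustrative, not from the actual service):

```python
def reconcile(store, scraped_ids):
    """One scheduled pass: every message ID found on the source
    website is (re)marked valid; any previously stored message
    that has disappeared is flagged invalid but kept, so users
    can still see that it once existed."""
    for msg_id in scraped_ids:
        store[msg_id] = "valid"
    for msg_id, status in store.items():
        if msg_id not in scraped_ids and status == "valid":
            store[msg_id] = "invalid"
    return store
```

Running this on each minutely scrape gives exactly the behavior described: nothing is ever deleted, only downgraded from valid to invalid.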

Gardner: Mario, I think you're really a pioneer in this. What intrigues me is how far this can go in addition to what you have done with it, and how this could affect the number of other industries and vertical businesses as well.

From your perspective, Stefan, how are other types of business, enterprises, and service providers likely to start using this and providing WDS-based, value added services as well?

Andreasen: That's a very good question. Kapow Technologies today has more than 400 customers, and for them our technology becomes a business-critical part of what they do. Let me try to explain that. Most information providers sell data to other businesses. In the U.S., for example, there is a big business around background checking, both of people and of companies. If you go into a bank in the U.S. to get a credit card, they're going to run a background check on you before you can get that card.

One of the things they check is a range of resources on the Web -- for example, criminal records. Every courthouse has a website where you can log in and search for the criminal records of a certain person.

Most of these companies that do background checking are Kapow customers, using Kapow's Web Data Services to service-enable all these courthouses. When you apply for that credit card and the background check runs, Kapow automatically goes out and gets the information from these courthouse websites and a lot of other data sources in real time. Otherwise, they would need 50 or 60 people manually typing it in, and they wouldn't get the results until two days later.

Gardner: I suppose another effect also over the past 10 or 15 years, from my timeline earlier in the presentation, is that these web standards have kicked in, not only for looking up information across the Web, but it has also become a standardized way of accessing information internally. What about the use of this for corporate performance management and other aspects of the web data that’s inside of companies?

Available white paper

Andreasen: I encourage everybody to go to our website and download a white paper from one of our customers, Fiserv. It's a large financial-services company in the U.S. Fiserv has a lot of business partners -- more than 300 banks in more than 10 countries. Because they're selling services, it's incredibly important for them to also monitor their customers and understand what's happening.

They had a lot of people who logged into these 300 partner banks every day, grabbed financial information, such as interest rates, into an Excel spreadsheet, put it into a database, and then got it up on a dashboard.

The thing about this is that, first, you have a lot of human labor, which can cause human error. You can only do it once a day, and it's a tedious process. So they brought Kapow in and automated the extraction of this data from all their business partners -- 300 banks in more than 10 countries.

They can now get that data in near real time, so they don't have to wait for it. They don't have to go without data on the weekend because people aren't working. They get business-critical insight into the market and their partners instantly through our product.

I can give you another example. A large car manufacturer is spending almost a billion dollars a year in advertising on television. Of course, there are several parameters that are important for them to understand about how should they spend the advertising money the best possible way.

These data sources include, for example, lead reporting -- understanding what leads are coming in -- and the market data they're getting from business-information providers about trends in the markets. What reporting do they get from their ad campaigns? How many people clicked on an ad or watched these television shows? And how many cars are getting registered, their models versus their competitors'?

By using Kapow, they could hook up to all of these data sources in real time and suddenly get complete insight into how effectively they spend their advertising dollars, yielding a very good return on the investment.

So, it's just another example again about how WDS can help the market analyst, the product manager, and a lot of people who have to make very vital business decisions in the companies out there.

Gardner: Great. I appreciate your input Stefan. Today’s discussion on how the Deutsche Börse Group in Frankfurt, Germany is using Kapow Technologies for a real-time web data analysis service comes as a culmination of a four-part series on WDS.

We have seen how an innovative information service, created rapidly, elegantly demonstrates how real-time content and data assembled from various online sources provide a valuable service and an analysis capability as a business.

What's happening with WDS is that it's gone beyond an internal enterprise focus. It's become a business unto itself. So, there are lots of value opportunities. We can sell new value across business solutions, we can look for ways that internal strategies are enhanced, and we can create ecosystems of partnership.

I think what we're going to see, when cloud computing starts to really take off, rather than just being discussed so much, is the opportunity for companies in partnership to build competitive advantage by sharing data and analytics effectively. It also drives more business strategy and execution and creates new and additional revenue streams as a result.

So, I want to thank Mario at Deutsche Börse for his participation here. I think they're a real poster child for how real-time analytics can be brought together. So, thanks to you, Mario, for joining us.

Schultz: It was a pleasure, Dana. Thank you.

Gardner: And, certainly, I also want to give the opportunity for viewers and listeners to learn more about some of the topics we have discussed from Kapow. There are a lot of different resources available there in order to take some next steps or continue to educate yourselves on some of these issues.

This is Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator. I also want to thank Stefan Andreasen. He is the CTO of Kapow.

Andreasen: Thank you very much, Dana.

Gardner: You've been enjoying a BriefingsDirect presentation. Thanks again for joining us, and come back next time.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast on information management for business intelligence, one of a series on web data services with Kapow Technologies. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Monday, November 09, 2009

Part 3 of 4: Web Data Services--Here's Why Text-Based Content Access and Management Plays Crucial Role in Real-Time BI

Transcript of a sponsored BriefingsDirect podcast on information management for business intelligence, one of a series on web data services with Kapow Technologies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on how text-based content and information from across web properties and activities are growing in importance to businesses. The need to analyze web-based text in real-time is rising to where structured data was in importance just several years ago.

Indeed, for businesses looking to do even more commerce and community building across the Web, text access and analytics form a new mother lode of valuable insights to mine.

In Part 1 of our series on web data services with Kapow Technologies, we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications.

As the recession forces the need to identify and evaluate new revenue sources, businesses need to capture such web data services for their business intelligence (BI) to work better, deeper, and faster.

In Part 2, we dug even deeper into how to make the most of web data services for BI, along with the need to share those web data services inferences quickly and easily.

Now, in this podcast, Part 3 of the series, we discuss how an ecology of providers and a variety of content and data types come together in several use-case scenarios. We look specifically at how near real-time text analytics fills out a framework of web data services that can form a whole greater than the sum of the parts, and this brings about a whole new generation of BI benefits and payoffs.

Here to help explain the benefits of text analytics and their context in web data services is Seth Grimes, principal consultant at Alta Plana Corp. Thanks for joining, Seth.

Seth Grimes: Thank you, Dana.

Gardner: We're also joined by Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. Welcome, Stefan.

Stefan Andreasen: Thank you, Dana.

Gardner: We have heard about text analytics for some time, but for many people it's been a bit complex and unwieldy, and difficult to manage in terms of volume -- getting to a level of "noise-free," text-based analytics. Something you can actually work with is emerging, and it has now become quite important.

Let's go to you first, Seth. Tell us about this concept of noise free. What do we need to do to make text that's coming across the Web in sort of a fire hose something we can actually work with?

Difficult concept

Grimes: Dana, noise free is an interesting concept and a difficult concept, when you're dealing with text, because text is just a form of human communication. Whether it's written materials or spoken materials that have been transcribed into text, human communications are incredibly chaotic.

We have all kinds of irregularities in the way that we speak -- grammar, spelling, syntax. Even putting those aside, we have slang, sarcasm, abbreviations, and misspellings. Human communications are chaotic and they are full of "noise." So really getting to something that's noise-free is very ambitious.

I'm going to tell you straightforwardly that it's not possible with text analytics, if you're dealing with anything resembling the normal kinds of communications that you have with people. That's not to say that you can't aspire to a very high level of accuracy and to getting the most out of the textual information that's available to you in your enterprise.

It's become an imperative to try to deal with the great volume of text -- the fire hose, as you said -- of information that's coming out. And, it's coming out in many, many different languages, not just in English, but in other languages. It's coming out 24 hours a day, 7 days a week -- not only when your business analysts are working during your business day. People are posting stuff on the web at all hours. They are sending email at all hours.

Then, the volume of information that's coming out is huge. There are hundreds of millions of people worldwide who are on the Internet, using email, and so on. There are probably even more people who are using cell phones, text messaging, and other forms of communication.

If you want to keep up, if you want to do what business analysts have been referring to as a 360-degree analysis of information, you've got to have automated technologies to do it. You simply can't cope with the flood of information without them.

That's an experience we went through in recent decades with transactional information from businesses. In order to apply BI, or to get BI out of that data, you have to apply automated methods with specialized software.

Fortunately, the software is now up to the job in the text analytics world. It's up to the job of making sense of the huge flood of information from all kinds of diverse sources, high volume, 24 hours a day. We're in a good place nowadays to try to make something of it with these technologies.

Gardner: Of course, we're seeing the mainstream media start behaving more like bloggers and social-media producers. We're starting to see that when events happen around the world, the first credible information about them isn't necessarily from news organizations, but from witnesses. They might be texting. They might be using Twitter. It seems that if you want real-time information about what's going on, you need to be able to access those sorts of channels.

Text analytics

Grimes: That's a great point Dana, and it helps introduce the idea of the many different use-cases for text analytics. This is not only on the Web, but within the enterprise as well, and crossing the boundary between the Web and the inside of the enterprise.

Those use-cases can be the early warning of a Swine flu epidemic or other medical issues. You can be sure that there is text analytics going on with Twitter and other instant messaging streams and forums to try to detect what's going on.

You even have Google applying this kind of technology to look at the pattern of the searches that people are putting in. If people are searching on a particular medical issue centered in a particular geographic location, that's a good indicator that there's something unusual going on there.

It's not just medical cases. You also have brand and reputation management. If someone has started posting something very negative about your company or your products, then you want to detect that really quickly. You want early warning, so that you can react to it really quickly.

We have a great use case in the intelligence world. That's one of the earliest adopters of text analytics technology. The idea is that if you're going to prevent a terrorist attack, you need to detect and respond really quickly to the signals out there that something is pending, and you need a high degree of certainty that you're looking at the right thing and that you're going to react appropriately.

We have some great challenges out there, but, as I said, we have some great technologies to respond to those challenges in a whole variety of business, government, and other applications.

Gardner: Stefan, I think very few people would argue with the fact that there is great information out there on the Web, across these different new channels that have become so prominent, but making it something you can use is a far different proposition. Seth has been telling us about automated tools. Tell us what you see in terms of web data services and how we can make this information available to automated systems.

Deep data

Andreasen: Thank you, Dana. Let's just look at something like Google. You go there and do a search, and you think that you're searching the entire Internet. But you're not, because you're probably not going to access data that's hidden behind logins, behind search forms, and so on.

There is a huge amount of what I call "deep web," very valuable information that you have to get to in some other way. That's where we come in and allow you to build robots that can go to the deep web and extract information.

I'd also like to talk a little bit more about the noise-free thing and go to the Google example. Let's say you go to Google and you search for "IBM software." You think that you will be getting an article that has something to do with IBM software.

You often actually find an article that has nothing to do with IBM software. Because there are some advertisements from IBM on the page, IBM was a hit, and some other place on the page links to software, so "software" was a hit too. Basically, you end up with something completely irrelevant.

Eliminating noise is getting rid of all this stuff around the article that is really irrelevant, so you get better results.

The other thing around noise-free is structure. It would be great if you could say, "I want to search for an article about IBM software dated after Oct. 7," or whatever, but that means you also need that additional structured information.

The key here is to get noise-free data and to get full data. It's not only to go to the deep web, but also get access to the data in a noise-free way, and in at least a semi-structured way, so that you can do better text analysis, because text analysis is extremely dependent on the quality of data.

Grimes: I have to agree with you there, Stefan. It's very important to have tools that can strip away not only the ads, but understand where the content is within a page and what's the navigation on that page.

We might not be interested in navigation elements, the fluff that's on a page. We want to focus on the content. In addition, nowadays on the Web there's a big problem of duplication of material that's been hosted on multiple sites. If you're dealing with email or forums, people typically quote previous items in their replies, and you want to detect and strip that kind of material away and focus on the really relevant content. That is definitely part of the noise-free equation, getting to the authentic content.
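Grimes' point about quoted replies can be illustrated with a small sketch: strip reply-quoted lines from a forum or email message, then drop bodies already seen. The convention that quoted lines begin with ">" is an assumption for illustration; production systems recognize many quoting styles:

```python
def strip_quotes(message):
    """Keep only the author's own lines, dropping '>'-quoted replies."""
    return "\n".join(
        line for line in message.splitlines()
        if not line.lstrip().startswith(">")
    ).strip()

def dedupe(messages):
    """Drop messages whose cleaned body was already seen."""
    seen, unique = set(), []
    for m in messages:
        body = strip_quotes(m)
        if body not in seen:
            seen.add(body)
            unique.append(body)
    return unique

thread = [
    "The update broke printing.",
    "> The update broke printing.\nSame here, printing fails.",
    "Same here, printing fails.",  # verbatim duplicate, cross-posted
]
print(dedupe(thread))
```

The second message keeps only its author's own line, and the cross-posted duplicate is dropped, leaving just the authentic content for analysis.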

Gardner: Stefan, you refer to the deep web. I imagine this also has a role, when it comes to organizations trying to uncover information inside of their firewalls, perhaps among their many employees and all the different tools that they're using. We used to call it the intranet, but is there an intranet effect here for this ability to gather noise-free text information that we can then start processing?

Extended intranet

Andreasen: Absolutely. I'd even say the extended intranet. If we're looking at a web browser, which is the way that most business analysts or other persons today are accessing business applications, we're accessing three different kinds of applications.

One involves applications inside the firewall. It could be the corporate intranet, etc. Then there are applications where you have to use a login, and this can be your partners. You're logging in to your supplier to see if some item is in stock. Or, it can be some federal reporting site or something.

The sites behind the login are like the extended enterprise. Then, of course, there is everything out of the World Wide Web -- more than 150 million web pages out there -- which have all kinds of data, and a lot of that is behind search forms, and so on.

Gardner: Seth, as a consultant and analyst, you've been focused on text analytics for some time, but perhaps a number of our listeners aren't that familiar with it. Could you maybe give us a brief primer on what it is that happens when you identify some information -- be it Internet, extended web, deep web? How do you go through some basic steps to analyze, cleanse, and then put data into a form that you can then start working with?

Grimes: Dana, I'm going to first give you an extremely short history lesson, a little factoid for you. Text analytics actually predates BI. The basic approaches to analyzing textual sources were defined in the late '50s. Actually, there is a paper from an IBM researcher from 1958, that defines BI as the analysis of textual sources.

What happened is that enterprises computerized their operations, their accounting, their sales, all of that in the 1960s. That numerical data from transactional systems is readily analyzable, where text is much more difficult to analyze. But, now we have come to the point, as I said earlier, where there is software and great methods for analyzing text.

What do they do? The front-end of any text analysis system is going to be information retrieval. Information retrieval is a fancy, academic type of term, meaning essentially the same thing as search. We want to take a subset of all of the information that's out there in the so-called digital universe and bring in only what's relevant to our business problems at hand. Having the infrastructure in place to do that is a very important aspect here.

Once we have that information in hand, we want to analyze it. We want to do what's called information extraction, entity extraction. We want to identify the names of people, geographical location, companies, products, and so on. We want to look for pattern-based entities like dates, telephone numbers, addresses. And, we want to be able to extract that information from the textual sources.

In order to do that, people usually apply a combination of statistical and linguistic methods. They look for language patterns in the text. They look for statistics, like the co-occurrence of words in multiple texts. When two words appear next to each other, or close to each other, in many different documents -- which can be web pages or other documents -- that indicates a degree of relationship. People apply so-called machine-learning technologies to improve the accuracy of what they're doing.
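As a toy illustration of the two families of methods described here -- pattern-based entity extraction and co-occurrence statistics -- consider the following sketch. The regular expressions and sample documents are invented and far simpler than what real text-analytics engines use:

```python
import re
from collections import Counter
from itertools import combinations

# Pattern-based entity extraction: dates and U.S.-style phone numbers.
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def extract_entities(text):
    """Pull out pattern-based entities, grouped by entity type."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

def cooccurrence(docs, terms):
    """Count how often pairs of terms appear in the same document."""
    counts = Counter()
    for doc in docs:
        present = sorted(t for t in terms if t in doc.lower())
        counts.update(combinations(present, 2))
    return counts

docs = [
    "IBM released new software on 10/07/2009, call 800-555-0100.",
    "The software update from IBM improves text analytics.",
    "Oracle announced a database patch.",
]
print(extract_entities(docs[0]))
print(cooccurrence(docs, {"ibm", "software", "oracle"}))
```

"ibm" and "software" co-occur in two of the three documents, so that pair gets the highest count -- the same signal, at scale, that indicates a degree of relationship between terms.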

Suitable technologies

All of this sounds very scientific and perhaps abstruse -- and it is. But, the good message here is one that I have said already. There are now very good technologies that are suitable for use by business analysts, by people who aren't wearing those white lab coats and all of that kind of stuff. The technologies that are available now focus on usability by people who have business problems to solve and who are not going to spend the time learning the complexities of the algorithms that underlie them.

So, we're at the point now where you can even treat some of these technologies as black boxes. They just work. They produce the results that you need in the form that you need them. That can be in a form that extracts the information into databases, where you can do the same kind of BI that you have been used to for the last 20 years or so with BI tools.

It can be visualizations that allow you to see the interrelationships among the people, the companies, and the products that are identified in the text. If you're working in law enforcement or intelligence, that could be interrelationships among individuals, organizations, and incidents of various types. We have visualization technologies and BI technologies that work on top of this.

Then, we have one other really nice thing that's coming on the horizon, which is semantic web technology -- the ability to use text analytics to support building a web of data that can be queried and navigated by automated software tools. That makes it even easier for individuals to carry out everyday business and personal problems for that matter.

Gardner: I'd like to dig into some use-cases and understand a little bit better how this is being used productively in the field. Before we do that, Stefan, maybe you could explain from Kapow Technologies' perspective, how you relate to this text analytics field that Seth so nicely just described. Where does Kapow begin and end, and how do you play perhaps within an ecosystem of providers that help with text analytics?

Andreasen: Text analytics, exactly as Seth was saying, is really a form of BI. In BI, you are examining some data and drawing some conclusions, maybe even making some automated actions on it.

Obviously, any BI or any text analysis is no better than the data source behind it. There are four extremely important parameters for the data sources. One is that you have the right data sources.

There are so many examples of people building these kinds of BI and text-analytics applications while settling for second-tier data sources, because those are the only ones they have. This is one area where Kapow Technologies comes in. We help you get exactly the right data sources.

The other thing that's very important is that you have a full picture of the data. If there are relevant data sources across all kinds of verticals, all kinds of media, and so on, you really have to be sure you have full coverage of them. Getting full coverage of data sources is another thing we help with.

Noise-free data

We already talked about the importance of noise-free data to ensure that when you extract data from your data source, you get rid of the advertisements and you try to get the major information in there, because it's very valuable in your text analysis.

Of course, the last thing is the timeliness of the data. We all know that people who do stock research get real-time quotes. They get them for a reason: the newer the quotes, the more confidently they can look into the crystal ball and make predictions about the next few seconds.

The world is really changing around us. Companies need to look into the crystal ball in the nearer and nearer future. If you are predicting what happens in two years, that doesn't really matter. You need to know what's happening tomorrow. So, the timeliness of the data is important.

Let me get to the approach that we're taking. Business analysts work with business applications through their web browsers. They often cut and paste data out of a business application into a spreadsheet.

You can see our product as a web browser that you can teach how to interact with a website, how to extract only the data that's relevant, how to structure that data, and then how to repeat the process. Our product can give you automated, real-time, and noise-free access to any data you see in a web browser.

How does that apply to text analytics? Well, it gives you fully covered, real-time data sources, with all of the benefits I just explained.
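Conceptually, such a robot repeats three steps: fetch a page, extract only the relevant fields, and emit a structured record. Kapow robots are actually built visually against live sites, so the following is only a stand-in sketch, with invented HTML and field names:

```python
import re

# A toy page: the robot should keep the headline and body,
# and ignore the navigation bar and the advertisement.
PAGE = """
<div class="nav">Home | Products | Contact</div>
<h1>Power plant outage reported</h1>
<div class="ad">Buy widgets now!</div>
<p>Unit 3 is offline until further notice.</p>
"""

def extract(page):
    """Emit a structured record containing only the content fields."""
    title = re.search(r"<h1>(.*?)</h1>", page).group(1)
    body = re.search(r"<p>(.*?)</p>", page).group(1)
    return {"title": title, "body": body}

record = extract(PAGE)
print(record)
```

The navigation and ad markup never reach the output; what a downstream text-analytics engine sees is a noise-free, semi-structured record rather than a raw page.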

Gardner: I really was intrigued by this notion of the crystal ball, and not two years from now, but tomorrow. It seems to me that so many people are putting up so much information about their lives, their preferences. People in business are doing the same around their occupation. We have this virtual focus group going on around us all the time. If we could just suck out the right information based on our products, we could get that crystal ball polished up.

Let me go back to you, Stefan. Can you give us an example of where a market research, customer satisfaction, or virtual focus group benefit is being derived from these text analytics capabilities?

Knowing the customer

Andreasen: Absolutely. For any company selling services or products, the most important thing for them to know is what the customers think about their product. Are we giving our customers the right customer service? Are we packaging our products the right way? How do we understand the customer's buying behavior, the customer communications, and so on?

Intuit is a customer we share with a text-analysis company called Clarabridge. They use a text-analysis solution to understand their TurboTax customers.

Before they had a text-analysis system, they had people sampling about one percent of the web forums, their own customer-support system, and the emails coming into their contact center, to get a rudimentary overview of what customers thought.

We went in, and with Kapow Technologies they can now get to all these data sources -- online forums, their own customer-support center, and wherever there are networks of TurboTax users -- and extract all the information in near real time. Then, they use the text-analysis engine to make much better predictions of what the customers think, and they actually have their finger on the pulse.

If a set of customers suddenly talks about a feature that doesn't work, or one that's much better in a competitor's product -- thereby looking into the near future of the crystal ball -- they can react early and try to deal with it in the best possible way.

Gardner: Seth Grimes, is this an area where you have seen a lot of the text analytics work focused on these sort of virtual focus groups?

Grimes: Definitely. That's an interesting concept. The idea behind a focus group is that it's a traditional qualitative research tool for market research firms. They get a bunch of people into a room and they have the facilitator lead those people through a conversation to talk about brand names, marketing, positioning, and then get their reactions to it.

With the web, you don't have to get those people together, because they come together on their own and participate in social media forums of various types. There are a whole slew of them. Together they constitute a virtual focus group, as you say.

The important point here is to get at the so-called voice of the customer. In other words, what is the customer saying in his own voice, not in some form where you're forcing that person to tick off number one, two, three, four, or five, in order to rate your product. They can bring up the issues that are of interest to them, whether they are good or bad issues, and they can speak about those issues however they naturally do. That's very important.

I've actually been privileged to share a stage with the analytics manager from Intuit, Chris Jones, a number of times to talk about what he is doing, the technologies, and so on. It's really interesting stuff that amplifies what Stefan had to say.

Broad picture

The idea is that you can use these technologies, both to get a broad picture of the issues, and no longer have to bend those issues into categories that your business analysts have predefined. Now, you can generate the topics of most interest, using automated, statistical methods from what the people are actually saying. In other words, you let them have their own voice.

You also get the effect of not only looking at the aggregate picture, at the mass of the market, but also at the individual cases. If someone posts about a problem with one of the products to an online forum, you can detect that there's an issue there.

You can make sure that the issue gets to the right person, and the company can personally address each issue to keep it from escalating and getting attention that you really don't want it to get. You get the reputation of being a very responsive company. That's a very important thing.

The goal here is not necessarily to make more money. The goal is to boost your customer satisfaction rating, Net Promoter score, or however you choose to measure it. These technologies, the text technologies, are a very important package and part of the overall package of responding to customer issues and boosting customer satisfaction.

While you're doing it, those people are going to buy more. You're going to reduce your support costs, all of that kind of stuff, and you're going to make more money. So, by doing the right thing, you're also doing something good for your own company.

Gardner: In business, you want to reduce the guesswork to do better by your customers. Stefan, as I understand it, Kapow Technologies has been quite successful in working with a variety of military, government, and intelligence agencies around the world on getting real-time information about what's going on, but with the stakes being a bit higher -- things like terrorism, and even insurrections and uprisings.

Tell us a little bit about a second use case scenario, where text analytics are being used by government agencies and intelligence agencies.

Andreasen: As Seth said, the voice of the customer is a very interesting and very valuable use case for text analysis. I'll add one thing to what Seth said. He was talking about product input, and of course, we all know that developing products -- maybe not so much a product like TurboTax, but developing a car -- is extremely expensive. So, understanding what kind of product your customers want in the future is an important part of the voice of the customer.

With a lot of the customers in the military intelligence, it's similar. Of course, they would like to know what people are writing from a sentiment point of view, an opinion point of view, but another thing that's actually even more important in the intelligence community is what I will call relationships.

Seth mentioned relationships earlier, and also understanding the real influencers and who are the ones that have the most connections in these relationships. Let's say somebody writes an article about how you mix some chemicals together to make an efficient bomb. What you really want to know is who this person knows in all kinds of social networks on the 'Net, and to try to make a network of who are the real influencers and who are the network centers.

Finding relationships

We see a lot of uses of our product going out to blogs, forums, etc., in all kinds of languages, often translating the content into English, and doing this relationship analysis. A very popular product for that comes from Palantir Technologies, which is a partner of ours. It has a very cool interactive way of finding relationships. I think this is also very relevant for normal enterprises.
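
The idea of finding "network centers" can be sketched as ranking people by their connection counts. This is a minimal Python illustration with hypothetical names; interactive tools like Palantir's do far more than simple degree counting:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Rank people by how many direct connections they have --
    the simplest proxy for 'network centers' in a social graph."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    return sorted(neighbors, key=lambda p: len(neighbors[p]), reverse=True)

# Hypothetical links harvested from forum replies and blog comments
edges = [
    ("alice", "bob"), ("alice", "carol"),
    ("alice", "dave"), ("bob", "carol"),
]
print(degree_centrality(edges)[0])  # alice has the most connections
```

More sophisticated measures (betweenness, eigenvector centrality) refine the same basic question: who sits at the center of the network?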

Yesterday I met with one of the big record companies, which is also a customer of ours. As soon as I explained this relationship stuff, they said, "We can really use this for anti-piracy, because it is really just very few people who do the major work when it comes to getting copies of new films out on the 'Net." So, understanding these relationships can be very relevant for this kind of scenario as well.

Grimes: Dana, when you introduced our podcast today, you used the term ecology or ecosystem, and that's a really great concept that we can apply here in a number of dimensions. We do have an ecosystem in at least two dimensions.

Stefan mentioned one of the Kapow partners, Palantir. We earlier mentioned the text analytics partner, Clarabridge. Through integration technologies like Kapow, we now have the ability to bring together very disparate information sources with different characteristics, to provide an ecosystem of information that can be analyzed and brought to bear to solve particular business or government problems.

We have a set of software technologies that can similarly be integrated into an ecosystem to help you solve those problems. That might be text analysis technologies. It might be traditional BI or data warehousing technologies. It might be visualization technologies, whatever it takes to handle your particular business problem.

As we've been discussing, we do see applications in a whole variety of business and government issues, whether it's customer or intelligence or many other things that we haven’t even discussed today. So, I find that ecosystem concept to be very useful here in framing the discussions about how the text technologies fit into something that's a much larger picture.

Gardner: So, we're looking at the ecologies and at some of these use cases. It seems to me that we also want to be able to gather information from a variety of different players, perhaps in some sort of supply chain, ecosystem, business process, or network of channel or value-added partners. The ecology and ecosystem concept works not only in terms of what we do with this information, but also in how we can apply that information back out to activities that involve multiple players, beyond the boundaries of any one organization.

I'm thinking about product recall, health, and public-health types of issues. Seth, have you worked with any clients or do you have any insights into how text analytics is benefiting an extended supply chain of some sort, and how the ecosystem of insight into the text analytics solves some unique problems there?

Product recall

Grimes: Product recall is an interesting one. Let me give you an example there. This is, like most examples that we are going to discuss, a multifaceted one.

People are all familiar with the problems with Firestone tires back a number of years ago, early in this decade, where the tread was coming off tires. Well, there are a number of parties that are going to be interested in this problem.

Put aside the consumers, who are obviously very badly affected by it. We also have the manufacturers, not only of the tires, but also of the vehicles, the Ford Explorer in this case.

We have the regulatory bodies in the government, parts of the U.S. Department of Transportation. We have the insurance industry. All of these are stakeholders who have an interest in early detection, early addressing, and early correction of the problem.

You don't want to wait until there are just so many cases here that it's just obvious to everyone, the issues really spill out into the press, and there are questions of negligence, and so on. So, how can you address something like a problem with tires where the tread is coming off?

Well, one way is warranty claims. For example, someone might file a claim through the vehicle manufacturer, Ford in this case, or through the tire manufacturer, claiming a defective product. Sometimes, just an individual tire is defective, but sometimes that's an indication of manufacturing or design issues. So you have warranty claims.

You also have accident reports that are filed by police departments or other government agencies and find their way into databases in the Department of Transportation and other places. Then, you have news reports about particular incidents.

There are multiple sources of information. There are multiple stakeholders here. And, there are multiple ways of getting at this. But, like so many problems, you're going to get at the issue much faster, if you combine information from all of these different sources, rather than relying on a single source.

Again, that's where building up an ecosystem of different data sources that bear on your problem is really important, and that's just a typical use case. I know of other manufacturing organizations that are using this technology in conjunction with data-mining technologies for warranty claims, for example. Consumer appliances is another area that I have heard a lot about, but really there is no limit to where you can apply this.
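
The multi-source advantage can be sketched numerically: pooling incident counts across independent feeds trips an alert earlier than any single feed would on its own. The source names, weekly counts, and threshold in this Python sketch are all hypothetical:

```python
def combined_alert(sources, threshold):
    """Flag the first period in which incident reports, pooled across
    all sources, cross a threshold that no single source reaches alone."""
    periods = sorted({week for counts in sources.values() for week in counts})
    for week in periods:
        total = sum(counts.get(week, 0) for counts in sources.values())
        if total >= threshold:
            return week
    return None

# Hypothetical weekly incident counts from three independent feeds
sources = {
    "warranty_claims":  {1: 2, 2: 3, 3: 5},
    "accident_reports": {1: 1, 2: 2, 3: 4},
    "news_mentions":    {1: 0, 2: 2, 3: 6},
}
print(combined_alert(sources, threshold=7))  # combined signal trips in week 2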

Gardner: Stefan, from your perspective, for these extended supply chains, public-health issues, etc., again we get down to this critical time element -- for example, the swine flu outbreak last spring. If folks could identify through text analytics where this was starting to crop up, they wouldn't necessarily have to wait for the hospital reports. Is that an instance where some of these technologies can really play an important role?

Big pitfall

Andreasen: Absolutely. Before I get into some more real examples, I want to emphasize some of the things that Seth was saying about getting to multiple data sources. I cannot stress enough that one of the biggest pitfalls I have seen when people build a text analysis solution, or really any BI solution, is that they look at what data sources they have and settle for that.

They should instead ask, "What are the optimal data sources to get the best prediction and the best outcome out of this text analysis?" They should settle for no less than that.

The example here will actually explain that. I also have a tire example. We actually have two different kinds of customers using our product to look at tires, tire explosions, and tire recalls.

One is a tire company itself. They go to automotive forums and try to monitor whether people are doing exactly what Seth is saying, filing claims or writing on an automotive blog: "I got this tire, and it exploded." "It's just really bad." "Don't buy it." All those kinds of information from different sources.

If you tap enough of the data sources and get that data in real-time, you can actually go in and contain a potential tire-recall situation before it happens, which of course could be very valuable for your company.

The other use case is stock research. We have a lot of customers doing financial and market research with our technology. One of them is using our product, for example, to go out and check the same forums, but their objective is to predict whether there will be a tire recall. Then, they can predict that the stock is going to crash when that happens, and project that beforehand.

Many different players here can use the same kind of information for different purposes, and that makes this really interesting as well.

Gardner: Well, the age-old part of this is that getting information first has many, many advantages. The new element is that more and more of that information is out on the web, in forms that analytics can reach.

I wonder if we could cap this discussion -- we are about out of time -- by looking at the future. Seth, you mentioned earlier the semantic web. How automated can this get, and what needs to take place in order for that vision of a semantic web to take place?

Grimes: Well, the semantic web right now is a dream. It's a dream that was first articulated over a decade ago by Tim Berners-Lee, the person who created the World Wide Web, but it is one that is on the way to being realized. Being realized in this case means creating meaning.

When Stefan referred earlier to the date of a published article, the title, and perhaps other metadata fields such as the author, he was talking about creating information that describes what's out there on the web and in databases.

Machine processable

Rendering that information into a form that's machine processable, not only in the sense of analysis, but also in the sense of making interconnections among different pieces of information, is what the semantic web is really about. It's about structuring information that's out there on the Web. That can include what Stefan referred to as the deep web, and creating tools that allow people to search and issue other types of queries against that web data.
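
Those machine-processable interconnections can be sketched with subject-predicate-object triples, the data model behind semantic-web standards such as RDF. This toy Python store uses hypothetical identifiers and no real RDF library; it just shows a program following a link from an article to its author's affiliation:

```python
# Hypothetical metadata rendered as subject-predicate-object triples,
# the core data model behind semantic-web standards such as RDF
triples = [
    ("article:42", "title",  "Tread Separation in Radial Tires"),
    ("article:42", "author", "person:jsmith"),
    ("article:42", "date",   "2009-11-03"),
    ("person:jsmith", "affiliation", "org:nhtsa"),
]

def query(triples, subject=None, predicate=None):
    """Return objects matching a subject/predicate pattern, so machines
    can follow links between pieces of information."""
    return [o for s, p, o in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

author = query(triples, subject="article:42", predicate="author")[0]
print(query(triples, subject=author, predicate="affiliation"))
```

In a real semantic-web setting, the triples would be expressed in RDF and the pattern matching done with a query language such as SPARQL, but the chaining of links is the same idea.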

It's something that people are working hard on now, but I don't think it will be realized in terms of broadly usable business applications for a fair number of years. Not next year or the year after, but maybe three to five years out, we will really start to see very broadly useful business applications. There are going to be niche applications in the near term, but later something much broader.

It's a direction that really hits on the themes that we have been talking about today, integrating applications and data from multiple sources and of multiple types in order to create a whole that is much greater than each of the parts.

We need software technologies that can do that nowadays, and fortunately we have them, as we have been discussing. We need a path that will evolve us toward something that creates much greater value for much larger applications in the future, and fortunately the technologies that we have now are evolving in that direction.

Gardner: Very good. I think we have to leave it there. I want to thank both of our guests. We have been discussing the role of text analytics and how companies can take advantage of that and bring that into play with their BI and marketing and other activities, and how the mining of this information is now being done by tools and is increasingly being automated.

I want to thank Seth Grimes, principal consultant at Alta Plana Corp., for joining us. Thanks so much, Seth.

Grimes: Again, thank you Dana, and thanks to Kapow for making this possible.

Gardner: Also, Stefan Andreasen, co-founder and CTO at Kapow Technologies. Thanks again for sponsoring and joining us, Stefan.

Andreasen: Well, thank you. That was a great discussion. Thank you.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. This is Part Four of a series from Kapow Technologies on using BI and web data services in unique forms to increase business benefits.

You have been listening to a sponsored BriefingsDirect podcast. Thanks and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast on information management for business intelligence, one of a series on web data services with Kapow Technologies. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.