Thursday, February 04, 2010

Part 4 of 4: Real-Time Web Data Services in Action at Deutsche Börse

Transcript of a sponsored BriefingsDirect podcast on an intriguing example of web data services in action, one of a series of presentations on web data services with Kapow Technologies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.


Dana Gardner: Hello and welcome to a special BriefingsDirect dual webinar and podcast presentation, "Real-Time Web Data Services in Action at Deutsche Börse." I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

As the culmination of a four-part series on web data services (WDS), we're here to examine a fascinating use-case for data services with Deutsche Börse Group in Frankfurt, Germany. An innovative information service recently created there highlights how real-time content and data assembled from various online sources scattered across the Web provides a valuable analysis service.

The offering supports energy traders seeking to track global fluctuations and micro trends in oil and other related markets. But, the need for real-time and precise data affects more than energy traders and financial professionals. More than ever, all sorts of businesses need to know what's going on in and what's being said about their respective markets, products, and services.

In this series with Kapow Technologies, we've examined the need for WDS and ways that WDS and related tools can be used broadly to solve these problems. Now, we're going to learn the full story of how Deutsche Börse took web data resources and not only efficiently assembled knowledge from automated robots, cleansing tools, and analytics management, but from these capabilities also created a high-value, focused WDS offering in its own right.

Thanks for joining us, as we take an in-depth look at how the market for WDS has shaped up, quickly recap the major findings from our series so far, and then hear directly from the leader of the Deutsche Börse project, as well as from a key supplier that supported them in accomplishing their web services goal.

So, to learn more about WDS as a business, please join me in welcoming our first guest, Mario Schultz, director of Energy Facts at Deutsche Börse Group.

Mario Schultz: Hi. I'm happy to be here and looking forward to the session today.

Gardner: Stefan Andreasen is also with us. He is the CTO at Kapow Technologies in Palo Alto, California. Welcome back, Stefan.

Stefan Andreasen: Thank you, Dana. It's a pleasure to be here.

Gardner: First, let me try to set the stage for how WDS becomes the grist for new analysis mills. We've been through quite a transition in the past 10 or 15 years. We have moved quickly as a result of the Web. We started not too long ago with very proprietary content, often bound in books and distributed by trucks, and it was perhaps six months or a year outdated, in terms of the facts and figures, by the time it was fully distributed.

Chaotic content

The Web really helped accelerate things, but the content was still chaotic. It was loosely coupled information, not very well structured or organized, and it wasn't necessarily of a business-critical nature.

We quickly saw, during the late '90s and into the 2000s, that the use of middleware, objects, and standards such as SQL, along with relational databases, started to cross over into what was considered more general content: not strictly data, but the information people actually used in business processes.

Now, we've moved along through how organizations manage their applications and data together through use of XML, web services, and service oriented architecture (SOA), to the point where we are now, at the level of WDS. We're beginning now to manage that much better and bring automation, low risk, and security to those uses.

It's interesting to me that we've moved beyond a level of static information to dynamic information and yet we still haven’t taken full advantage of everything that’s being developed and created across the Web.

But today’s market turbulence demands that we do that. We have to move into an era where we can take quality data and provide agility into how we can consume and distribute it. We're dealing with more diverse data sources. That means we need to have completeness and we need to be comprehensive, in order to accomplish the business information challenges each business faces.

The need now is for flexible, agile, and mixed sourcing of services and data together. The content is often portable. That means it's ubiquitous across mobile devices and social networks in such a way that real-time analytics becomes extremely important. This cuts across many different verticals, from retail, to trading and finance, healthcare, defense, and government.

The use of data as a business is now coming to the fore. We're beginning to see value, not from just the assimilation of data for use internally, but as more and more businesses are starting to take advantage of the data that they create and have access to. They share that with their partners, create ecosystems of value, and then even perhaps sell outright the information, as well as insights and analysis from that information.

According to Forrester Research, WDS describes the end-to-end analytic information pipelining process, a stream of liquid intelligence. It's palatable and consumable. I've also looked at the Wikipedia definition, and it seems to me that we have gone well into the ability to mash up and reuse information. It's really about the technologies around discovery and extraction, moving into consolidation and access, and then external sourcing and distribution.

To me, WDS really means the lifecycle of content use and reuse across the Web, not in a chaotic fashion, but in a managed fashion, with security permissions, access control, and the ability to bring it into play with other analytic applications and business intelligence (BI) processes.

I want to go now to Mario. When you think of WDS, how has this definition really impacted you and your business?

At the beginning

Schultz: I began by working on the exchange of information that we have in our own systems. We were proceeding with our ideas of enhancing our services and designing new products and services. We were then looking into the Web and trying to get more information from the data that we gather from websites -- or somewhere else on the global Web -- and to integrate this with our own company's internal information.

Everything we do focuses on the real-time aspect. Our WDS always focus on that real-time dimension.

Gardner: Before we get into the fuller Deutsche Börse story, I'd like to revisit our podcast series so far. In our first podcast we talked with Howard Dresner, a real leader and thought developer in BI. He told us quite a bit about the need for bringing more sources, just as Mario pointed out, both internal and external, into an analytic process.

The idea of extended data sources forms strong components of forecast and analytic activities that are now underway, according to Howard, and BI needs to be not constrained or limited by the need for timely and relevant information from any web source. Howard really reinforced the notion for me that the Web has become where structured data was 10 or 15 years ago and is important for enterprises doing analytic activity.

In the second podcast in our series, Forrester's Jim Kobielus talked about the need to know what's going on and how important it is for organizations to have a sense of what people inside and outside the organization, across the spectrum of their supply chains, distribution networks, and actual end users, are doing and saying.

We've really seen an increase in networking, social networks, and social media. There's all this buzz going on about business activities, products, and services, all of which can be extremely valuable. You can think of it as a massive real-time focus group, but only if you can access the information that's relevant. People are willing to tell you what they think, if you're able to scoop it up. And, it was about this ability to scoop up the data and information and inference that Jim Kobielus really honed in on.

He told us a lot about the identity gathering, cleansing, and the ability to then exercise the content in some sort of meaningful way. He also emphasized the need to manage this in terms of marts and warehouses. A lot of infrastructure has been put in place. But, again, the value of the infrastructure is only as good as the value of the actual content that's involved.

In the third part of the series -- we are now in the fourth and last part -- Seth Grimes, another thought leader in terms of web analytics and text analytics, talked about the need to analyze in real-time. He emphasized the need of structured data as important, but real-time data as being the next big thing to move us to the era of advanced analytics. We're not just telling what happened before in the pipeline or supply chain, but what's going to happen next. This, I think, bears quite a bit on what Mario is going to discuss.

So, let’s move along now to Deutsche Börse. Mario, I want to hear more about this organization for our listeners in North America. Tell us a little bit about your company, your organization, and what you do.

Several business lines

Schultz: Deutsche Börse is the German stock exchange in Frankfurt, Germany, and we offer all kinds of products and services around on-exchange trading and the adjacent processes. That means we have several business lines at Deutsche Börse.

We have something that's called Xetra, our electronic trading system for cash products. We have Eurex, our derivatives business line, which is well-known worldwide, and where you can trade derivatives on that platform.

We have a subsidiary that’s called Clearstream doing all the custody and clearing services after you have done your trade. And, we have the Market Data & Analytics (MD&A) business line, where I've been working for 10 years. The MD&A business line is responsible for the real-time delivery of information to the world outside.

We have a main system called CEF. It is our backbone IT solution for delivering data in real-time with milliseconds optimization. The data is mainly coming from our internal IT systems, like Xetra and Eurex, and we deliver this data to the outside world.


In addition, we calculate all the relevant indices, like the DAX, the flagship index for the German markets with 30 instruments, and more than 2,000 -- or nearly 3,000 -- indices that are distributed over the well-known data vendors, for example, Bloomberg or Reuters. They are our main distribution networks, where we are delivering all our information.

For several years now, I've been responsible for developing new products and services around information for on-exchange or off-exchange trading. This is why we've invented and developed the Energy Facts service that is part of our discussion today.

Gardner: When you were thinking about the challenges around this opportunity, it strikes me you had many different sources of information you had to bring together. What were the challenges that you encountered as you started to pipeline these information sources together?

Schultz: One-and-a-half years ago, the idea was to develop new products and services where we could transform our know-how and this real-time connection, aggregation, and dissemination of data to other business lines where we were not currently working. This is why we looked into the energy trading sector, mainly focused on the power trading here in Europe.

Energy markets were liberalized over recent years. It started in the Nordic area, with Sweden and Norway. Ten or 12 years ago, they started liberalizing their energy trading markets, and Germany was the next country to follow this trend. Germany is currently the most important market for energy and power trading in the middle of Europe.

We started to analyze the information needs in this sector, and recognized that it's a fundamentals-driven market. Traders are looking into the fundamental factors that affect the price of the energy or the power that you trade, whether it’s oil or whatever. That’s how we started with power trading.

You have wind and other weather factors. You have temperature. You have the availability of power plants. You try to categorize and summarize these factors into what are called the supply and demand sides of energy trading.

Fundamental data models

By talking to well-known players in the market, we quickly recognized what they were doing on the trading and analytics side, and that we could build up a very powerful fundamental data model. You have to collect all the relevant information to get an overview and an estimate of the price, in this case, where the power price could develop and in which direction.

The main issue and main task in the beginning was to collect the relevant data. Quite quickly, we were able to set up a big list of all the relevant data sets and sources, especially for Germany and some adjacent countries. We came up with around 70, 80, or even 100 different sources on the Web to grab information from. So, the main issue was how to collect and grab all this data, in a manageable way, into one database. That was the first step.

In the second step, Kapow came into play. We recognized that it's really important to have a one-stop-shopping inbound channel that collects all the information from these sources, so that you don't have to have several IT systems, your own programs, JavaScript, or whatever to get the information.

As the responsible product manager for this project and this new product, I had to have good technology in place from the beginning that would be able to handle all these kinds of sources from the Web.

Gardner: Let me go to Stefan now at Kapow. When you heard about Deutsche Börse and some of these issues that they were facing and the challenges that they were trying to solve, what came to your mind in terms of how Kapow might apply?

Andreasen: It came to mind that, if these data sources exist somewhere on the Web, we can actually grab them where they are. What you traditionally do with information gathering is that you call every company or every entity that has data and ask them, "Will you please provide the data in this or this format?" But, with Kapow Web Data Services, you can just grab the data, wherever it is on the Web, and assemble this valuable data source much easier and much faster.

Gardner: Let’s go back to Mario. Tell us, as you progressed through the solution, what was the experience?

Schultz: Just to go back one step. We recognized that there are so many different data formats we had to grab. All these different providers of information in Germany and the other European countries have their own websites. Some give the data in HTML format. Others use XLS, CSV, or even PDF.

Kapow gives us a manageable way to get this information from these quite different sources and formats: a process-driven, graphical user interface (GUI) driven tool that reduces the personnel and manpower effort needed to collect and grab the data.

At our starting point, one-and-a-half years ago, a lot was underway here in Germany and the other European countries, with the Copenhagen Conference, the carbon-emission discussion, and liberalization. There were discussions about whether the big players with the transmission networks and power plants had to split those up. So there really are a lot of changes. You can't just take a source or website, program the script once, and then leave it. We always have to check it, because they keep changing the structure.

Recognize change

New companies are formed, and some transmission lines change hands. So, other companies are building up new websites. A lot is underway. With 70, 80, or even 100 sources, you always have to recognize change and then check whether you have to rework things.

I started to work with an internal solution that I thought could handle all that. After a few weeks of developing and discussing, we recognized that our internal solution was not appropriate and not capable of doing all those kinds of things. We quickly came across Kapow and evaluated its capabilities. We decided, nearly from the beginning, just a few weeks into the project, that we had to use the Kapow tool to collect all this data from the websites.

Gardner: As I understand, you were involved with programming some robots and setting them up, and then you were able to adjust them dynamically to whatever the needs were of the analysis intent.

Schultz: The main focus in the beginning was to handle all these different formats: even, for example, to go into a PDF and describe the relevant data that we want to grab, not as text, but as a figure that we needed for our further processing.

There are even some interesting JavaScript or Java-based websites where you have to click a switch and then right-click with the mouse to get the dataset. We were able to do all these kinds of things with the Kapow tool, using robots within Kapow to grab this kind of data automatically.
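The inbound channel Mario describes reduces every source, whatever its native format, to one common record shape. Here is a minimal sketch in Python of that idea; the source names, field names, and CSV columns are illustrative assumptions, not Deutsche Börse's actual schema, and a real robot would fill in the HTML and PDF paths:

```python
import csv
import io

def parse_csv(source_id: str, raw_text: str) -> list[dict]:
    """Turn a CSV source into common-schema records."""
    return [
        {"source": source_id, "date": row["date"], "value": float(row["value"])}
        for row in csv.DictReader(io.StringIO(raw_text))
    ]

def parse_html(source_id: str, raw_html: str) -> list[dict]:
    # In a real pipeline, an extraction robot (or an HTML parser such
    # as lxml) would locate the relevant table cells; stubbed out here.
    raise NotImplementedError

# One dispatch table: each format gets a parser, every parser emits
# the same record shape, so downstream code sees a single schema.
PARSERS = {"csv": parse_csv, "html": parse_html}

def collect(source_id: str, fmt: str, payload: str) -> list[dict]:
    """The one-stop inbound channel: dispatch on format, emit records."""
    return PARSERS[fmt](source_id, payload)

sample = "date,value\n2010-02-04,1250.5\n2010-02-05,1248.0\n"
records = collect("plant-availability-de", "csv", sample)
```

The point of the dispatch table is that adding a new source or format touches only one parser, not the consumers of the database.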

Gardner: What have been some of the results? What business-development activities have you had? What's been the value add?

Schultz: The value add was to bring all this data into one common data format, one database, so we would be able to deliver it to the vendors via a web tool, a web terminal, or even our existing CEF data feeds. A lot of players in the market are trying to collect this data by themselves, even manually, to get an overview of where the power price will develop over the next day, hours, weeks, or months.


There are some other providers in the market focusing on the real-time delivery of data. In the general on-exchange or off-exchange business, we're talking millisecond optimization. That's not the timing we have here. But it's a move from a once-a-day PDF analyst commentary emailed in the morning to a real-time terminal, or even a Bloomberg or Reuters screen, where you get our Energy Facts data as an on-time, real-time information set for trading.

Gardner: I'm really intrigued by your ability to manage so many different sources in real time and, as you say, coming from all different sources, interfaces, and application formats. Can you give us a little demonstration and show us the application in action?

Schultz: Okay. You should see on your screen our Energy Facts web terminal. This is one of our delivery possibilities to bring this data in real-time to the end users.

In the first phase, we're focusing on the German market, plus Belgium, France, and the Netherlands. We decided to start with these four European countries. I don't want to go through all the pages. I'll just take two or maybe three of them to give a view of what's going on in this Energy Facts terminal.

Not only websites

Currently, we have 70 or 80 sources that we're grabbing. It's not only websites, but we have some third-party providers that are delivering information, for example, weather, temperature, and things like that. We have providers giving data via FTP service, and we even use Kapow for grabbing data from these third-party players. As I said, it's a one-stop shopping solution to get everything via one channel.

For example, an interesting thing in the energy trading space is availability. When a company is looking into the future, it wants to know the availability of different power plants. On the right-hand side you can see a summary for nuclear power, for example, and for lignite, hard coal, and water.

There are various sources in Germany giving all this information in different formats. We grab everything into one database, do quality checks, and then compile the information into the front end that you can see down there, with a graphical presentation. We have a table with all the figures, and we even do some analytic enrichment, such as the deviation from what was published the day before.

You can see, for example, that we have some changes in the hard coal availability for the next 30 days. We're taking those sources, collecting the information, doing quality checks and quality assurance, aggregating everything into one database, one data format, and then presenting it on the screen.

Gardner: If I'm a consumer of this, if I'm a trader who subscribes to your service, and I encountered some other form of information that I wanted to bring into the mix, do I have the option of approaching you and asking you to bring that in, or is that out of the question?

Schultz: No, it's not out of the question. This is just our starting point. As I said, this is something where we've tried to create a completely new business in the energy sector. We started with these four countries and datasets, and we will extend it to other countries. If you ask us to add other kinds of data, we can integrate it quite quickly into our service. No problem.

Two other examples. One is something that in Germany is called Urgent Market Messages. In Germany, we have four big power plant providers and transmission-system operators. The power plant providers push out Urgent Market Messages in real time, as fast as possible, when a power plant has to go into maintenance mode or has an accident and they have to repair something.

We grab all these different kinds of sources from those power plant providers and then aggregate all the Urgent Market Messages into the table that you can see down there. If you go to other pages on our screen, you can always see the latest Urgent Market Messages on the left-hand side. If, for example, a nuclear power plant goes off the grid unexpectedly, that could dramatically change the power price in the market. This is another example of collecting data, for Urgent Market Messages.

I don't want to stretch this too much, but the last point is cross-border data. Germany sits in the middle of all this trading in Europe, so we have a lot of connection points to the other countries: Denmark, Sweden, Poland, France, Belgium, and Luxembourg. Many grid lines cross the border into the other countries.

You always have to collect the data from these different transmission lines to the other countries, because they auction the capacity to transport power, for example, from France to Germany, or the other way around. You have to get all this information for a better understanding of pricing.

Power allocation

For example, in this case, it's the Germany-to-France connection point. Down there, you can see how much power has been allocated for a specific hour of the day. The red line is the price for the transportation in this case. In addition, you could show the price difference, for example, between Paris and Leipzig, the two exchanges for energy. Everything is collected and put into one view, showing the interesting figures on one screen.

Gardner: Suffice it to say that there is an awful lot going on behind this little red line. It's not that easy to put this together. This is reflecting an awful lot of information and processing.

Schultz: This slide is one of our pages, used for one provider, for Germany to France. Now, I'll use this button to show you the other ones, like the other connections to Germany, and then the Germany-Netherlands connection.

These are the four countries we're currently covering, and you can see all the connection points for them. Later on, we'll continue with Denmark and the others. This is really the power of having all this data in one tool, where the aggregation, the quality checks, and everything come into play.

Gardner: Mario, I have to imagine that there are external forces that can come to bear on this, perhaps a massive snowstorm or some other disruption in the price of a major commodity, and that’s something that you can bring into this picture almost immediately, right?

Schultz: Yes. For example, in what I just showed, if we go to this weather page, you see temperature. This is very interesting. Generally in Germany, as you see on the yellow curve, the typical winter temperature is between 1 and 3 degrees Celsius. The forecast temperature, though, is around -5 degrees. Some time ago, it was even -7 degrees. That's a really big difference, and it's normally an indication of higher power prices, because people will demand more power to heat their buildings and offices. So, this has really changed. This weather data is updated every six hours within our service.

Gardner: If these traders also wanted to try to find out why they were seeing certain effects in these analytic graphs, is there a way for them to then quickly go out and look at the news feeds or other information, so that they could determine what’s behind the curves?

Schultz: Currently, that's not part of our service, and we didn't do it because there are other providers for that information. Generally, the on-exchange and off-exchange prices are available from the existing data vendors, for example, Bloomberg or other service providers. Energy Facts focuses on the fundamental data, collected in real-time and aggregated into one service; that's where we saw the missing piece in Europe. If traders want news, they have other providers for that on their desks.

Gardner: I see. So, this is really focused on numeric, algorithmic, programmable types of information and data.

Schultz: This is what we call the fundamental data set: what fundamentally drives the power price, the demand- and supply-side factors behind the price. Analysts and traders can get this information in real-time in one service to make better estimates of the price.

Gardner: That’s really impressive, I appreciate your walking us through it. I wonder if we can go back now to Stefan and talk a little bit about what Kapow and its values and services brought to the table to help support this really impressive application and service.

Impressive service

Andreasen: Sure, Dana. This is an extremely impressive service that Mario just showed us here, and I'm sure, if you're dealing with buying and selling energy, this is a must for you to be sure you made the right decision.

If we go back to what I talked about earlier, businesses are relying more and more on data to make the right decision, and their focus is on quality, completeness, and agility. Let's be more practical here and ask how you actually get this data.

There is a term, data integration, which is about accessing the data and providing it through a standard API, so that you can actually leverage it in your business applications.

Energy Facts is accessing this data at the 70-80 different data sources, as Mario said, and providing it as a feed whose frequency depends on the volatility of each data source. Some of the data is delivered every minute, and some every four hours, based on how quickly the data source changes. WDS is all about getting access to this data where it resides.

There are really two different kinds of data sources. One kind is more like a real-time data source. Let's say you go to a patent directory, and there are probably millions of patents. In that case, you would use the Kapow Data Server to wrap that data source into a service layer, and then you would be able to query it in real time and get results back immediately. That's real-time access, where you have a vast amount of information.

The other scenario, and I think that's more what we see in the Energy Facts example here, is where you have a more limited data source, and you are actually trying to do a consolidation of the data into a database, and then you use that database to serve different customers or different applications.

With Kapow, you can actually go in and access the data if you can see it in your browser. That's one thing. The other thing you need to do to make this data available to your business application is to transform and enrich the data so that it actually matches the format you want.

For example, a website might show the date as "2 hours ago" or "3 minutes ago" and so on. That's really not useful. What you really want is a timestamp with the year, month, day, hour, minute, and second, so you can actually start comparing these. So, data cleansing is an extremely important part of data extraction and access.
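The cleansing step Stefan describes, turning a relative label into an absolute timestamp, can be sketched in a few lines. This is a minimal illustration, not Kapow's implementation; the label formats it recognizes are assumptions:

```python
import re
from datetime import datetime, timedelta

# Map the unit word found in the label to a timedelta keyword.
UNITS = {"minute": "minutes", "hour": "hours", "day": "days"}

def normalize(label: str, now: datetime) -> datetime:
    """Convert a relative label such as '2 hours ago' into an absolute
    timestamp, so values from different sources become comparable."""
    m = re.match(r"(\d+)\s+(minute|hour|day)s?\s+ago", label.strip())
    if not m:
        raise ValueError(f"unrecognized label: {label!r}")
    count, unit = int(m.group(1)), UNITS[m.group(2)]
    return now - timedelta(**{unit: count})

now = datetime(2010, 2, 4, 12, 0, 0)
print(normalize("2 hours ago", now))    # 2010-02-04 10:00:00
print(normalize("3 minutes ago", now))  # 2010-02-04 11:57:00
```

Passing `now` in explicitly, rather than calling the clock inside the function, keeps the cleansing step deterministic and testable.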

The last thing, of course, is serving the data in the format you need. That can be a database, if you're doing consolidation, or it can be as an API, if you are doing more of a federated access to data, and leaving the data where it is.

Actually, all styles exist, but there is a tendency for many companies to actually access the data where it is, rather than trying to consolidate it to a new place.

Urgent messages

Schultz: Dana, I have a very good example of this. I talked about Urgent Market Messages, where the power plant providers send out, as quickly as possible after an incident occurs, an Urgent Market Message about changes in power plant availability. This is something we aggregate well using Kapow, because we can schedule all these robots in a very flexible way.

Currently, we're checking these Urgent Market Message sources every minute. At all aggregation levels, we can always state whether a message is valid or invalid. I didn't focus on this in my presentation.


If we find a message on the website, we put it in our service. Maybe in the next minute the message disappears from the website. We still have it in our service, but then we flag the message as invalid. The user knows that this message had been on the source website but has now disappeared. We still have the information, but we can distinguish between these two statuses: a valid or an invalid Urgent Market Message.

This is accomplished by accessing the source, enriching it into the database, doing some scheduling, and then giving feedback and checking the website again. By doing these three steps, we're able to offer this part of our Urgent Market Message presentation layer.
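The valid/invalid logic Mario walks through amounts to a reconciliation pass on each polling cycle. A minimal sketch, with a plain dict standing in for the real database and the scheduler assumed to call `reconcile` every minute:

```python
def reconcile(store: dict, fetched_ids: set) -> dict:
    """One polling cycle: messages seen on the source are (re)marked
    valid; messages we hold that no longer appear are marked invalid,
    but never deleted, so users keep the history."""
    for msg_id in fetched_ids:
        store[msg_id] = "valid"
    for msg_id in store:
        if msg_id not in fetched_ids:
            store[msg_id] = "invalid"
    return store

store = {}
reconcile(store, {"umm-1", "umm-2"})  # both messages appear on the site
reconcile(store, {"umm-2"})           # umm-1 disappears from the site
# store now maps umm-1 -> "invalid" and umm-2 -> "valid"
```

Note that a message that reappears on the source would flip back to valid on the next cycle, which matches the status-flag approach described above.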

Gardner: Mario, I think you're really a pioneer in this. What intrigues me is how far this can go in addition to what you have done with it, and how this could affect the number of other industries and vertical businesses as well.

From your perspective, Stefan, how are other types of business, enterprises, and service providers likely to start using this and providing WDS-based, value added services as well?

Andreasen: That’s a very good question. Kapow Technologies today has more than 400 customers, and for them our technology becomes a business-critical part of what they do. Let me try to explain that. Most information providers sell data to other businesses. In the U.S., for example, there is a big business around background checking, both of people and of companies. If you go into a U.S. bank to get a credit card, they're going to run a background check on you before you can get that card.

One of the things they check is a set of resources on the Web, for example, criminal records. Every courthouse has a website, where you can log in and search for the criminal records of a certain person.

Most of the companies doing this background checking are Kapow customers, using Kapow's Web Data Services to service-enable all these courthouses. When a credit-card application triggers a background check, Kapow automatically goes out and gets that information from the courthouse websites and a lot of other data sources in real time. Otherwise, they would need 50 or 60 people manually typing in queries, and they wouldn’t get the results until two days later.
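The fan-out Stefan describes, querying many court sources at once instead of by hand, can be sketched like this. The source names and the fetch function are hypothetical stand-ins for the web robots; this is not Kapow's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of service-enabling many courthouse websites: fan one
# background-check query out to all sources in parallel, rather than having
# dozens of people search each site manually. Names here are assumptions.

def fetch_record(source, person):
    # Stand-in for a robot that searches one courthouse site for a person.
    return (source, f"no record for {person}")

def background_check(person, sources):
    """Query every source concurrently; results come back in source order."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(lambda s: fetch_record(s, person), sources))

results = background_check("J. Doe", ["county-a", "county-b", "county-c"])
print(len(results))   # 3
```

The point of the pattern is turnaround time: the sources are consulted concurrently at request time, so the aggregate answer arrives in seconds rather than the two days a manual process would take.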

Gardner: I suppose another effect also over the past 10 or 15 years, from my timeline earlier in the presentation, is that these web standards have kicked in, not only for looking up information across the Web, but it has also become a standardized way of accessing information internally. What about the use of this for corporate performance management and other aspects of the web data that’s inside of companies?

Available white paper

Andreasen: I encourage everybody to go to our website and download a white paper from one of our customers, Fiserv, a large financial services company in the U.S. Fiserv has many business partners: more than 300 banks in more than 10 countries. Because they're selling services, it's incredibly important for them to monitor their customers and understand what's happening.

They had a lot of people who logged into these 300 partner banks every day, grabbed financial information such as interest rates into an Excel spreadsheet, put it into a database, and then got it up on a dashboard.

The thing about this is that, first, you have a lot of human labor, which invites human error. You can only do it once a day, and it's a tedious process. So they brought Kapow in and automated the extraction of this data from all their business partners -- 300 banks in more than 10 countries.

They can now get that data in near real time, so they don’t have to wait for it, and they don’t have to go without it on the weekend because people aren't working. They get those business-critical insights into the market and their partners instantly through our product.

I can give you another example. A large car manufacturer is spending almost a billion dollars a year on television advertising. Of course, there are several parameters that are important for them to understand in deciding how to spend that advertising money in the best possible way.

These data sources are, for example, lead reporting, to understand what leads are coming in; market data from business information providers about trends in the markets; reporting from ad campaigns, showing how many people clicked on an ad or watched a television show; and registrations, showing how many cars are getting registered, their models versus their competitors'.

By using Kapow, they could hook up to all of these data sources in real time and suddenly get complete insight into the effectiveness of how they spend their advertising dollars, achieving a very good return on the investment.

So, it's just another example again about how WDS can help the market analyst, the product manager, and a lot of people who have to make very vital business decisions in the companies out there.

Gardner: Great. I appreciate your input Stefan. Today’s discussion on how the Deutsche Börse Group in Frankfurt, Germany is using Kapow Technologies for a real-time web data analysis service comes as a culmination of a four-part series on WDS.

We have seen how an innovative information service, created rapidly, elegantly demonstrates how real-time content and data assembled from various online sources provides a valuable service and an analysis capability as a business.

What's happening with WDS is that it has gone beyond an internal enterprise focus. It has become a business unto itself. So, there are lots of value opportunities. Companies can sell new value across business solutions, enhance their internal strategies, and create ecosystems of partnership.

I think what we are going to see, when cloud computing starts to really take off, rather than be discussed so much, is the opportunity for companies that are in partnership to really encourage competitive advantage by sharing data and analytics effectively. It also drives more business strategy and execution and creates new and additional revenue streams as a result.

So, I want to thank Mario at Deutsche Börse for his participation here. I think they're a real poster child for how real-time analytics can be brought together. So, thanks to you, Mario, for joining us.

Schultz: It was a pleasure, Dana. Thank you.

Gardner: And, certainly, I also want to give the opportunity for viewers and listeners to learn more about some of the topics we have discussed from Kapow. There are a lot of different resources available there in order to take some next steps or continue to educate yourselves on some of these issues.

This is Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator. I also want to thank Stefan Andreasen. He is the CTO of Kapow.

Andreasen: Thank you very much, Dana.

Gardner: You've been enjoying a BriefingsDirect presentation. Thanks again for joining us, and come back next time.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast on information management for business intelligence, one of a series on web data services with Kapow Technologies. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Wednesday, February 03, 2010

CERN’s Evolution to Cloud Computing Portends Revolution in Extreme IT Productivity?

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, we present a sponsored podcast discussion on some likely directions for cloud computing based on the exploration of expected cloud benefits at a cutting edge global IT organization.

We are going to explore the thinking on how cloud computing, in both its private and public varieties, might be useful at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today, Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Welcome, Tony.

Tony Cass: Pleased to meet you.

Gardner: We’re also here with Steve Conway, Vice President in the High Performance Computing Group at IDC. Welcome, Steve.

Steve Conway: Thanks. Welcome to everyone.

Gardner: And, we're also here with Randy Clark, Chief Marketing Officer at Platform Computing. Welcome Randy.

Randy Clark: Thank you. Glad to be here.

Gardner: Over the last several years, we've seen cloud computing become quite popular as a concept. It remains largely confined to experimentation, but this notion of private cloud computing is being scoped out by many large and influential enterprises as well as large early adopters like CERN.

Let me go to you Steve Conway. What's the difference between private and public cloud and how far away are any tangible benefits of cloud computing from your perspective?

Already here

Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

Gardner: So, this notion of having a cohesive approach to computing and blending what you do on premises with these other providers isn't just pie in the sky. This is really something people are serious about.

Conway: Well, CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Gardner: Randy Clark, from your perspective, how many customers of Platform Computing would you consider to be seriously evaluating what we now refer to as public or private cloud?

Clark: We have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily a third and possibly more.

Gardner: Tony Cass, let's go to you at CERN. Tell us first a little bit about CERN for those of our readers who don’t know that much or aren't that familiar. Tell us about the organization and what it does, and then we can start to discuss your perceptions about cloud.

Cass: We're a laboratory that exists to enable, initially Europe’s and now the world’s, physicists to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? They're really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

Gardner: So, to look at the IT problem and unpack it a little bit. You're dealing with such enormous amounts of data. You’ve been in the distribution of these workloads for quite some time. Maybe you could explain a little bit the evolution of how you've distributed and managed such extreme workload?

No central management

Cass: If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes, and people had to submit the workloads, the analysis, or the Monte Carlo simulations their experiments needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

A lot of the grid academics have focused on understanding and exploring what could be done with the grid as an idea. What we've been focusing on is making it work -- pushing the envelope not in terms of the technology, but in terms of the scale, to make sure that it works for the users. We connect the sites, we run tens of thousands of jobs a day across them, and gradually, through a number of exercises, we've come to distribute the data at gigabytes a second and tens of thousands of jobs a day.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

Gardner: As I understand it, the interest you have in cloud isn’t strictly a matter of ripping and replacing, but augmenting what you're already doing vis-a-vis these grid models.

Cass: Exactly. The grid solves the problem in which we have data distributed around the world and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a slot becomes available at site B first while my job is at site A. Somebody else who comes along later actually gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull my job down to that site. You have lots of schedulers involved -- in the experiment, the grid, and the site -- and we're looking at simplifying that.

These skeleton jobs also install software, because they don’t really trust the sites to have installed the software correctly. So, there's a lot of inefficiency there. This is symptomatic of a more general problem: batch workers are good at sharing resources that are relatively static, but not when the demand for resource types changes dynamically.

So, we’re looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to combine virtualization and cloud with the grid, which knows where the data is.
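The skeleton-job scheme Tony describes is commonly known as pilot-job or late-binding scheduling: a lightweight pilot is queued at every site, and whichever pilot starts first pulls the real job from a shared queue. The sketch below is a minimal illustration of the idea under assumed names; it is not CERN's actual system.

```python
from collections import deque

# Illustrative sketch of late-binding "skeleton job" scheduling: pilots are
# submitted everywhere, and the first pilot to get a batch slot pulls the
# next real job. Site and job names are hypothetical.

job_queue = deque(["analysis-job-1", "analysis-job-2"])

def pilot_started(site, job_queue):
    """Called when a pilot begins running at a site; binds a real job to it."""
    if not job_queue:
        return None                      # nothing left to do; pilot exits
    job = job_queue.popleft()            # late binding: first free slot wins
    return (site, job)

# Site B happens to free a batch slot before site A does:
assignments = [pilot_started("site-B", job_queue),
               pilot_started("site-A", job_queue)]
print(assignments)   # [('site-B', 'analysis-job-1'), ('site-A', 'analysis-job-2')]
```

This captures the inefficiency Tony points out as well: the binding decision is deferred until a slot actually opens, instead of guessing in advance which site will free up first.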

Gardner: Steve Conway, you’ve been tracking HPC for some time at IDC. Maybe you have some perceptions on how CERN is a leading adopter of IT over the years, the types of problems they're solving now, or the types of problems other organizations will be facing in the future. Could you tell us about this management issue and do you think that this is going to become a major requirement for cloud computing?

World technology leader

Conway: Starting with CERN, their scientists have earned multiple Nobel prizes over the years for their work in particle physics. As you said before, CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second, when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale or the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, as Tony said, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute about $16 billion as a market.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.

Gardner: How important is this issue that Tony brought up about being able to manage in a dynamic environment and not just more predictable static batch loads?

Conway: It’s the single biggest challenge we see for not only cloud computing, but it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Gardner: Randy Clark, as you look at the marketplace and see organizations like CERN changing their requirements, what, in your thinking, is the most important missing part from what you would do in management with HPC and now cloud? What makes cloud different, from a management perspective?

Dynamic resources

Clark: It’s what Tony said, which is having the resources be dynamic not static. Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

Gardner: Let’s go back to Tony Cass. Tell me what you’re doing with cloud in terms of exploration. I know you’re not in a position to validate, or you haven’t put in place, any large-scale implementation or solutions that would lead the market. But, I’m very curious about what the requirements are. What are the problems that you're trying to solve that you think cloud computing specifically can be useful in?

Cass: The specific problem that we have is to deliver the most physics we can within the fixed budget and the fixed amount of resources. These are limited either by money or by data-center cooling and generally are much less than the experiment wants. The key aim is to deliver the most cycles we can and the most efficient computing we can to the physicists.

I said earlier that we're looking at virtualization to do this. We’ve been exploring how to make sure that the jobs can work in a virtual environment, that we can instantiate virtual machines (VMs) as necessary according to the different experiments submitting workloads at any one time, and that we can integrate the instantiation of VMs with the batch system.

Once we got that working, we figured that the real problem was managing the number of VMs. We have something like 4,000 boxes, but if you have a VM per core, plus a few spares, that can easily reach 60,000, 70,000, or 80,000 VMs. Managing these is the problem we're exploring now, moving from "can we do it?" to "can we do it on a huge scale?"
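Tony's estimate follows from simple arithmetic, assuming roughly one VM per processor core. The 16-core figure below is an illustrative assumption, not a number from the transcript:

```python
# Rough arithmetic behind the VM-count estimate: a few thousand boxes with
# one VM per core, plus some spares, lands in the tens of thousands of VMs.
boxes = 4000
cores_per_box = 16          # assumed for illustration
spare_fraction = 0.05       # "plus a few spare" VMs, taken here as 5 percent

vms = int(boxes * cores_per_box * (1 + spare_fraction))
print(vms)                  # 67200, within the 60,000-80,000 range cited
```

Under these assumptions the count lands squarely in the 60,000 to 80,000 range Tony cites, which is why the management problem shifts from instantiating VMs to operating them at scale.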

Gardner: Are you yet at the point where you want to be able to manage the VMs that you have under your own control, and perhaps start to deploy virtualized environments and workloads in someone else’s cloud and make them managed and complementary?

Cass: There are two aspects to that. The resources in our community are at other sites, and all of the sites are very independent. They are also academic environments. So, they are exploring things in their own way as well. At the moment, we're looking at how you can reliably send a virtual image that's generated at one place to another site.

Amazon does this, but there are tight constraints in the way they manage that cluster, because they built it thinking about this. Universities maybe didn’t build their own cluster in a way that separates that out from some of the other computing they're doing. So, there are security and trust implications there that we are looking at. That will be a thing to collaborate on long-term.

More cost effective

Certainly, if we configure things in our own way, then when we look at a cloud environment, perhaps it will be more cost-effective for us to purchase only the equipment we need for the average workload and buy additional resources from Amazon or other providers. But there are interesting things to explore around the fact that the data is not at Amazon, even if they have the cycles.

There are so many things that we’re thinking about. The one we’re focusing on at the moment is effectively managing the resources that we have here at CERN.

Gardner: Steve Conway, it sounds as if CERN, with its partner network, has a series of what we might call private-cloud implementations, and they're trying to get them to behave in concert at what we might call a public cloud level. That exercise could, as with the World Wide Web, create some de-facto standards and approaches that might, in fact, help what we call hybrid cloud computing moving forward. Does that fairly summarize where we are?

Conway: That’s right. There are going to have to be more rigorous open standards for the clouds. What Tony was talking about at CERN is something that we see elsewhere. People are turning to public clouds today -- "turning to" just meaning exploring, at this point -- for a way to handle overload and surge workloads.

The Internet itself is a pretty high-latency network, if you think of it that way. People are looking to send out the portions of the workload that don't have a lot of communication dependencies, particularly inter-processor communication dependencies, because the latency doesn't support those.

But, we're seeing some smaller and medium-size businesses looking to public clouds as a way to avoid having to purchase their own internal resources, clusters for example, and also as a way of avoiding having to hire experts who know how to operate them. For example, engineering services firms don't have those experts in house today.

Gardner: Back to you, Tony Cass. I know this is still a bit hypothetical, but if the standards were in place and you were able to go to a third-party cloud provider for some of these spikes or dynamically generated workloads that exceed your current on-premises capabilities, would this be a financial boon to you, where you could protect your pricing and decide the right supply-and-demand fit for these extreme computing problems?

Cass: It would certainly be a boon. The possibility is being demonstrated by experiments based at Brookhaven doing simulations that are CPU-intensive, where they don't need much data transfer or data access. They have been able to run simulations cost-effectively with EC2.

Although their cycles, compared to some of the things we're doing, are more expensive, if we don't have to buy all of the resources, we could certainly save money. Another aspect goes beyond money in some sense. If you need to get something finished for a conference, and you're desperately trying to decide whether or not you’ve discovered the Higgs, then it's not a case of "money's no object," but you can get the resources from a cloud much more quickly than you can install capacity at CERN. So both aspects are definitely of interest.

Gardner: Randy Clark, this makes a great deal of sense from the perspective of a large research organization. But, we're not just talking about specific workloads. We're talking about workloads that will be common across many other vertical industries or computing environments. Can you name a few, or mention some from your experience, where we should expect the same sorts of economic benefits to play out?

Different use cases

Clark: What we're seeing is across industries. Financial services is certainly taking a leadership role. There's a lot going on in the semiconductor or electronic industry. Business intelligence (BI) is across industries and government. So, across industries, we see different use cases.

To your point, these use cases are enterprise applications to run the business, and we're seeing that in Java applications, test and development environments, and traditional HPC environments.

That's something driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally and, to your point, in enterprise and HPC applications.

Gardner: Steve Conway, thinking about these large datasets, Randy brought up BI, and that, of course, means warehousing, data analytics, and advanced analytics. A lot of organizations are creating datasets at a scale never anticipated, never mind seen before, things from sensors, mobile devices, network computing, or social networking.

How do we bring together these compute resources, the raw power, with these large datasets? I think this is an issue on which CERN might also be a bellwether, in somehow bringing these large datasets and the compute power into architectural alignment.

Conway: BI is one of those markets that, in its attributes, straddles the world of HPC and enterprise computing, just as financial services does, in the sense that they have workloads that don't have a whole lot of communication dependencies. They don't need networks with very low latency, for the most part.

You see organizations like the University of Phoenix, which has 280,000 online students and has already made this evolution -- in this case, with Platform helping them out -- from clusters to grid computing. Now, they're looking toward cloud computing as a way to take them further.

You also see that beyond the private sector. One of the other active customers that's really looking in that same direction is the Centers for Disease Control (CDC), which has moved from clusters to grid computing.

What you're seeing here is people who have already stepped through the earlier stages of this evolution. They've gone from clusters to grid computing for the most part and now are contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.

Gardner: Tell us about some of the solutions that you now need to bring to market, or are bringing to market, around management and other issues. Where have you found that the rubber hits the road, in terms of where people can take this in real time? Rather than talking about hypotheticals, what's now possible when it comes to moving from cluster and grid to the revolution of cloud?

Interaction of technologies

Clark: What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic. It's the ability to add virtualization on the resource side, and then, on the server side, to make it Internet-accessible, have a service catalog, and move from providing IT support to truly delivering IT as a competitive service.

The state of the art is that you can get the best of Amazon -- ease of use, cost, and accessibility -- with the configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Gardner: Randy, what are some typical ways that you're seeing people getting started, when they want to make these leaps from evolutionary progression to revolutionary paybacks? Where do they start making that sort of catalytic difference?

Clark: The evolution is the technology, as Steve said. The revolution is in the approach architecturally to how to get to that new spot.

Taking a step back, we see customers thinking architecturally about how they want to have that management layer. What is that management layer going to mean to them going forward? And can they quickly identify a set of applications and resources and get started?

So, there is an architecture piece to it, thinking about what the future will hold, but then there is a very pragmatic piece -- let's get going, let's engage, let's build something and be able to scale that out over time. We saw that approach in grid computing. We're encouraging folks to think, but then also to get started.

Gardner: Tony Cass at CERN, what are your next steps? Where would you expect to be heading next as you explore the benefits and possible real-world opportunities?

Cass: We’re definitely concentrating for the moment on how we exploit our resources effectively here. The wider benefits we'll have to discuss with our community.

Gardner: What would you like to see happen next?

Focusing on delivery

Cass: What I would like to see happen next is a definite cloud environment at CERN, where we move from something that we're thinking about to something that is in operation, where we have the ability to use resources that aren’t primarily dedicated to physics computing to deliver cycles to the experiments. I'd like to see a cloud, a dynamically evolving environment, in our computer center. We’re convinced it's possible, and delivering it is what we’re focusing on.

Gardner: Steve Conway, where do you see things headed next? What are the next steps that we should look for, as we move from that evolutionary progression to more of a revolutionary productivity?

Conway: It's along a couple of dimensions. One is the dimension of people actually working in these environments. In that sense, the CERN-Platform collaboration is going to help drive the whole state of the art forward over the next period of time.

The other one, as Randy mentioned before, is that the evolution of standards is going to be important. For example, right now, one of the barriers to public-cloud computing is vendor lock-in, where the clouds -- the Amazons, the Yahoos, and so forth -- are not necessarily interoperable. People are a little bit concerned about trusting their data there. The evolution of standards is going to accelerate this trend.

Gardner: Why don’t I give the last word today to Randy? Tell us about some information that's available out there for folks who are looking to explore and take some first steps toward this more revolutionary benefit.

Clark: I'd encourage everybody to visit our website. There are a number of white papers, webinars, and webcasts that we've done with other customers to highlight some other use cases within development, test, and production environments. I'd point people to the resource page on our website www.platform.com.

Gardner: I want to thank our guests. This has been a very interesting discussion, and I certainly look forward to following what CERN does, because I do think that they’re going to be a leader in terms of what many others will end up doing in B2B cloud computing.

Thank you to Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN. Thank you, sir.

Cass: Thank you.

Gardner: And also a good, big thank you to Steve Conway, Vice President in the High Performance Computing Group at IDC. Thank you, Steve.

Conway: Thanks.

Gardner: And also, of course, thank you to Randy Clark, Chief Marketing Officer at Platform Computing.

Clark: Thank you for the opportunity.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast on what likely outcomes we can expect from cloud computing and architecture, on the progression from grid to cloud computing, and moving into a more revolutionary set of benefits. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Transcript of a BriefingsDirect podcast on the move to cloud computing for data-intensive operations, focusing on the work being done by the European Organization for Nuclear Research. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

BriefingsDirect Analysts Discuss Ramifications of Google-China Dust-Up over Corporate Cyber Attacks

Edited transcript of a BriefingsDirect Analyst Insights Edition podcast, Volume 50, on what the fallout is likely to be after Google's threat to leave China in the wake of security breaches.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 50. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events with a panel of industry analysts and guests, comes to you with the help of our charter sponsor Active Endpoints, maker of the ActiveVOS business process management system.

Our topic this week on BriefingsDirect Analyst Insights Edition focuses on the fallout from Google’s threat to pull out of China, due to a series of sophisticated hacks and attacks on Google, as well as a dozen more IT companies. Due to the attacks late last year, Google on January 12th vowed to stop censoring Internet content for China’s web users and possibly to leave the country altogether.

This ongoing tiff between Google and the Internet control authorities in China’s Communist Party-dominated government has uncorked a Pandora’s box of security, free speech, and corporate espionage issues. There are human rights issues and free speech issues, questions about China’s actual role, trade and fairness issues, and the point about Google’s policy of initially enabling Internet censorship and now apparently backtracking.

But, there are also larger issues around security and Internet governance in general. Those are the issues we’ll be focusing on today. So, even as the US State Department and others in the US federal government seek answers on China’s purported role or complicity in the attacks, the repercussions on cloud computing and enterprise security are profound and may be long-term.

We’re going to look at some of the answers to what this donnybrook means for how enterprises should best protect their intellectual property from such sophisticated hackers as government, military, or quasi-government corporate entities, and whether cloud services providers like Google are better than your average enterprise or even medium-sized business at thwarting such risks.

We'll look at how users of cloud computing should trust or not trust providers of such mission-critical cloud services as email, calendar, word processing, document storage, databases, and applications hosting. And, we’ll look at how enterprise architecture, governance, security best practices, standards, and skills still need to adapt to meet these new requirements from insidious world-class threats.

So, join me now in welcoming our panel for today’s discussion. Welcome to Jim Kobielus, senior analyst at Forrester Research. Hello, Jim.

Jim Kobielus: Hi Dana. How are you, buddy?

Gardner: Jason Bloomberg, managing partner at ZapThink.

Jason Bloomberg: Hi. Glad to be here.

Gardner: Jim Hietala, Vice President for Security at The Open Group.

Jim Hietala: Hello, Dana. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Gardner: Elinor Mills, senior writer at CNET. Hello, Elinor.

Elinor Mills: Hi.

Gardner: And Michael Dortch, Director of Research at Focus.

Michael Dortch: Hi, Dana, and greetings, everyone.

Gardner: Thanks. Great having you with us, Michael.

Elinor, let me start with you. You’ve been covering Internet security, and even Google specifically, for several years now. When we think of security, we often think of teenage hackers or lowbrow malware and pesky pop-ups, but do you think that this Google-China finger-pointing business has, in a sense, changed the way security is viewed?

Pointing fingers

Mills: Oh, absolutely. We’ve got a huge first public example of a company coming out and saying not only that they've been attacked -- companies don’t ever want to admit that, and it’s all under the radar -- but also that they’re pointing fingers. Even though they're not specifically saying, "We think it’s the Chinese state," they think enough of it that they're willing to threaten to pull out of the country.

It’s huge and it’s going to have every company reevaluating what their response is going to be -- not just how they’re going to do business in other countries, but what is their response going to be to a major attack.

Gardner: Does this mean that the companies, enterprises specifically, need to rethink both security for what you'd call criminal activity, but now think at a higher level -- higher level being government versus government?

Mills: Yes, if they’re big companies -- mid-size companies maybe not so much. Bigger companies have been targeted with espionage for a while, especially if they have any kind of technology that China or any other country might want. I think there's going to be more emphasis on it. They’re going to have to think about it. For smaller companies, it’s not going to be as much of a problem.

Gardner: Jim Kobielus, do you view this as a big issue or is this more of the same? Have the folks that you deal with, who are protecting their data and information, been aware of these threats? Is this more of a public relations problem than a real one?

Kobielus: I won’t say it’s just a public relations problem. It is a real one. If you’re going to be a multinational firm -- I've heard the term "supernational" used as well -- you’re not above the laws and governmental structures of the nations within which you operate. It's always been this way. This is a sovereign nation, and you're subject to their laws.

If you’ve been a multinational firm before, or if you wish to be one, you’ve got to play by whatever rules are imposed upon you to operate in these spheres. One of the key issues for Google is whether they want to continue to be a business that’s growing in this particular market, subject to whatever rules are laid down, whether they want to be a crusader for civil rights, human rights, whatever, in the Western context, or if they’re trying to be both. It means they’re going to have to contend with the government of the People’s Republic of China on their own turf -- and good luck there.

Gardner: Don’t you think, Jim, that these issues transcend national boundaries or even laws that govern as a particular sovereign nation? If your servers are in one country, why should it be bound by the laws in another?

Kobielus: Well, your servers are physically hosted somewhere. Your access is from people, end users, in many nations that are trying to access whatever services you provide from those physically hosted servers.

So, your users and your servers are subject to the laws and the firewalls and security constraints and so forth in the various nations within which you will physically operate, as well as where your supply chain and your customer base will physically operate. None of these segments, these nodes, in this broader value chain are free floating in space like the elevated platforms in the Jetsons.

Wakeup call?

Gardner: I think Google is going to perhaps challenge the way you’re looking at this. It should be interesting to see how it pans out. Jason Bloomberg, does this provide some sort of a wakeup call for enterprises and service providers as well about how they architect? Do they need to start architecting for a larger class of threats?

Bloomberg: It’s not as big of a wakeup call as it should be. You can ask yourself, "Is this an attack by some small cadre of renegade hackers, or is this an attack by the government of the People’s Republic of China?" That’s an open question at this point.

Who is the victim? Is it Google, a corporation, or the United States? Is it the western world that is the victim here? Is this a harbinger of the way that international wars are going to be fought down the road?

We’ve all been worried about cyber warfare coming, but maybe we don’t recognize it when we see it as a new battlefield. It's the same as with terrorism. It’s not necessarily clear who the participants are. We have this 18th-century view of warfare, where two armies meet on the battlefield and slug it out with the weapons of the day. But, terrorism has introduced new types of weapons and new types of battlefields.

Now we have cyber warfare, where it’s not even necessarily clear who the perpetrator is, who the victim is, or who the offended party is. This is a whole new context for conflict in the world.

When you place the enterprise into this context, well, it’s not necessarily just that you have a business operating within the context of a government and subject to that government's particular laws. You have the supernational, as Jim was talking about, where large corporations have to play in multiple jurisdictions. That’s already a governance challenge for these large enterprises.

Now, we have the introduction of cyber warfare, where we have concerted professional attacks from unknown parties attacking unknown targets and where it’s not clear who the players are. Anybody, whether it’s a private company, a public company, or a government organization is potentially involved.

They may not even fully know how involved they are or whether or not they are being targeted. That basically raises the bar for security throughout the entire organization. We’ve seen this already, where perimeter-based security has fallen by the wayside as being insufficient.

Sure, we need firewalls, but even though we have systems inside our firewalls, it doesn’t mean they are secure. A single virus can slip through the firewall with no problem at all. We already have this awareness that every single system on our network has to look out for itself and, even then, has levels of vulnerability. This just takes it to the national level.

Kobielus: But, there has always been corporate espionage, and there’s always been vandalism perpetrated by companies against each other through subterfuge, and also by companies or fronts operating as the agents of an unseen foreign power. This is what the Germans did in this country before World War II to infiltrate, and what the Soviet Union did after World War II.

This is international realpolitik as usual, but in a different technological realm. Don’t just focus on China. Let’s say that Google had a data center in Venezuela. They could just as easily have that expropriated by Hugo Chavez and his government. In China, that’s a possibility too.

Nothing radically new

What I’m saying is that I don’t see anything radically or fundamentally new going on here. This is just a big, powerful, and growing world power, China, and a big and growing world power on the tech front, Google, colliding.

Mills: They have so much data. They’re becoming a service provider for the world. It’s not just their data that’s being targeted. You’ve got the City of Los Angeles, you’ve got DC, other government entities, moving onto Google Apps. So, the end target in the cloud is different than just the employees of one company.

Dortch: That challenge puts Google in the very interesting position of having to decide. Is it a politically neutral corporation, or is it a protector of the data that its clients around the world -- not just here, and not just governments but corporations -- have entrusted to it? Or, is it going to use the fact that it is a broker of all that data to throw its muscle around and take on governments like China’s in debates like this?

The implications here are bigger than even what we’ve been discussing so far, because they get at the very nature of what a corporation is in this brave new network world of ours.

And, this is taking place against the backdrop where the Supreme Court just decided that corporations in the United States have the same free speech rights in political campaigns as individuals. We're not clear at all on what this is going to mean for how the entity called a corporation is perceived, especially in the cloud.

Gardner: Thank you, Michael. Jim Hietala, help me understand, from your perspective, is this a game-changing event or is this more business as usual when it comes to corporate security?

Hietala: In terms of the visibility it’s gotten and the kinds of companies that were attacked, it’s a little bit game-changing. From the information security community perspective, these sorts of attacks have been going on for quite a while, aimed at defense contractors, and are now aimed at commercial enterprises and providers of cloud services.

I don’t think that the attacks per se are game-changing. There’s not a lot new here. It was an attack against a browser that was a couple of revs old and had a vulnerability. The way in which the company was attacked isn’t necessarily game-changing, but the political ramifications around it and the other things we’ve just been talking about are what make it a little game-changing.

Gardner: I’d like to understand more about Michael Dortch’s point about the cloud providers and Elinor's as well. Should people think about a cloud provider as the best defense against these things, because they are current and they’ve got the power of scale they need to make this secure or their business itself is undermined?

Or, is this something that’s best done at the individual level, company by company, firewall by firewall? Does anyone have some thoughts about that?

Dortch: I’m reminded of what Ronald Reagan famously said, “Trust, but verify.” It’s one of those things where the cloud becomes a part of a good defense, but you can’t place all of your eggs in any one basket.

Combining resources

Companies that are doing business internationally and that worry about this sort of thing -- and they all should -- are going to have to combine cloud-based resources from reputable companies with documented protections in place with other protections, in case the first line of defense fails or is challenged in some major way.

Kobielus: In some ways, we all perceive what a cloud provider like Google needs to be regarded as in international law. It’s almost like a cyber Switzerland. Basically, in another metaphor, it’s almost like an offshore bank for your data and your other assets, in the same neutral role that Switzerland has played through the years, including during World War II for secreted Nazi assets.

In other words, it’s somehow a sovereign state in its own right, with the full rights and privileges accruing thereto. I don’t think anybody is willing to take it that far in international law, but I think there is this perception that, for cloud providers like Google to really realize their intended mission, there needs to be some change in the international governance of the sort of assets that transcend nation states.

Bloomberg: You could actually think of that as a reductio ad absurdum argument, because there isn’t going to be such a change. Cloud environments do not have that sort of power or capability and, if anything, cloud environments reduce the level of security.

They don’t increase it for the very reason that we don’t have a way of making them sovereign in their own right. They’re always not only subject to the laws of the local jurisdiction, but they’re subject to any number of different attacks that could be coming from any different location, where now the customers aren’t aware of this sort of vulnerability.

So, “Trust, but verify,” is a good point, but how can you verify, if you’re relying on a third party to protect your data for you? It becomes much more difficult to do the verification. I'd say that organizations are going to be backing away from cloud, once they realize just how risky cloud environments are.

Mills: Microsoft’s general counsel Brad Smith this week gave a keynote at a Brookings Institution forum, and he talked about modernizing and updating the laws to adapt specifically to the cloud. That included privacy rights under the Electronic Communications Privacy Act being more clearly defined, updating the Computer Fraud and Abuse Act, and setting up a framework so that differences in the regulations and practices of various countries can be worked out and reconciled.

Gardner: What happens if you are a small to medium-sized business and you might not have the resources to put into place all the security you need to deal with something like a China or Venezuela, or perhaps some large company that’s in another country that wants to take your intellectual property? Are you better going to a cloud provider and, in a sense, outsourcing security? Jim Hietala, does that make sense for a small to medium-sized business?

Hietala: I don’t think you can make that case yet today. I don’t think there is a silver-bullet cloud provider out there with security superior enough to stake out that position. All enterprises still are going to have to be at the top of their game in terms of protecting their assets, and that extends to small and medium businesses.

At some point, you could see a cloud provider stake out that part of the market to say, "We’re going to put in a superior set of controls and manage security to a higher degree than a typical small-to-medium business could," but I don’t see that out there today.

Waiting for disaster

Dortch: All of us who’ve been doing this for a while, I think, will agree that where security is concerned, especially where cyber security is concerned -- at least in North America, where I’m most familiar -- companies tend not to talk about it or do anything until there is some major catastrophe.

Nobody buys insurance until the house next door to theirs burns down. So, from that perspective, this event could be useful. In terms of protecting their data, one of the issues that incidents like this raise is exactly how much corporate data is already in the cloud.

Many small businesses outsource payroll processing, customer relationship management (CRM), and a whole bunch of things. A lot of that stuff is outsourced to cloud service providers, and companies haven’t asked enough questions yet about exactly how cloud providers are protecting data and exactly how they can assure customers that nothing bad is going to happen to it.

For example, if their servers come under attack, can they demonstrate credibly how data is going to be protected? These are the types of questions that incidents like this can and should raise in the minds of decision-makers at small and mid-sized businesses, just as they're starting to raise these issues, and have been raising them for a while, among decision-makers at larger enterprises.

Kobielus: I think what will happen is that some cloud providers will increasingly be seen as safe havens for your data and for your applications, because (A) they have the strong security, and (B) they are hosted within, and governed by, the laws of nation states that rigorously and faithfully try to protect this information, and assure that the information can then be removed -- transferred out of that country fluidly by the owners, without loss.

In other words, it's like the Cayman Islands of the cloud -- that offshore banking safe haven you can turn to for all this. Clearly, it's not going to be China.

Gardner: We’ve seen in the history of the United States -- and, of course, the business world at large -- that whenever threats elevate to a certain level, the government steps in. We've seen it with piracy, border controls, taxation, trade mandates, freedom pacts, and so forth. Whenever a threat arises, businesses get up and say, "Hey, we pay taxes. Uncle Sam, please come in and save us," whether it's through the navy or some technology.

Should we expect that, if we come to understand that this was an attack against American business interests from a foreign government of some kind, that it's up to the government to solve the problem? How about governments in general, maybe it's the United Nations who steps in? Who is the ultimate governor of what happens in cyber space?

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Dortch: Dana, in 2007, the National Academies of Science issued a cyber security report, and it included ten provisions that, at that time at least, were looked at as potentially the foundation for a cyber security bill of rights. Maybe it's time to reawaken discussions like that. Maybe what's needed is the cyberspace equivalent of the United Nations.

This is a lot of heavy lifting that we're talking about, and businesses have problems to solve and threats to address today. So your question begs another one: how do we get to the stage we need to be at, where there can be trusted offshore-equivalent databanks and all of that? And, what do we do in the meantime? I'm not smart enough to have answers to those questions, but they're really interesting.

We know the game

Kobielus: At a governmental level, obviously there will always be approaches and tools available to any sovereign nation -- treaties, negotiations, war, and so forth. We all know that. Clearly, we all know the game there.

In terms of who has responsibility and how will governance best practices be spread uniformly across the world in such areas of IT protection, it's going to be some combination of multilateral, bilateral, and unilateral action. For multilateral, the UN points to that, but there are also regional organizations. In Southeast Asia there is ASEAN, and in the Atlantic there is NATO, and so forth.

So, there is going to be a combination of all that. For this administration and subsequent administrations in the U.S., it’s just a matter of putting together a clear agenda for trying to influence the policies, practices, and enforcement within China and other nations that may prove unreliable in terms of protecting the interests of our businesses.

Dortch: And, Secretary of State Clinton’s director of innovation -- I believe that's his title -- has already said publicly that it's a linchpin of our negotiating strategy with China and other countries.

Just as we, as a country, are an advocate for human rights, we're increasingly and more overtly advocating that other countries' citizens have free access to the Internet and basically have the cyber equivalent of human rights. That's going to play out in some very interesting ways as it becomes a larger part of our global diplomatic effort.

Kobielus: Keep in mind that the UN had a human rights declaration in 1948. China signed up, the Soviet Union signed up, and it didn’t make a whole lot of difference in terms of how they treated their own people over time. Keep in mind that such declarations are fine and dandy, but often don’t have much impact on the ground.

Gardner: So, enforcement is important. What we’ve seen so far is the enforcement of the marketplace, and I think that's what Google is up to in many respects. They’re saying, "Listen, we are a big enough company. We have such sophisticated technology and our price points for our services are so low that you would be at a disadvantage as a competitive nation not to have us working inside of your market, China."

Then, China says back to Google, "We are potentially, if not already, the biggest Internet market in the world, so don’t you think you have to adhere to our dictates in order to play ball in our court?" So, there is sort of a tussle between market powers. Is that going to be the best way for these issues to be resolved?

Kobielus: It’s going to have to be resolved in the China context. They are the Middle Kingdom. They’ve seen themselves as the center of the universe, and it's not just me saying that; it's all manner of China scholars. This is not fundamentally any different from the way in which China has centralized bureaucracy and governance for over 2,000 years.

Gardner: Jason Bloomberg, do you think that the traditional free market -- the powerful interests and the money -- are enough to balance the risks associated with security in this newest age?

Who decides "enough?"

Bloomberg: When you say "enough," the question is who decides what is enough. We have these opposing forces. One is that information should be free, and the Internet should be available to everybody. That basically pushes for removing barriers to information flow.

Then you have the security concerns that are driving putting up barriers to information flow, and there is always going to be conflict between those two forces. As increasingly sophisticated attacks develop, that pushes the public consensus toward increasing security.

That will impact our ability to have freedom, and that's going to continue to be a battle that I don’t see anybody winning. It's really just going to be an ongoing battle as technology improves and as the bad guys' attacks improve. It's going to be an ongoing battle between security and freedom and between the good guys and the bad guys, as it were, and that's never going to change.

Gardner: Now, taking up on your point, Jason Bloomberg, about this being a spy-versus-spy kind of world, that's been that way so far. We thought about how governments might come in. Large corporations can play their role. Cloud providers might have to step in and offer some sort of an SLA-based protection or outsourced security opportunity of some kind.

What about going in the other direction? What if we go down to the individual who says, "If I'm going to play in the cloud or in this world-class cyber warfare environment, I want to have high encryption. I want to be able to authenticate myself in the best way possible. Therefore, I’ll give up some convenience. I might even pay a price, but I want to have the best security around my identity and I want to be able to play with the big boys, when it comes to encryption and authentication?"

We don’t really have an opportunity for those people to say, "I want to exercise security at an individual level." Jim Hietala, is there anything like that out there to get them to move towards the individual level of self-help, when it comes to high levels of security?

Hietala: Large enterprises are going to have to be responsible for the security of their information. I think there are a lot of takeaways for enterprises from this attack. If you're talking about specific individuals, it’s almost hopeless, because your average individual consumer doesn’t have the level of knowledge to go out and find the right solutions to protect themselves today.

So, I'll focus on the large enterprises. They have to do a good job of asset inventory, know where, within their identity infrastructure, they're vulnerable to this specific attack, and then be pretty agile about implementing countermeasures to prevent it. They have to have patch management that's adequate to the task of getting patches out quickly.

They need to do things like looking at the traffic leaving their network to see if people are already in their infrastructure. These Trojans leave traces of themselves, when they ship information out of an organization. When people really understand what happened in this attack, they can take something away, go back, look at what they are doing from a security standpoint, and tighten things up.
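Hietala's point about watching traffic leaving the network can be made concrete with a minimal sketch. Everything here is illustrative only -- the flow-log format, field layout, allow-list, and threshold are invented for this example, not taken from any product or practice discussed above; real deployments would draw on NetFlow or proxy logs and far richer heuristics:

```python
# Illustrative sketch: flag outbound flows to destinations outside a known
# allow-list that moved a nontrivial amount of data -- a crude proxy for
# the beaconing traces a Trojan leaves when it ships information out.
# The log format and allow-list below are hypothetical.

def parse_flow(line):
    """Parse one whitespace-separated flow record:
    timestamp source destination port bytes"""
    ts, src, dst, port, nbytes = line.split()
    return {"ts": ts, "src": src, "dst": dst,
            "port": int(port), "bytes": int(nbytes)}

def suspicious_flows(lines, allowed_dests, min_bytes=1024):
    """Return flows to destinations outside the allow-list that moved
    at least min_bytes."""
    flagged = []
    for line in lines:
        flow = parse_flow(line)
        if flow["dst"] not in allowed_dests and flow["bytes"] >= min_bytes:
            flagged.append(flow)
    return flagged

if __name__ == "__main__":
    log = [
        "2010-01-12T09:00 10.1.2.3 mail.example.com 25 2048",
        "2010-01-12T09:01 10.1.2.4 203.0.113.77 443 900000",
        "2010-01-12T09:02 10.1.2.5 crm.example.com 443 512",
    ]
    for flow in suspicious_flows(log, {"mail.example.com", "crm.example.com"}):
        print(flow["src"], "->", flow["dst"], flow["bytes"], "bytes")
```

The pattern, not the code, is the point: compare where data is actually going against where it is expected to go, and investigate the rest.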

If you're talking about individuals putting things in the cloud, that’s a different discussion that doesn’t seem real feasible to me to get them to the point where they can secure their information today.

Centralized directory

Gardner: Jim, I was getting back to what I used to hear almost 20 years ago in the messaging space, when we first started talking about directories, that the directory is only as good as the authentication and the information and verification.

Don’t we need a centralized directory that we can bounce off these credentials and make sure that they are valid and authenticated? But, there was no central place to do that. Is it time for the government or some other agency or organization to come in and create that über directory for that large-scale global authentication capability?

Kobielus: You're talking about identity systems, with a web of trust, PKI and so forth. We've been talking about that for years. About five years ago, I was with a company that was trying to build federated cross-industry identity management for aerospace and defense, one North Atlantic industry, and even that was frightfully complicated. It probably still hasn’t gotten off the ground.

Imagine creating a similar federated directory, with all the stronger authentication and encryption and so forth, for all industries within the US, let alone worldwide. It’s not going to happen. It’s just a huge engineering nightmare, putting together the trust relationships and working out all the interchange and interoperability issues. It’s just overkill, much more trouble than it’s worth.

Gardner: Too much federation. But what if there are only a handful of major cloud providers? Maybe it’s Google, Yahoo, Amazon, and Microsoft -- and I've just thrown those out. It could be a number of others. They might have the market heft or the technological wherewithal to enforce and deliver such an authentication and federated directory into existence.

Is anybody thinking like I am, that maybe cloud computing is different, that we can start to actually use the scale of these cloud providers to accomplish these large security requirements?

Dortch: You know, Dana, people change a lot more slowly than technology does. Just a few short months ago, a lot of us were outraged, when it turned out that a handful of major telephone service providers had apparently been giving information to the government without the knowledge or consent of the subscribers whose information was manipulated. At least, that's what the published report seemed to indicate.

I don’t see the people running cloud-computing companies being radically different from the people who run phone companies, and I don’t see them being, a priori, any less subject to influence by their own governments, bribes, threats, or anything else than the people who run the phone companies. I think it’s a good idea, but it’s fraught with the same level of peril.

Kobielus: In fact, look at the last nine years since 9/11, and you can see in all the articles and stories how telcos have bent over backwards to allow the Feds to come in and surveil their users and subscribers, and to abscond with call detail records to monitor terrorists' and other people's calling patterns, quite often without even using a search warrant. In other words, it's exactly what he said. How can you trust the carriers to safeguard our privacy, when they so easily succumb to such government pressure?

Gardner: So, these are very big issues that will impact us all as individuals and citizens, within our national interests as well as our companies. Yet no one seems to have a good sense -- and there are some very bright people on the line today -- of how to even go about defining the problem, never mind solving it.

Identity registrars

Kobielus: Dana, there's another point you raised: why don't we just let the providers become sort of the über identity management registrars and then federate among themselves?

Remember about 10 years ago -- I'm getting old, I can remember back 10 or more years -- Microsoft and its MSN Passport fiasco? Microsoft was saying, "We want to be everybody's identity management hub." Then the huge objection raised was, "Microsoft wants to control our identities." Then things like the Liberty Alliance and all the others sprang up to say, "No, no, there must be a decentralized and better way, so no one company can control all of our online identities."

That whole Passport idea was kind of cool in some ways, but it was shot down completely and definitively, because the culture just said, "No, we cannot allow one group to have that much power."

Gardner: People simply didn't trust Microsoft at that point, when it was at perhaps the apex of its power, right?

Kobielus: Exactly. Now, Google is at the apex of its power. Would we trust Google in the same capacity? Look at China. It will probably become the largest economy in the world in the next 25 years. Can we trust them? No, of course not.

When you have too much power concentrated in one place, people naturally sort of revolt. "No, wait, wait. I don't want to give them any more powers than they already have. Let's rethink this whole 'give them control of my identity' thing."

Dortch: It was the desire to get away from too much centralized control that led to the invention of the PC in the first place. It's important to keep that in mind in this context.

Gardner: So, if you truly want to be safe, you should just turn off your PC and start sending out mail at 44 cents a pop.

Kobielus: And then you're not safe from anthrax, you know.

Gardner: Let's go around our panel. We’re almost out of time. I’d be interested now in hearing some predictions about what you think is going to happen next. We've done a great job at defining the scope, depth, and complexity of this problem set, a very complex undertaking. But, it seems like it's not something that's going to go away. What do you think is going to happen next, Jim Kobielus?

Kobielus: I don't think Google is going to leave China. I even saw a headline today; I think it said that they were going to stay in China and somehow try to work it out with the PRC. I don't know where that's going, but fundamentally Google is a business with a "don't be evil" philosophy. They're going to continue to define evil down to those things that don't actually align with their business interests.

In other words, they're going to stay. There's going to be a lot of wariness now to entrust Google's China operation with a whole lot of your IT -- "you" as a corporation -- and your data. There will be that wariness.

Preferred platforms

Kobielus: Other cloud providers will be setting up shop or hosting in other nations that are more respectful of IP -- nations that may not be launching corporate or governmental espionage at US-headquartered properties in China. Those nations will become the preferred supranational cloud-hosting platforms for the world.

I can't really say who those nations might be, but you know what, Switzerland always sort of stands out. They're still neutral after all these years. You've got to hand that to them. I trust them.

Gardner: Jason Bloomberg, what do you think is going to happen next?

Bloomberg: In the short term, the noise is going to die down and it's going to go back to business as usual. Security is going to need to improve, but so are the attacks from the bad guys. It's going to continue until there's the next big attack. And the question is, "What's it going to be, and how big is it going to be?"

We're still waiting for that game changer. I don't think this is a game changer; it's just a skirmish. But if a hacker is able to bring down the Internet, for example, by targeting the DNS infrastructure to the point that the entire thing collapses, that's something that could wake people up to say, "We really have to get a handle on this and come up with a better approach."

Gardner: That's mass vandalism. That doesn't really suit the purposes of some of the types of folks we are talking about. They don't want to bring the Internet down. They simply want to get an advantage over their competitors.

Bloomberg: Well, it really depends. We don't know who the bad guys are and what they’re trying to do. There's no single perspective. There's no single bad guy out there with a single agenda. We just don't know. We don't know what the agendas are.

Gardner: We don't know whether we have a level playing field or not?

Bloomberg: We can count on it not being level.

Gardner: Right. Jim Hietala, what do you see as some of the short- or medium-term next steps?

Hietala: From our perspective, we're starting to see more awareness at higher levels in governments that the threats and issues here are real. They’re here today. They seem to be state sponsored, and they're something that needs to be paid attention to.

Secretary of State Clinton gave a speech just today, where she talked specifically about this attack, but also about the need for nations to band together to address the problem. I don't know what that looks like at this point, but the fact that people at that level are talking about the problem is good for the industry and bodes well for solutions in the future.

Gardner: So, perhaps a free world versus an unfree world, at least in cyber terms, and perhaps the free world would have an advantage, or maybe the unfree world would have an advantage. It's hard to say.

Hietala: I'd agree it's hard to say, but the fact that those discussions are going on is positive.

Gardner: Elinor Mills, any sense of where things are going?

Leading the way

Mills: I'm horrible at predictions, but I'll just throw this out: I think Google is going to get out of China and try to lead some kind of US corporate effort, or be a role model, to try to do business in a more ethical way, without having to compromise and censor.

You'll see a divergence. China and other countries may be pushed more toward limiting and creating their own sort of channel that's government-filtered. The battle is just going to get bigger. We're going to have more fights on this front, but Google may lead the way.

Gardner: Very good. Michael Dortch, where do you see it going?

Dortch: Elinor is at least partly right. Especially if Google leaves China, Baidu is going to rise up as the government-approved version of Google for China and its localities. The very next thing Google will do is forge as strong a working relationship as it possibly can with Baidu. You might see that model replicated across multiple countries in the world.

In the meantime, though, something that -- if I remember correctly -- Astrodienst said almost 30 years ago is important to remember: privacy is fungible. It's like currency. You're going to see individuals, small businesses, and individual corporate entities forging negotiations, deals, relationships, and accommodations that treat privacy and security as currency.

If it costs me a little bit more to do business here, I'm going to think seriously about it. Every once in a while, I'm going to swallow hard and pay the piper.

Gardner: Great. I'm going to throw in my two cents as well. This almost boils down to two giant systems, or schools of thought, that are now colliding at a new point. They've collided at different points in the past, on physical sovereignty, military sovereignty, and economic sovereignty. The competition is between what we might call free-enterprise-based systems and state sponsorship through centralized control systems.

Free enterprise won when it came to the Cold War, but it's hard to say what's going to happen in the economic environment, where China is a different beast. It's state-sponsored, and it's also taking advantage of free enterprise, but it's very choosy about what it allows either one of those systems to do or to dominate.

When you look at Google, it made itself into a figurehead representing what a free enterprise approach can do. It's not state-sponsored or nationalistic; it's corporate-sponsored. So, it will be interesting to see who has the better technology, who has the better financial resources, and ultimately who has the organizational wherewithal to manifest their goals online and win out in the marketplace.

If a state-sponsored effort is better at doing this than a corporate one, well then, it might dominate. But so far, we've seen that the marketplace -- with choice, and by shedding light and transparency on activities -- ultimately allows for free enterprise predominance. Free enterprise can do it better, faster, and cheaper, and that will ultimately win.

I think we're really on the cusp here of a new level of competition, not between countries or even alliances, but between systems: the free enterprise system versus the state-sponsored, centralized, controlled system. It should be very interesting.

I want to thank our guests for today’s discussion. Jim Kobielus, senior analyst at Forrester Research. Thanks, Jim.

Kobielus: Sure.

Gardner: Jason Bloomberg, managing partner at ZapThink. Great to have you.

Bloomberg: My pleasure.

Gardner: Jim Hietala, Vice President for Security at The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: And thank you for joining us, Elinor Mills, senior writer at CNET.

Mills: My pleasure.

Gardner: Lastly, I appreciate your debut here today, Michael Dortch, Director of Research at Focus.

Dortch: It was great fun, and I hope I passed the audition.

Gardner: You did.

Gardner: I also want to thank our charter sponsor for supporting today’s BriefingsDirect Analyst Insights Edition podcast, Active Endpoints. This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Edited transcript of a BriefingsDirect Analyst Insights Edition podcast, Volume 50, on what the fallout is likely to be after Google's threat to leave China in the wake of security breaches. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
