Saturday, February 06, 2010

ISM3 Brings Greater Standardization to Security Measurement Across Enterprise IT

Transcript of a sponsored BriefingsDirect podcast on ISM3 and emerging security standards recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle on Feb. 2, 2010.

We've assembled a panel to examine the need for IT security to run more like a data-driven science, rather than a mysterious art form. Rigorously applying data and metrics to security can dramatically improve IT results and reduce overall risk to the business.

By employing and applying more metrics and standards to security, the protection of IT becomes better, and the known threats can become evaluated uniformly. People can understand better what they are up against, perhaps in close to real-time. They can know what's working -- or is not working -- both inside and outside of their organization.

Standards like Information Security Management Maturity Model (ISM3) are helping to not only gain greater visibility, but also allowing IT leaders to scale security best practices repeatably and reliably.

We're here to determine the strategic imperatives for security metrics, and to discuss how to use them to change the outcomes in terms of IT’s value to the business.

Please join me in welcoming a security executive from The Open Group, as well as two experts on security who are presenting here at the Security Practitioners Conference. I want to welcome Jim Hietala, Vice President for Security at The Open Group. Hi, Jim.

Jim Hietala: Hi Dana.

Gardner: We are also here with Adam Shostack, co-author of The New School of Information Security. Welcome, Adam.

Adam Shostack: Hey, Dana. Great to be here.

Gardner: And also Vicente Aceituno, director of the ISM3 Consortium. Welcome.

Vicente Aceituno: Thank you very much.

Gardner: Now that we have got a sense of this need for better metrics and better visibility, I wonder if I could go to you, Jim. What is it to be a data-driven security organization, versus the alternative?

Hietala: In a sentence, it's using information to make decisions, as opposed to what vendors are pitching at you or your gut reaction. It's getting a little more scientific about gathering data on the kinds of attacks you're seeing and the kinds of threats that you face, and using that data to inform the decisions around the right set of controls to put in place to effectively secure the organization.

Gardner: Is it fair to say that organizations are largely not doing this now?

All over the map

Hietala: It's probably not a fair characterization to say that they're not. A presentation we had today from an analyst firm talked about people being all over the map. I wouldn't say there's a lot of rigor and standardization around the kinds of data that are being collected to inform decisions, but some of that work is going on in very large organizations. There, you typically see a little more mature metrics program. In smaller organizations, not so much. It's a little all over the map.

Gardner: Perhaps it's time to standardize this a little bit?

Hietala: We think so. We think there's a contribution to make from The Open Group, in terms of developing the ISM3 standard and getting it out there more widely.

Gardner: Adam, what, in your perception, is different now in terms of security than say two, three, or four years ago?

Shostack: The big change we've seen is that people have started to talk about the problems that they are having, as a result of laws passed in California and elsewhere that require them to say, "We made a mistake with data that we hold about you," and to tell their customers.

We've seen that a lot of the things we feared would happen haven't come to pass. We used to say that your company would go out of business and your customers would all flee. It's not happening that way. So, we're getting an opportunity today to share data in a way that’s never been possible before.

Gardner: Is it fair to say we are getting real about security?

Shostack: We've been real about security for a long time, but we have an opportunity to be a heck of a lot more effective than we have been. We can say, "This control that we all thought was a really good idea -- well, everyone is doing it, and it's not having the impact that we would like." So, we can reassess how we're getting real, where we're putting our dollars.

Gardner: Vicente, perhaps you could help us understand the application of metrics and data for security with external factors, and then internal. What's the difference?

Aceituno: Well, you can only use metrics to manage internal factors, because metrics are all about controlling what you do and being able to manage the outputs that you produce and that contribute value to the business.

I don’t think it brings a bigger return on investment (ROI) to collect metrics on external things that you can't control. It’s like hearing the news. What can you do about it? You're not the government or you're not directly involved. It's only the internal metrics that really make sense.

Gardner: From your perception, what needs to be a top priority in terms of this data-driven approach to security inside your own organization?

What you measure

Aceituno: The top priority should be to make sure that the things you measure are things that are contributing positively to the value that you're bringing to the business as an information security management (ISM) practitioner. That's the focus. Are you measuring things that are actually bringing value, or are you measuring things that are fancy or just look good?

Gardner: We've heard "fit for purpose" applied to some other aspects of architecture and IT. How does this notion, being fit for purpose, apply to your security efforts?

Aceituno: Basically, we link business goals, business objectives, and security objectives in a way that’s never been done before, because we are painfully detailed when we express the outcomes that you are supposed to get from your ISM system. That will make it far easier for practitioners to actually measure the things that matter.

Gardner: We've been talking fairly generally about metrics and data. Jim, what do we really talk about? What are we defining here? Is this about taxonomy and categories, metadata, all the above -- or is there something a bit more defined that we're trying to measure?

Hietala: There's some taxonomy work to be done. One of the real issues in security is that when I say "threat," do other people have the same understanding? Risk management is rife with different terms that mean different things to different people. So getting a common taxonomy is something that makes sense.

The kinds of metrics we're collecting can be all over the map, but generally they're the things that would guide the right kind of decision making within an IT security organization around the question, "Are we doing the right things?"

Today, Vicente used an example of looking at vulnerabilities found in web applications. A critical metric was how long those vulnerabilities are out there before they get fixed by the different lines of business, which shows how the organization is responding. We're trying to drive that metric down, so that vulnerabilities are open for less time and get fixed more quickly.
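As a concrete sketch of the kind of remediation metric described here, the snippet below computes mean time-to-remediate per line of business. The records, business-unit names, and dates are hypothetical illustrations, not data from any implementation discussed in the podcast.

```python
from datetime import date
from collections import defaultdict

# Hypothetical vulnerability findings: (line_of_business, opened, closed).
findings = [
    ("retail",  date(2010, 1, 4),  date(2010, 1, 18)),
    ("retail",  date(2010, 1, 11), date(2010, 1, 15)),
    ("lending", date(2010, 1, 5),  date(2010, 2, 1)),
]

# Group remediation times (days a finding stayed open) by line of business.
days_open = defaultdict(list)
for lob, opened, closed in findings:
    days_open[lob].append((closed - opened).days)

# Mean time-to-remediate per line of business -- the number to drive down.
mttr = {lob: sum(d) / len(d) for lob, d in days_open.items()}
print(mttr)  # retail averages (14 + 4) / 2 = 9.0 days; lending 27.0 days
```

Trending this number per reporting period, per line of business, is what shows whether remediation is actually getting faster.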

Gardner: Adam, in your book, I believe you addressed some of these issues. How do you look at metrics? How do you characterize them? I know we could go on for an hour about that, but at the high level ...

Shostack: At the high level, Vicente’s point about measuring the things you can control is critical. Oftentimes in security, we don’t like to admit that we've made mistakes and we conceal some of the issues that are happening. A metrics initiative gives you the opportunity to get out there and talk about what's going on, not in a finger pointing way, which has happened so often in the past, but in an objective and numerically centered way. That gives us opportunity to improve.

Gardner: I suppose this is a maturation of security. Is that fair to say that we're bringing this to where some other aspects of business may have been, in say manufacturing, 30, 40, or 50 years ago?

Learning from other disciplines

Shostack: I think that's a fair statement. We're learning a lot from other fields and disciplines. Elements of that are going to be uncomfortable for some practitioners, and there are elements that will really enable practitioners to connect what they are doing to the business.

Gardner: The stakes here, I imagine, are quite high. This is about the trust you have with your partners, your customers, and the brand equity you have in your company. These are not small considerations.

Hietala: No, they're big considerations, and they do have a big effect on the business. Also, an important output of a good metrics program is that it gives you a different way to talk to your senior management about the progress that you're making against the business objectives and security objectives.

That’s been an area of enormous disconnect. Security professionals have tended to talk about viruses, worms, relatively technical things, but haven't been able to show a trend to senior management that justifies the kind of spending they have been doing and the kind of spending they need to do in the future. Business language around some of that is needed in this area.

Gardner: I have to imagine, too, that if we formalize, structure, and standardize, we can make these repeatable. Then there's not that risk of personnel leaving and taking a lot of the tribal knowledge with them. Is that fair?

Hietala: That's fair as well. That's something that came out today in some of the discussions. Documenting the processes and what you're doing makes it easier to transition to new personnel and that kind of thing.

Gardner: Vicente, tell us a little bit about the ISM3 Consortium, its history, and what it is that you are principally involved with at this time.

Aceituno: The main task of the ISM3 Consortium so far was to manage the ISM3 standard. I'm very happy to say that The Open Group and ISM3 Consortium reached an agreement and, with this agreement, The Open Group will be managing ISM3 from here on in. We'll be devoting our time to other things, like teaching and consulting services in Spain, which is our main market. I can't think of anything better than for ISM3 to be managed from The Open Group.

Gardner: Adam, do you have a sense of this particular standard, the ISM3? Where do you see it fitting in?

Shostack: Actually, I don't have a great sense of where it fits in. There are a tremendous number of standards out there, and what I heard today I am very impressed by. I'm going to go read more about it, but it's not something I have a lot of operational exposure to that really lets me say, "This is where it's working for me."

Gardner: Jim, do you have a sense of where it fits in, and perhaps for those of our listeners who are not that familiar, can you give a quick tutorial?

Business value approach

Hietala: Sure. In terms of where I'd place it in the information security community, it adds a business value approach to information security, a metrics and maturity model approach that you didn't necessarily get from some of the other standards that are out there.

I'd also say that it's approachable from the standpoint that it's geared toward having different target maturity levels for different kinds of enterprises. That makes sense.

One of the things we talk about is that there's an 80-20 rule. You get 80 percent of the benefit from a subset of security controls. You can tailor ISM3 to the organization and get some benefit out of it, without setting the bar so high that it's unachievable for a mid-size or small business. That's the way I would characterize it.

Gardner: I think it's really important that these things are developed and brought into an organization at a practical level for those people who are in the trenches and are down there doing the work. Is there anything about this particular standard that you think is really not academic, but something quite effective in practice?

Hietala: Well, it spans the breadth of information security. You have metrics and control approaches in various areas and you can pick a starting point. You can come at this top-down, if you're trying to implement a big program. Or, you can come at it bottom-up and pick a niche, where you know you're not doing well and want to establish some rigor around what you're doing. You can do a smaller implementation and get some benefit out of it. It's approachable either way.

Gardner: Adam, any thoughts about this issue of practicality when it comes to security, something that's more scientific and not perhaps a mysterious dark art of some kind?

Shostack: I really liked seeing the practical extracted. "Here are the things we're measuring. Here is why it matters to the business." That's what Vicente was talking about with regard to ISM3 throughout the day. Getting away from these very broad, hand-wavy measures of risk or improvement, down to, "We are measuring this precise thing and this is why we need it to improve," is refreshing.

Gardner: Vicente, do you have any examples of organizations that have taken a lead on this, and what sort of results have they been able to achieve?

Aceituno: At this moment, the one organization that has implemented ISM3 is Caja Madrid, which is the fourth-largest financial institution in Spain, and they had very impressive results. We found six times as many vulnerabilities. We were making more than twice as many ethical hacking tests. We could bring down the cost of unethical hacking by a big percentage, and we were getting more vulnerabilities fixed.

It was easier to communicate with other teams, and we had metrics to understand the results we were getting from making changes in the process. We have knowledge management that allows us to change the whole team of people and still carry on doing exactly the same thing in the same way that we were doing it.

I think that Caja Madrid is very happy and, actually, the director of security at Caja Madrid is very impressed with ISM3.

Gardner: Who typically are the folks who would be bringing this into an organization? I suppose there is some variability and the organizational landscape is still quite diverse, but is there a methodology in terms of how to bring this into an organization?

Works either way

Aceituno: It could work either way. Either you're a top-level manager, the CISO, or whatever, and you can think, "Okay, I want to do this" and you can implement a top-down implementation of the method.

Or, you can have no support from higher management and understand that you need to put in some rigor for management and you can think, "Okay, I'm going to organize my own work around this framework."

It can work either way, as Jim was saying before. You can implement it top down or bottom up and get benefit from it.

Gardner: Jim, this is a specific Open Group question. Does this work well inside of some other framework activity or architectural initiatives? Are there some other ITIL related activities? Does this have a brotherhood, if you will, in terms of standards and approaches that The Open Group's heritage is a bit more attuned to?

Hietala: I don't know that there's a direct statement you can make about how well this will work with an enterprise architecture framework or something like that. This is more about managing security objectives and the operational things that you're going to do in an information security frame within an enterprise.

It's process-oriented. So, in terms of working well with other things, it works well with ITIL. Some of the early implementations have suggested that there is a good synergy there. I'll leave it there.

Gardner: Adam, any thoughts, from your perspective, on how this fits into some larger initiatives around security?

Shostack: We've seen over the last few years that those security programs that succeed are the ones that talk to the business needs and talk to the executive suite in language that the executives understand.

The real success here and the real step with ISM3 is that it gives people a prescriptive way to get started on building those metrics.

You can pick it up and look at it and say, "Okay, I'm going to measure these things. I'm going to trend on them. And I'm going to report on them."

As we get toward a place, where more people are talking about those things, we'll start to see an expectation that security is a little bit different. There is a risk environment that's very outside of people's control, but this gives people a way to get a handle on it.
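The "measure, trend, report" loop described here can be sketched in a few lines. The metric name and the monthly values below are hypothetical, purely for illustration.

```python
# Hypothetical monthly values of a single security metric, e.g. the
# percentage of critical patches applied within 30 days.
series = [("2009-10", 62.0), ("2009-11", 68.5), ("2009-12", 71.0), ("2010-01", 79.5)]

# Trend: month-over-month change in the metric, paired with the later month.
deltas = [(m2, round(v2 - v1, 1))
          for (m1, v1), (m2, v2) in zip(series, series[1:])]
print(deltas)  # [('2009-11', 6.5), ('2009-12', 2.5), ('2010-01', 8.5)]
```

It is the trend line, rather than the raw technical detail, that a management-facing report would carry.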

Gardner: Vicente, it seems quite important, as a first step, to know where you are, in order to know how you've progressed. This seems to be an essential ingredient to being able to ascertain your risks over time.

Aceituno: The very first step in a typical implementation is to understand the needs, the goals, and the obligations of the business, because that's what drives the whole design of the ISM system. There is no need to align security goals and business goals, because there are no goals outside of business goals. You have to serve the business first.

Gardner: There really isn't much difference between the goals of security and the general goals of the business. They are inextricably tied.

Aceituno: Yes, of course, they are.

Gardner: We've been learning more about security, some new metrics, and the ability to tie this into business outcomes. I want to thank our panel. We've been talking to Jim Hietala, Vice President for Security at The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: Adam Shostack, co-author of the book, The New School of Information Security. Thank you.

Shostack: Thank you.

Gardner: And, also Vicente Aceituno, who is the Director of the ISM3 Consortium. Thank you.

Aceituno: Thanks so much.

Gardner: We are coming to you from The Open Group Security Practitioners Conference in Seattle, the week of Feb. 1, 2010.

This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening to this BriefingsDirect podcast, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a sponsored BriefingsDirect podcast on ISM3 and security standards recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Thursday, February 04, 2010

ArchiMate Gives Business Leaders and Architects a Common Language to Describe How the Enterprise Works

Transcript of a BriefingsDirect sponsored podcast on ArchiMate and enterprise architecture recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle the week of Feb. 1, 2010.

We're now going to look at a way of conceptualizing, modeling, and controlling enterprise architecture (EA) using ArchiMate. We are going to talk with an expert on this, Dr. Harmen van den Berg. He is a partner and co-founder at BiZZdesign. Welcome to the show.

Dr. Harmen van den Berg: Thank you.

Gardner: I really enjoyed your presentation, getting into ArchiMate in ways that give you visualization and control, and even extend beyond some of the confines of architecture into the business benefits. For the benefit of our audience, could you tell us a little bit about the history of ArchiMate? How did it come about?

Van den Berg: ArchiMate is a standard that was developed in the Netherlands by a number of companies and research institutes. They developed it, because there was a lack of a language for describing EA. After it was completed, they offered it to The Open Group as a standard.

Gardner: What is the problem that it solves?

Van den Berg: The problem that it solves is that you need a language to express yourself, just like normal communication. If you want to talk about the enterprise and the important assets in the enterprise, the language supports that conversation.

Gardner: We are talking about more and more angles on this conversation, now that we talk about cloud computing and hybrid computing. It seems as if the complexity of EA and the ability to bring in the business side, provide them with a sense of trust in the IT department, and allow the IT department to better understand the requirements of the business, all need a new language. Do you think it can live up to that?

Van den Berg: Yes, because if you look at another language, like UML, which is for system development and is a very detailed language, it covers only a very limited part of the complete enterprise. ArchiMate is focused on giving you a language for describing the complete enterprise, from all different angles, not on a detailed level, but on a more global level, which is understandable to the business as well.

Gardner: So more stakeholders can become involved with something like ArchiMate. I guess that's an important benefit here.

Van den Berg: Yes, because the language is not focused only on IT, but on the business as well and on all kinds of stakeholders in your organization.

Gardner: How would someone get started, if they were interested in using ArchiMate to solve some of these problems? What is the typical way in which this becomes actually pragmatic and useful?

Van den Berg: The easiest way is just to start describing your enterprise in terms of ArchiMate. The language forces you to describe it on a certain global level, which gives you direct insight in the coherence within your enterprise.

Gardner: So, this allows you to get a meta-view of processes and assets that are fundamentally in IT, but have implications for, and reverberate around, the business.

Don't have to start in IT

Van den Berg: You don't have to start in IT. You can just start at the business side: What are the products? What are the services? And how are they supported by IT? That's a very useful way to start, not from the IT side, but from the business side.

Gardner: Are there certain benefits or capabilities in ArchiMate that would, in fact, allow it to do a good job at defining and capturing what goes on across an extended enterprise, perhaps hybrid sourcing or multiple sourcing of business processes and services?

Van den Berg: It's often used, for example, when you have an outsourcing project to describe not only your internal affairs, but also your relation with other companies and other organizations.

Gardner: What are some next steps with ArchiMate within The Open Group as a standard? Tell us what it might be maturing into or what some of the future steps are.

Van den Berg: The future steps are to align it more with TOGAF, which is the process for EA, and also extending it to cover more elements that are useful to describe an EA.

Gardner: And for those folks who would like to learn more about ArchiMate and how to get this very interesting view of their processes, business activities, and IT architecture variables where would you go?

Van den Berg: The best place to go is The Open Group website. There is a section on ArchiMate and it gives you all the information.

Gardner: We've been talking about ArchiMate and how IT architecture and enterprise architecture can come together within the confines of a structured language. We've been joined by Dr. Harmen van den Berg, partner and co-founder at BiZZdesign. Thank you.

Van den Berg: Thank you.

Gardner: This sponsored podcast discussion is coming to you from The Open Group's Enterprise Architecture Practitioners Conference in Seattle, the week of Feb. 1, 2010.

This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a BriefingsDirect podcast. Thanks for joining, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a BriefingsDirect sponsored podcast on ArchiMate and enterprise architecture recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


'Business Architecture' Helps Business and IT Leaders Decide On and Communicate Changes at the New Speed of Business

Transcript of a sponsored BriefingsDirect podcast on business architecture recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle Feb. 2, 2010.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle, the week of Feb. 1, 2010.

We're going to look at the topic of the difference between enterprise architecture (EA) and business architecture (BA). We will be talking with Tim Westbrock, Managing Director of EAdirections. Welcome to the show, Tim.

Tim Westbrock: How are you doing, Dana?

Gardner: Doing great. I really enjoyed your presentation today. Can you tell us a little bit about some of the high-level takeaways. Principally, how do you define BA?

Westbrock: Well, the premise of my discussion today is that, in order for EA to maintain and continue to evolve, we have to go outside the domain of IT. Hence, the conversation about BA. To me, BA is an intrinsic component of EA, but what most people really perform in most organizations that I see is IT architecture.

A real business-owned enterprise business architecture and enterprise information architecture are really the differentiating factors for me. I'm not one of these guys who is strict about definitions. You've got to get a sense from the words that you use.

To me enterprise business architecture is a set of artifacts and methods that helps business leaders make decisions about direction and communicate the changes that are required in order to achieve that vision.

Gardner: How did we get here? What's been the progression? And why has there been such a gulf between what the IT people eat, sleep, and drink, and what the business people expect?

Westbrock: There are a lot of factors in that. Back in the late '80s and early '90s, we got really good at providing solutions really quickly in isolated spots. What happened in most organizations is that you had really good isolated solutions all over the place. Integrated? No. Was there a need to integrate? Eventually. And, that's when we began really piling up the complexity.

We went from an environment, where we had one main vendor or two main vendors, to every specific solution having multiple vendors contributing to the software and the hardware environment.

That complexity is something that the business doesn’t really understand, and we haven’t done a real good job of getting the business to understand the implications of that complexity. But, it's not something they should really be worried about. It's our excuse sometimes that it's too complex to change quickly.

Focus on capabilities

We really need to focus the conversation on capabilities. Part of my presentation talked about deriving capabilities as the next layer of abstraction down from business strategy, business outcomes, and business objectives. It's a more finite discussion of the real changes that have to happen in an organization, to the channel, to the marketing approach, to the skill mix, and to the compensation. They're real things that have to change for an organization to achieve its strategies.

In IT architecture, we talk about the changes in the systems. What are the changes in the data? What are the changes in the infrastructure? Those are capabilities that need to change as well. But, we don't need to talk about the details of that. We need to understand the capabilities that the business requires. So, we talk to folks a lot about understanding capabilities and deriving them from business direction.

Gardner: It seems to me that, over the past 20 or 30 years, the pace of IT technological change was very rapid -- business change, not so much. But now, it seems as if the technology change is not quite as fast, but the business change is. Is that a fair characterization?

Westbrock: It's unbelievably fast now. It amazes me when I come across an organization now that's surviving and they can't get a new product out the door in less than a year -- 18 months, 24 months. How in the world are they responding to what their customers are looking for, if it takes that long to get system changes and products out the door?

We're looking at organizations trying monthly, every six weeks, every two months, quarterly to get significant product system changes out the door in production. You've got to be able to respond that quickly.

Gardner: So, in the past, the IT people had to really adapt and change to the technology that was so rapidly shifting around them, but now the IT people need to think about the rapidly shifting business environment around them.

Westbrock: "Think about," yes, but not "figure out." That's the whole point. BA is a means by which we can engage as IT professionals with the business leadership, the business decision-makers who are really deciding how the business is going to change.

Some of that change is a natural response to government regulations, competitive pressures, political pressures, and demographics, but some of it is strategic, conscious decisions, and there are implications and dependencies that come along with that.

Sometimes, the businesses are aware of them and sometimes they're not. Sometimes, we understand as IT professionals -- some not all -- about those dependencies and those implications. By having that meaningful dialogue on an ongoing basis, not just as a result of the big implementation, we can start to shorten that time to market.

Gardner: So, the folks who are practitioners of BA, rather than more narrowly EA, have to fill this role of Rosetta Stone in the organization. They have to translate cultural frames of mind and ideas about the priorities between that IT side and the business side.

Understanding your audience

Westbrock: This isn't a technical skill, but understanding your audience is a big part of doing this. We like to joke about executives being ADD and not really being into the details, but you know what, some are. We've got to figure out the right way to communicate with this set of leadership that's really steering the course for our enterprise.

That's why there's no, "This is the artifact to create." There's no, "This is the type of information that they require." There is no, "This is the specific set of requirements to discuss."

That's why we like to start broad. Can you build the picture of the enterprise on one page and have conversations maybe that zero in on a particular part of that? Then, you go down to other levels of detail. But, you don't know that until you start having the conversation.

Gardner: Okay, as we close out, you mentioned something called "strategic capability changes." Explain that for us.

Westbrock: To me, so many organizations have great vision and strategy. It comes from their leadership. They understand it. They think about it. But, there's a missing linkage between that vision, that strategy, that direction, and the actual activities that are going on in an organization. Decisions are being made about who to hire, the kinds of projects we decide to invest in, and where we're going to build our next manufacturing facility. All those are real decisions and real activities that are going on on a daily basis.

This jump from high-level strategy down to tactical daily decision-making and activities is too broad of a gap. So, we talk about strategic capability changes as being the vehicle that folks can use to have that conversation and to bring that discussion down to another level.

When we talk about strategic capability changes, it's the answer to the question, "What capabilities do we need to change about our enterprise in order to achieve our strategy?" But, that's a little bit too high level still. So, we help people carve out the specific questions that you would ask about business capability changes, about information capability changes, system, and technology.

Gardner: We've been talking about the evolution from enterprise architecture to business architecture. Joining us has been Tim Westbrock, Managing Director of EAdirections. Thank you, Tim.

Westbrock: Thank you, Dana.

Gardner: This sponsored podcast discussion is coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and you've been listening to BriefingsDirect. Thanks for joining, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a sponsored BriefingsDirect podcast on business architecture recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle Feb. 2, 2010. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

New Definition of Enterprise Architecture Emphasizes 'Fit for Purpose' Across IT Undertakings

Transcript of a sponsored BriefingsDirect podcast with Enterprise Architecture expert Len Fehskens recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle on Feb. 2, 2010.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion coming to you from The Open Group’s Enterprise Architecture Practitioners Conference in Seattle, the week of Feb. 1, 2010.

We're going to take a look at the definition of enterprise architecture (EA), the role of the architect and how that might be shifting. We're here with an expert from the Open Group, Len Fehskens, Vice President of Skills and Capabilities. Welcome to the show.

Len Fehskens: Thanks, Dana.

Gardner: I was really intrigued by your presentation, talking, with a great deal of forethought obviously, about the whole notion of EA, the role of the architect, this notion of "fit for purpose." We want to have the fit-for-purpose discussion about EA. What are the essential characteristics of this new definition?

Fehskens: You'll remember that one of the things I hoped to do with this definition was understand the architecture of architecture, and that the definition would basically be the architecture of architecture. The meme, so to speak, for this definition is the idea that architecture is about three things: mission, solution, and environment. Both the mission and the solution exist in the environment, and the purpose of the architecture is to specify essentials that address fitness for purpose.

There are basically five words or phrases: mission, solution, environment, fitness for purpose, and essentials. Those capture all the ideas behind the definition of architecture.

Gardner: The whole notion of EA has been in works for 30 years, as you pointed out. What is it about right now in the maturity of IT and the importance of IT in modern business that makes this concept of enterprise architect so important?

Fehskens: A lot of practicing enterprise architects have realized that they can't do enterprise IT architecture in isolation anymore. The constant mantra is "business-IT alignment." In order to achieve business-IT alignment, architects need some way of understanding what the business is really about. So, coming from an architectural perspective, it becomes natural to think of specifying the business in architectural terms.

Enterprise architects are now talking more frequently about the idea of "business architecture." The question becomes, "What do we really mean by business architecture?" We keep saying that it's the stakeholders who really define what's going on. We need to talk to business people to understand what the business architecture is, but the business people don't want to talk tech-speak.

We need to be able to talk to them in their language, but addressing an architectural end. What I tried to do was come up with a definition of architecture and EA that wasn't in tech-speak. That would allow business people to relate to concepts that make sense in their domain. At the same time, it would provide the kind of information that architects are looking for in understanding what the architecture of the business is, so that they can develop an EA that supports the needs of the business.

Gardner: So, in addition to defining EA properly for this time and place, and with the hindsight of the legacy, development, and history of IT and now business, what is the special sauce for a person to be able to fill that role? It’s not just about the definition, but it's also about the pragmatic analog world, day-in and day-out skills and capabilities.

Borrowed skills

Fehskens: That's a really good question. I've had this conversation with a lot of architects, and we all pretty much agree that maybe 90 percent of what an architect does involves skills that are borrowed from other disciplines -- program management, project management, governance, risk management, all the technology stuff, social skills, consulting skills, presentation skills, communication skills, and all of that stuff.

But, even if you’ve assembled all of those skills in a single individual, there is still something that an architect has to be able to do to take advantage of those capabilities and actually do architecture and deliver on the needs of their clients or their stakeholders.

I don't think we really understand yet exactly what that thing is. We've been okay so far, because people who entered the discipline have been largely self-selecting. I got into it because I wanted to solve problems bigger than I could solve myself by writing all the code. I was interested in having a larger impact than I could have by just writing a single program or doing something I could do all by myself.

That self-selection filters the people who try to become architects. Then, there's a second filter that applies: if you don't do it well, people don't let you do it. We're now at the point where people are saying, "That model for finding, selecting, and growing architects isn't going to work anymore, and we need to be more proactive in producing and grooming architects." So, what is it that distinguishes the people who have that skill from the people who don't?

If you go back to the definition of architecture that I articulated in this talk, one of the things that becomes clear is that an architect not only has to have good design skills. An architect also has to be almost Sherlock Holmes-like in his ability to infer from all kinds of subtle signals about what really matters, what's really important to the stakeholders, and how to balance all of these different things in a way that ends up focusing on an answer to this very squishily, ill-defined statement of the problem.

This person, this individual, needs to have that sense of the big picture -- all of the moving parts -- but also needs to be able to drill in both at the technical detail and the human detail.

In fact, this notion of fitness for purpose comes back in. As I said before, an architect has to be able to figure out what matters, not only in the development of an architectural solution to a problem, but in the process of discerning that architecture. There's an old saw about a sculptor. Somebody asked him, "How did you design this beautiful sculpture," and he says, "I didn't. I just released it from the stone."

What a good architect does is very similar to that. The answer is in there. All you have to do is find it. In some respects, it's not so much a creative discipline as it is an exploratory or searching kind of discipline. You have to know where to look. You have to know which questions to ask and how to interpret the answers to them.

Rarely done

Gardner: One of the things that came out early in your presentation was this notion that architecture is talked about and focused on, but very rarely actually done. If it's the case in the real world that there is less architecture being done than we would think is necessary, why do it at all?

Fehskens: There's a lot of stuff being done that is called architecture. A lot of that work, even if it's not purely architecture in the sense that I've defined architecture, is still a good enough approximation so that people are getting their problems solved.

What we're looking for now, as we aspire to professionalize the discipline, is to get to the point where we can do that more efficiently, more effectively, get there faster, and not waste time on stuff that doesn't really matter.

I'm reminded of the place medicine was 100 or 150 years ago. I hate to give leeches a bad name, because we’ve actually discovered that they're really useful in some medical situations. But, there was trepanning, where they cut holes in a person's skull to release vapors, and things like that. A lot of what we are doing in architecture is similar.

We do stuff because it's the state of the art and other people have tried it. Sometimes, it works and sometimes, it doesn't. What we want to do is get better at that, so that we pick the right things to do in the right situations, and the odds of them actually working are much better than chance.

Gardner: Okay, a last question. Is there anything about this economic environment and the interest in cloud computing and various sourcing options and alternatives that make the architecture role all the more important?

Fehskens: I hate to give you the typical architect signature, which is, "Yes, but." Yes, but I don't think that's a causal relationship. It's sort of a coincidence. In many respects, architecture is the last frontier. It's the thing that's ultimately going to determine whether or not an organization will survive in an extremely dynamic environment. New technologies like cloud are just the latest example of that environment changing radically.

It isn't so much that cloud computing makes good EA necessary, as much as cloud computing is just the latest example of changes in the external environment that require organizations to have enterprise architects to make sure that the organization is always fit for purpose in an extremely dynamically changing environment.

Gardner: We have been talking about the newer definitions and maturing definitions of EA. Joining us has been Len Fehskens, Vice President of Skills and Capabilities of The Open Group. Thank you.

Fehskens: Thank you very much, Dana.

Gardner: This sponsored podcast discussion is coming to you from The Open Group's Enterprise Architecture Practitioners conference in Seattle, the week of Feb. 1, 2010.

I'm Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to BriefingsDirect. Thanks, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: The Open Group.

Transcript of a sponsored BriefingsDirect podcast with Enterprise Architecture expert Len Fehskens recorded live at The Open Group’s Enterprise Architecture Practitioners Conference in Seattle on Feb. 2, 2010. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in:

Part 4 of 4: Real-Time Web Data Services in Action at Deutsche Börse

Transcript of a sponsored BriefingsDirect podcast on an intriguing example of web data services in action, one of a series of presentations on web data services with Kapow Technologies.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.


Dana Gardner: Hello and welcome to a special BriefingsDirect dual webinar and podcast presentation, "Real-Time Web Data Services in Action at Deutsche Börse." I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.

As the culmination of a four-part series on web data services (WDS), we're here to examine a fascinating use-case for data services with Deutsche Börse Group in Frankfurt, Germany. An innovative information service recently created there highlights how real-time content and data assembled from various online sources scattered across the Web provides a valuable analysis service.

The offering supports energy traders seeking to track global fluctuations and micro trends in oil and other related markets. But, the need for real-time and precise data affects more than energy traders and financial professionals. More than ever, all sorts of businesses need to know what's going on in and what's being said about their respective markets, products, and services.

In this series with Kapow Technologies, we've examined the need for WDS and ways that WDS and related tools can be used broadly to solve these problems. Now, we're going to learn the full story of how Deutsche Börse took web data resources and not only efficiently assembled knowledge from automated robots, cleansing tools, and analytics management, but from these capabilities also created high-value, focused WDS offerings in their own right.

Thanks for joining us, as we take an in-depth look at how the market for WDS has shaped up, quickly recap the major findings from our series so far, and then hear directly from the leader of the Deutsche Börse project, as well as from a key supplier that supported them in accomplishing their web services goal.

So, to learn more about WDS as a business, please join me in welcoming our first guest, Mario Schultz, Director of Energy Facts at Deutsche Börse Group.

Mario Schultz: Hi. I'm happy to be here and looking forward to the session today.

Gardner: Stefan Andreasen is also with us. He is the CTO at Kapow Technologies in Palo Alto, California. Welcome back, Stefan.

Stefan Andreasen: Thank you Dana. It's a pleasure to be here.

Gardner: First, let me try to set the stage for how WDS becomes the grist for new analysis mills. We've been through quite a transition in the past 10 or 15 years. We have moved quickly as a result of the Web. We started not too long ago with very proprietary content, often bound in books and distributed by trucks, and it was perhaps six months or a year outdated, in terms of the facts and figures, by the time it was fully distributed.

Chaotic content

The Web really helped accelerate the time, but was still chaotic in terms of the types of content. It was really loosely coupled information and not very well structured or organized and wasn’t necessarily of a business-critical nature.

We quickly saw, during the late '90s and into the 2000s, that the use of middleware, objects, and standards like SQL, along with relational databases, started to cross over into what became considered more general content -- not just structured data, but the content people used in business processes.

Now, we've moved along through how organizations manage their applications and data together through use of XML, web services, and service oriented architecture (SOA), to the point where we are now, at the level of WDS. We're beginning now to manage that much better and bring automation, low risk, and security to those uses.

It's interesting to me that we've moved beyond a level of static information to dynamic information and yet we still haven’t taken full advantage of everything that’s being developed and created across the Web.

But today’s market turbulence demands that we do that. We have to move into an era where we can take quality data and provide agility into how we can consume and distribute it. We're dealing with more diverse data sources. That means we need to have completeness and we need to be comprehensive, in order to accomplish the business information challenges each business faces.

The need now is for flexible, agile, and mixed sourcing of services and data together. The content is often portable. That means it's ubiquitous across mobile devices and social networks in such a way that real-time analytics becomes extremely important. This cuts across many different verticals, from retail, to trading and finance, healthcare, defense, and government.

The use of data as a business is now coming to the fore. We're beginning to see value, not from just the assimilation of data for use internally, but as more and more businesses are starting to take advantage of the data that they create and have access to. They share that with their partners, create ecosystems of value, and then even perhaps sell outright the information, as well as insights and analysis from that information.

According to Forrester Research, WDS describes the end-to-end analytic information pipelining process, a stream of liquid intelligence. It's palatable and consumable. I've also looked at the Wikipedia definition, and it seems to me that we have gone well into the ability to mash up and reuse information. It's really about the technologies around discovery and extraction, moving into consolidation and access, and then external sourcing and distribution.

To me, WDS really means the lifecycle of content use and reuse across the Web, not in a chaotic fashion, but in a managed fashion, with security permissions, access control, and the ability to bring it into play with other analytic applications and business intelligence (BI) processes.

I want to go now to Mario. When you think of WDS, how has this definition really impacted you and your business?

At the beginning

Schultz: I began by working with the exchange information that we have in our own systems. We were proceeding with our ideas of enhancing our services and designing new products and services. We were then looking into the Web and trying to get more information from the data that we gather from websites -- or somewhere else on the global Web -- and to integrate this with our own company's internal information.

Everything we do focuses on the real-time aspect. Our WDS are always focusing on the real-time aspects of this.

Gardner: Before we get into the fuller Deutsche Börse story, I'd like to revisit our podcast series so far. In our first podcast we talked with Howard Dresner, a real leader and thought developer in BI. He told us quite a bit about the need for bringing more sources, just as Mario pointed out, both internal and external, into an analytic process.

The idea of extended data sources forms strong components of forecast and analytic activities that are now underway, according to Howard, and BI needs to be not constrained or limited by the need for timely and relevant information from any web source. Howard really reinforced the notion for me that the Web has become where structured data was 10 or 15 years ago and is important for enterprises doing analytic activity.

In the second podcast in our series, Forrester's Jim Kobielus talked about the need to know what's going on and how important it is for organizations to have a sense of what people inside and outside the organization -- across the spectrum of their supply chains and/or distribution networks and actual end users -- are doing and saying.

We've really seen an increase in networking, social networks, and social media. There's all this buzz going on about business activities, products, and services, all of which can be extremely valuable. You can think of it as a massive real-time focus group, but only if you can access the information that's relevant. People are willing to tell you what they think, if you're able to scoop it up. And, it was about this ability to scoop up the data and information and inference that Jim Kobielus really honed in on.

He told us a lot about the identity gathering, cleansing, and the ability to then exercise the content in some sort of meaningful way. He also emphasized the need to manage this in terms of marts and warehouses. A lot of infrastructure has been put in place. But, again, the value of the infrastructure is only as good as the value of the actual content that's involved.

In the third part of the series -- we are now in the fourth and last part -- Seth Grimes, another thought leader in terms of web analytics and text analytics, talked about the need to analyze in real-time. He emphasized the need of structured data as important, but real-time data as being the next big thing to move us to the era of advanced analytics. We're not just telling what happened before in the pipeline or supply chain, but what's going to happen next. This, I think, bears quite a bit on what Mario is going to discuss.

So, let’s move along now to Deutsche Börse. Mario, I want to hear more about this organization for our listeners in North America. Tell us a little bit about your company, your organization, and what you do.

Main business lines

Schultz: Deutsche Börse is the German stock exchange in Frankfurt, Germany, and we offer all kinds of products and services around on-exchange trading and the adjacent processes. That means we have several main business lines at Deutsche Börse.

We have something that's called Xetra, our electronic trading system for cash products. We have Eurex, our derivatives business line, which is well-known worldwide and where you can trade all kinds of derivatives on that platform.

We have a subsidiary that’s called Clearstream doing all the custody and clearing services after you have done your trade. And, we have the Market Data & Analytics (MD&A) business line, where I've been working for 10 years. The MD&A business line is responsible for the real-time delivery of information to the world outside.

We have a main system called CEF. It is our backbone IT solution for delivering data in real-time with millisecond optimization. The data mainly comes from our internal IT systems, like Xetra and Eurex, and we deliver this data to the outside world.


In addition, we calculate all the relevant indices, like the DAX, the flagship index for the German markets with 30 instruments, and more than 2,000 -- or nearly 3,000 -- indices that are distributed over the well-known data vendors, for example, Bloomberg or Reuters. They are our main distribution networks, where we are delivering all our information.
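As a rough illustration of what an index calculation like this involves, here is a minimal sketch of a capitalization-weighted index level. The constituent names, share counts, and divisor below are invented for illustration only; the actual DAX methodology (a free-float-weighted formula with a maintained divisor) is considerably more involved.

```python
def index_level(constituents, divisor):
    """Sum of price * shares over all members, scaled by the index divisor."""
    market_cap = sum(c["price"] * c["shares"] for c in constituents)
    return market_cap / divisor

# Invented constituents -- a real index like the DAX has 30 members.
members = [
    {"name": "AAA", "price": 50.0, "shares": 1_000_000},
    {"name": "BBB", "price": 20.0, "shares": 5_000_000},
]
base_divisor = 30_000.0  # chosen so the index starts at a round level
level = index_level(members, base_divisor)  # 150,000,000 / 30,000 = 5000.0
```

The divisor is what lets an index stay continuous as constituents or share counts change; real index operators adjust it on every corporate action.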

For several years now, I've been responsible for developing new products and services around information for on-exchange or off-exchange trading. This is why we've invented and developed the Energy Facts service that is part of our discussion today.

Gardner: When you were thinking about the challenges around this opportunity, it strikes me you had many different sources of information you had to bring together. What were the challenges that you encountered as you started to pipeline these information sources together?

Schultz: One-and-a-half years ago, the idea was to develop new products and services where we could transform our know-how and this real-time connection, aggregation, and dissemination of data to other business lines where we were not currently working. This is why we looked into the energy trading sector, mainly focused on the power trading here in Europe.

Energy markets have been liberalized over recent years. It started with the Nordic area, Sweden and Norway. Ten or 12 years ago, they started liberalizing the energy trading markets, and Germany is the next country that followed this trend. Germany is currently the most important market for energy and power trading in the middle of Europe.

We started to analyze the information needs in this sector, and recognized that it's a fundamentals-driven market. Traders are looking into the fundamental factors that affect the price of the energy or the power that you trade, whether it’s oil or whatever. That’s how we started with power trading.

You have wind and other weather factors. You have temperature. You have the availability of power plants. So, you try to categorize and summarize these factors into what are called the supply side and the demand side of energy trading.

Fundamental data models

By talking to well-known players in the market, we quickly recognized what they were doing on the trading and analytics side, and that we could build up a very powerful fundamental data model. You have to collect all the relevant information to get an overview and an estimate of the price -- in this case, where the power price could develop and in which direction it could develop.

The main issue and main task in the beginning was to collect the relevant data. Quite quickly, we were able to set up a big list of all relevant data sets or sources, especially for Germany and some adjacent countries. We came up with something around 70, 80, or even 100 different sources on the Web to grab information from. So, the main issue was how to collect and grab all this data in a manageable way into one database. That was the first step.

In the second step, Kapow came into play. We recognized that it's really important to have a one-stop-shopping inbound channel that collects all the information from these sources, so that you don't have to have several IT systems, your own programs, JavaScript, or whatever to get the information.

I wanted to have a responsible product manager for this project or for this new product. From the beginning, I had to have a good technology in place that would be able to handle all these kind of sources from the Web.

Gardner: Let me go to Stefan now at Kapow. When you heard about Deutsche Börse and some of these issues that they were facing and the challenges that they were trying to solve, what came to your mind in terms of how Kapow might apply?

Andreasen: It came to mind that, if these data sources exist somewhere on the Web, we can actually grab them where they are. What you traditionally do with information gathering is that you call every company or every entity that has data and ask them, "Will you please provide the data in this or this format?" But, with Kapow Web Data Services, you can just grab the data, wherever it is on the Web, and assemble this valuable data source much easier and much faster.

Gardner: Let’s go back to Mario. Tell us, as you progressed through the solution, what was the experience?

Schultz: Just to go back one step. We recognized that there are so many different data formats that we had to grab. There are all these different providers of information in Germany and other European countries. They have their own websites. Some give the data in HTML format. Others use XLS, CSV, or even PDFs.

Kapow gives us a way to get this information from these different sources in quite different formats. It's a manageable way, with a process-driven, graphical user interface (GUI) driven tool, that reduces the personnel and manpower effort needed to collect and grab the data.
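The "one channel, many formats" pattern Schultz describes can be sketched roughly as a parser dispatch. The parsers, field names, and sample payload here are invented for illustration; a tool like Kapow wraps this kind of work in GUI-driven robots rather than hand-written parsers.

```python
import csv
import io

def parse_csv(payload):
    """CSV text -> list of dict records."""
    return [dict(row) for row in csv.DictReader(io.StringIO(payload))]

def parse_kv(payload):
    """Stand-in for an HTML/PDF extractor: 'key=value;key=value' lines."""
    return [dict(item.split("=", 1) for item in line.split(";"))
            for line in payload.strip().splitlines()]

PARSERS = {"csv": parse_csv, "kv": parse_kv}

def ingest(source_format, payload):
    """One inbound channel: route each payload to the parser for its format."""
    try:
        return PARSERS[source_format](payload)
    except KeyError:
        raise ValueError(f"no parser registered for format {source_format!r}")

records = ingest("csv", "plant,mw\nNorth,450\nSouth,300")
```

The point of the registry is that adding a new source format means registering one more parser, not building another IT system per provider.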

At our starting point, one-and-a-half years ago, a lot of things were underway here in Germany and the other European countries, with the Copenhagen Conference, the carbon-emission discussion, and the liberalization. There were discussions about whether the big players with the transmission nets and power plants had to split up these things. So, really, there are a lot of changes. It's not as if, once you know a source or website, you can just take it, program the script, and then leave it. We always have to check it, because they change the structure.

Recognize change

New companies are built, and some transmission lines are transformed. So, other companies are building up a new website. There are a lot of things underway. You have 70, 80, or even 100 sources, I don't know. You always have to recognize change and then check whether you have to rework it.

I started to work with an internal solution that I thought could handle all that. After a few weeks of developing and discussing, we recognized that our internal solution was not appropriate and not capable of doing all that kind of stuff. We quickly came across Kapow and evaluated its capabilities. We decided, nearly from the beginning, just a few weeks into the project, that we had to use the Kapow tool to collect all this data from the websites.

Gardner: As I understand, you were involved with programming some robots and setting them up, and then you were able to adjust them dynamically to whatever the needs were of the analysis intent.

Schultz: The main focus in the beginning was to handle all these different formats -- even, for example, going into a PDF and describing the relevant data that we want to grab, not as text, but as a figure that we need for further processing.

There are even some interesting JavaScript or Java-based websites where you have to click on a switch and then, with a right-click of the mouse, get the dataset. We were able to do all these kinds of things with the Kapow tool, using the robots within Kapow to grab this kind of data automatically.

Gardner: What have been some of the results? What business-development activities have you had? What's been the value add?

Schultz: The value-add was to get all this data into one common data format, one database, so we would be able to deliver it to the vendors via web tool, web terminal, or even our existing CEF data feeds. A lot of the players in the market are trying to collect this data by themselves, even manually, to get an overview of where the power price will develop over the next day, hours, weeks, or months.


There are some other providers in the market focusing on real-time delivery of data. In the general on-exchange or off-exchange business, we're talking millisecond optimization. That's not the timing we have here. Instead, it's about moving from a once-a-day PDF analyst commentary via email in the morning to a real-time terminal, or even to a Bloomberg or Reuters screen, where you get our Energy Facts data as an on-time, real-time information set for trading.

Gardner: I'm really intrigued by your ability to manage so many different sources in real time and, as you say, coming from all different sources, interfaces, and application formats. Can you give us a little demonstration and show us the application in action?

Schultz: Okay. You should see on your screen our Energy Facts web terminal. This is one of our delivery possibilities to bring this data in real-time to the end users.

In the first phase, we're focusing on the German market, plus Belgium, France, and the Netherlands. We decided to start with these four European countries. I don't want to go through all the pages. I've just picked two or maybe three of them to give a view of what's going on in this Energy Facts terminal.

Not only websites

Currently, we have 70 or 80 sources that we're grabbing. It's not only websites; we also have some third-party providers delivering information -- for example, weather, temperature, and things like that. We have providers giving data via FTP service, and we even use Kapow for grabbing data from these third-party players. As I said, it's a one-stop-shopping solution to get everything via one channel.

For example, an interesting thing in the energy trading space is availability. When a company is looking into the future, it wants to know the availability of different power plants. On the right-hand side, you can see a summary for nuclear power, for example, as well as lignite, hard coal, and water.

There are various sources in Germany giving all this information in different formats. We grab everything into one database, do quality checks, and then compile the information into the front-end that you can see down there, with a graphical presentation. We have a table with all the figures, and we even do some analytic enrichment, such as the deviation from what was published the day before.

You can see, for example, that we have some changes in the hard-coal availability for the next 30 days. We're taking those sources, collecting the information, doing quality checks and quality assurance, aggregating everything into one database and one data format, and then presenting it on the screen.
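The enrichment Schultz mentions -- comparing today's published availability against yesterday's -- can be illustrated with a few lines of Python. The fuel names and megawatt figures below are invented for the sketch:

```python
# Illustrative availability figures (MW) -- not real market data.
yesterday = {"nuclear": 12000, "lignite": 15500, "hard_coal": 9800, "water": 2100}
today     = {"nuclear": 12000, "lignite": 15200, "hard_coal": 9100, "water": 2100}

def deviations(prev: dict, curr: dict) -> dict:
    """Deviation of each published figure from the day before, in MW."""
    return {fuel: curr[fuel] - prev[fuel] for fuel in curr}

# Report only the fuels whose published availability actually moved.
for fuel, delta in sorted(deviations(yesterday, today).items()):
    if delta:
        print(f"{fuel}: {delta:+d} MW vs. previous day")
```

In the real service this runs after the quality checks, over the consolidated database rather than in-memory dictionaries.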

Gardner: If I'm a consumer of this -- if I'm a trader who subscribes to your service -- and I encounter some other form of information that I want to bring into the mix, do I have the option of approaching you and asking you to bring that in, or is that out of the question?

Schultz: Not at all. This is just our starting point. As I said, this is something where we're trying to create a completely new business in the energy sector. We started with these four countries and datasets, and we will extend it to other countries. If you ask us to add other kinds of data, we can integrate them quite quickly into our service. No problem.

Just two other examples. One is something that in Germany is called Urgent Market Messages. In Germany, we have four big power plant providers, or transmission-system operators. The power plant providers push out Urgent Market Messages in real time, as fast as possible, when a power plant has to go into maintenance mode or has an accident and they have to repair something.

We grab all these different kinds of sources from all those power plant providers and then aggregate all the Urgent Market Messages in the table that you can see down there. If you go to other pages on our screen, you can see them on the left-hand side, where you always get the latest Urgent Market Messages. If, for example, a nuclear power plant goes off the grid unexpectedly, this could dramatically change the power price on the market. This is another example of collecting data for Urgent Market Messages.

I don't want to stretch this too much, but the last point is cross-border data. Germany is somewhat in the middle of all this trading in Europe, so we have a lot of connection points to other countries -- Denmark, Sweden, Poland, France, Belgium, and Luxembourg. We have many grid lines crossing the borders to the other countries.

You always have to collect the data from these different transmission lines to the other countries, because the capacity to transport power -- for example, from France to Germany or the other way around -- is auctioned. You have to get all this information for a better understanding of pricing.

Power allocation

For example, in this case, it's the Germany-to-France connection point. Down there, you can see how much power has been allocated for a specific hour in a day. The red line is the price for the transportation in this case. In addition, you could show the price difference, for example, between Paris and Leipzig, the two energy exchanges. Everything is collected and then put into one view, showing the interesting figures on one screen.

Gardner: Suffice it to say that there is an awful lot going on behind this little red line. It's not that easy to put this together. This is reflecting an awful lot of information and processing.

Schultz: This page shows one provider for the Germany-to-France connection. Now, I'll go to this button and show you the other ones, like the France-to-Germany direction and the Germany-Netherlands connection.

These are the four countries we're currently covering, and you can see all the connection points for them. Later on, we'll go on with Denmark and the others. This is really the power of having all this data in one tool, where the aggregation, quality checks, and everything come into play.

Gardner: Mario, I have to imagine that there are external forces that can come to bear on this, perhaps a massive snowstorm or some other disruption in the price of a major commodity, and that’s something that you can bring into this picture almost immediately, right?

Schultz: Yes. For example, in what I just showed, if we go to this weather page, you see temperature. This is very interesting. Generally in Germany, as you see on the yellow curve, the typical winter temperature is between 1 and 3 degrees Celsius. But the current forecast is for something around -5 degrees, and some time ago it was even -7 degrees. That's a really big difference, and it's normally an indication of higher power prices, because people will demand more power for heating their buildings or offices. So, this has really changed, and this weather data is updated every six hours within our service.

Gardner: If these traders also wanted to try to find out why they were seeing certain effects in these analytic graphs, is there a way for them to then quickly go out and look at the news feeds or other information, so that they could determine what’s behind the curves?

Schultz: Currently, that's not part of our service, and we didn't do it, because there are other providers for that information. Generally, you have the on-exchange and off-exchange prices that are normally available from the existing data vendors -- Bloomberg, for example, or other service providers. Energy Facts focuses on the fundamental data, collected in real-time and aggregated into one service, which is where we saw the missing piece in Europe. If traders want news, they have other providers for that on their desks.

Gardner: I see. So, this is really focused on numeric, algorithmic, programmable types of information and data.

Schultz: This is what we call the fundamental datasets -- what is fundamentally driving the power price, the demand- and supply-side factors behind the price. Analysts or traders can get this information in real-time in one service to make better estimates of the pricing elements.

Gardner: That's really impressive. I appreciate your walking us through it. I wonder if we can go back now to Stefan and talk a little bit about what Kapow and its services brought to the table to support this really impressive application and service.

Impressive service

Andreasen: Sure, Dana. This is an extremely impressive service that Mario just showed us here, and I'm sure, if you're dealing with buying and selling energy, this is a must for you to be sure you made the right decision.

If we go back to what I talked about earlier, businesses are relying more and more on data to make the right decision, and their focus is on quality, completeness, and agility. Let's be more practical here and ask how you actually get this data.

There is a term, data integration, which is about accessing the data and providing it in a standard API, so that you can actually leverage the data in your business applications.

Energy Facts is accessing this data at 70-80 different data sources, as Mario said, and providing it as a feed whose frequency depends on the volatility of the different data sources. Some of the sources deliver every minute, and some every four hours, based on how quickly the data source changes. WDS is all about getting access to this data where it resides.

There are really two different kinds of data sources. One kind is more like a real-time data source. Let's say you go to a patent directory, and there are probably millions of patents. In that case, you would use the Kapow Data Server to wrap that data source into a service layer, and then you would be able to query it in real-time and get real-time results back. So, that's real-time access, where you have a vast amount of information.

The other scenario -- and I think that's more what we see in the Energy Facts example here -- is where you have a more limited data source, and you are actually doing a consolidation of the data into a database, and then you use that database to serve different customers or different applications.

With Kapow, you can actually go in and access the data, if you can see it in your browser. That's one thing. The other thing you need to do to make this data available to your business application is to transform and enrich the data, so that it actually matches the format that you want.

For example, a website might show a date as "2 hours ago" or "3 minutes ago" and so on. That's really not useful. What you really want is a timestamp with the year, the month, the day, the hour, the minute, and the second, so you can actually start comparing these. So, data cleansing is an extremely important part of data extraction and access.
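As a concrete illustration of that cleansing step, here is a small Python sketch that normalizes relative phrases like "2 hours ago" into absolute timestamps. The phrase patterns are illustrative; a production robot would have to cover every variant its sources actually use:

```python
import re
from datetime import datetime, timedelta

# Map the singular unit captured by the regex to timedelta's keyword.
UNITS = {"minute": "minutes", "hour": "hours", "day": "days"}

def to_timestamp(phrase: str, now: datetime) -> datetime:
    """Convert '2 hours ago' / '3 minutes ago' into an absolute datetime."""
    match = re.fullmatch(r"(\d+)\s+(minute|hour|day)s?\s+ago", phrase.strip())
    if match is None:
        raise ValueError(f"unrecognized phrase: {phrase!r}")
    amount, unit = int(match.group(1)), UNITS[match.group(2)]
    return now - timedelta(**{unit: amount})

now = datetime(2010, 2, 6, 12, 0, 0)
print(to_timestamp("2 hours ago", now))   # 2010-02-06 10:00:00
```

Passing `now` in explicitly, rather than calling the clock inside the function, keeps the cleansing step deterministic and easy to test.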

The last thing, of course, is serving the data in the format you need. That can be a database, if you're doing consolidation, or it can be as an API, if you are doing more of a federated access to data, and leaving the data where it is.

Actually, all styles exist, but there is a tendency for many companies to actually access the data where it is, rather than trying to consolidate it to a new place.
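The two serving styles Andreasen describes -- consolidating into a database versus federated access that leaves the data at the source -- can be sketched side by side. The table layout, fuel name, and `availability_service` function below are invented for illustration:

```python
import sqlite3

# One cleansed row, as it might come out of the extraction step.
rows = [("2010-02-06T10:00", "nuclear", 12345.0)]

# Style 1: consolidation -- load cleansed rows into a database that
# many customers or applications can then query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE availability (ts TEXT, fuel TEXT, mw REAL)")
db.executemany("INSERT INTO availability VALUES (?, ?, ?)", rows)

# Style 2: federated -- a thin service function that fetches on demand
# and returns the result directly, leaving the data where it resides.
def availability_service(fuel: str) -> float:
    # Stand-in for a live robot run against the source website.
    return next(mw for _, f, mw in rows if f == fuel)

print(availability_service("nuclear"))   # 12345.0
```

The in-memory list stands in for the remote source; the same row feeds either style, which is why the choice is an architectural one rather than a data one.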

Urgent messages

Schultz: Dana, I have a very good example of this. I talked about Urgent Market Messages, where the power plant providers send out an Urgent Market Message as quickly as possible after an incident occurs, regarding changes in power plant availability. This is something we can aggregate well using Kapow, because we can schedule all these robots in a very flexible way.

Currently, we're checking these Urgent Market Message sources every minute. At all aggregation levels, we can always state whether a message is valid or invalid. I didn't focus on this in my presentation.


If we find a message on the website, we put it in our service. Maybe in the next minute, the message disappears from the website. We still have it in our service, but we then flag the message as invalid. The user knows that this message had been on the source website, but has now disappeared. We still have the information, but we can distinguish between the two statuses: valid or invalid Urgent Market Message.

This is accomplished by accessing the source, enriching the data into the database, doing some scheduling, and then giving feedback and checking the website again. By doing these three steps, we're able to offer this part of our Urgent Market Message presentation layer.
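Those steps -- fetch the source on a schedule, merge into the store, and flag anything that has vanished -- can be sketched in Python. The message IDs and texts are invented; this only illustrates the valid/invalid bookkeeping, not Kapow's actual implementation:

```python
def reconcile(store: dict, seen_now: dict) -> dict:
    """Merge the latest crawl into the store, flagging vanished messages."""
    # Everything seen in this crawl is (re)marked valid.
    for msg_id, text in seen_now.items():
        store[msg_id] = {"text": text, "valid": True}
    # Anything stored but no longer on the site is kept, flagged invalid.
    for msg_id, record in store.items():
        if msg_id not in seen_now:
            record["valid"] = False
    return store

store = {}
reconcile(store, {"umm-1": "Plant A: outage", "umm-2": "Plant B: maintenance"})
# Next minute, umm-2 has disappeared from the website:
reconcile(store, {"umm-1": "Plant A: outage"})

print(store["umm-2"]["valid"])   # False -- still in the service, flagged invalid
```

A scheduler (in the real service, Kapow's robot scheduling) would call `reconcile` once per minute with the freshly crawled messages.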

Gardner: Mario, I think you're really a pioneer in this. What intrigues me is how far this can go in addition to what you have done with it, and how this could affect the number of other industries and vertical businesses as well.

From your perspective, Stefan, how are other types of business, enterprises, and service providers likely to start using this and providing WDS-based, value added services as well?

Andreasen: That's a very good question. Kapow Technologies today has more than 400 customers, and for them, our technology has become a business-critical part of what they do. Let me try to explain that. Most information providers sell data to other businesses. In the U.S., for example, there is a big business around background checking, both of people and of companies. If you go into a U.S. bank to get a credit card, they're going to run a background check on you before you can get that card.

One of the things they check is a set of resources on the Web -- for example, criminal records. Every courthouse has a website where you can log in and search for criminal records on a certain person.

Most of the companies doing this background checking are Kapow customers, using Kapow's Web Data Services to service-enable all these courthouse sites. When someone applies for a credit card and a background check runs, Kapow automatically goes out and gets that information from the courthouse websites and a lot of other data sources in real-time. Otherwise, they would need 50 or 60 people manually typing, and they wouldn't get the results until two days later.

Gardner: I suppose another effect over the past 10 or 15 years, from my timeline earlier in the presentation, is that these web standards have kicked in not only for looking up information across the Web, but also as a standardized way of accessing information internally. What about the use of this for corporate performance management and other aspects of the web data that's inside companies?

Available white paper

Andreasen: I encourage everybody to go to our website and download a white paper from one of our customers, Fiserv. It's a large financial-services company in the U.S. Fiserv has a lot of business partners -- more than 300 banks in more than 10 countries. Because they're selling services, it's incredibly important for them to also monitor their customers to understand what's happening.

They had a lot of people who logged into these 300 partner banks every day, grabbed financial information, such as interest rates, into an Excel spreadsheet, put it into a database, and then got it up on a dashboard.

The thing about this is that, first, you have a lot of human labor, which can cause human errors, and so on. You can only do it once a day, and it's a tedious process. So they brought Kapow in and automated the extraction of this data from all their business partners -- 300 banks in more than 10 countries.

They can now get that data in near real-time, so they don't have to wait for it, and they don't have to go without it on weekends, when people are not working. They get those business-critical insights into the market and their partners instantly through our product.

I can give you another example. A large car manufacturer spends almost a billion dollars a year on television advertising. There are several parameters that are important for them to understand in deciding how to spend that advertising money in the best possible way.

These data sources are, for example, lead reporting -- understanding what leads they're getting in -- and the market data they're getting from business information providers about trends in the markets and so on. What reporting do they get from ad campaigns? How many people clicked on an ad or watched these television shows? Also, how many cars are getting registered, their models versus their competitors'?

By using Kapow, they could hook up to all of these data sources in real time and suddenly get complete insight into how effectively they spend their advertising dollars, giving a very good return on the investment.

So, it's just another example of how WDS can help the market analyst, the product manager, and a lot of other people who have to make very vital business decisions in the companies out there.

Gardner: Great. I appreciate your input Stefan. Today’s discussion on how the Deutsche Börse Group in Frankfurt, Germany is using Kapow Technologies for a real-time web data analysis service comes as a culmination of a four-part series on WDS.

We have seen how an innovative information service, created rapidly, elegantly demonstrates how real-time content and data, assembled from various online sources, can provide a valuable service and analysis capability as a business.

What's happening with WDS is that it has gone beyond an internal enterprise focus. It has become a business unto itself. So, there are lots of value opportunities: selling new value across business solutions, enhancing internal strategies, and creating ecosystems of partnership.

I think what we're going to see, when cloud computing really starts to take off, rather than just being discussed so much, is the opportunity for companies in partnership to build competitive advantage by sharing data and analytics effectively. That also drives more business strategy and execution, and creates new and additional revenue streams as a result.

So, I want to thank Mario at Deutsche Börse for his participation here. I think they're a real poster child for how real-time analytics can be brought together. So, thanks to you, Mario, for joining us.

Schultz: It was a pleasure, Dana. Thank you.

Gardner: And, certainly, I also want to give viewers and listeners the opportunity to learn more from Kapow about some of the topics we've discussed. There are a lot of resources available there for taking next steps or continuing to educate yourselves on these issues.

This is Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator. I also want to thank Stefan Andreasen. He is the CTO of Kapow.

Andreasen: Thank you very much, Dana.

Gardner: You've been enjoying a BriefingsDirect presentation. Thanks again for joining us, and come back next time.


Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: Kapow Technologies.

Transcript of a sponsored BriefingsDirect podcast on information management for business intelligence, one of a series on web data services with Kapow Technologies. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: