Tuesday, May 04, 2010

Confluence of Global Trends Ups Ante for Improved IT Governance to Prevent Costly Business 'Glitches'

Transcript of a sponsored BriefingsDirect podcast on the growing danger from faulty software and how to overcome it.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: WebLayers.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the nature of, and some possible solutions for, a growing parade of enterprise-scale glitches. The headlines these days are full of big, embarrassing corporate and government "gotchas."

These complex snafus cost a ton of money, severely damage a company’s reputation, and most importantly, can hurt or even kill people.

From global auto recalls to bank failures, and the cyber crime that can expose the private information of millions of users, the scale of the damage that technology-accelerated glitches can inflict on businesses and individuals has probably never been higher. So what is at the root?

Is it a technology-run-amok problem, or a complexity-spinning-out-of-control issue -- and why is it seemingly worse now?

A new book is coming out this summer that explores the relationship between glitches and technology, specifically the role of software use and development in the era of cloud computing.

It turns out the role and impact of governance over people, process, and technology comes up again and again in the new book.

We have with us here today the author of the book as well as a software expert from IBM to delve into the causes and effects of glitches and how governance relates to the problem and fixes.

Please join me in welcoming our guests, Jeff Papows, President and CEO of WebLayers, and the author of Glitch: The Hidden Impact of Faulty Software. Welcome to the show, Jeff.

Jeff Papows: Thanks, Dana. Thanks for having us on.

Gardner: We're also here with Kerrie Holley, IBM fellow and Chief Technology Officer for IBM’s SOA Center of Excellence. Welcome to the show, Kerrie.

Kerrie Holley: Thank you, very much.

Gardner: Jeff, let me start with you. Now, the general trends around these complex issues are affecting business and probably affecting just about everyone’s lives. How do these seem to be something that’s different? Is there an inflection point? Is there something different now than 20 years ago in terms of the intersection of business with technology?

Papows: There is. I’ve done a lot of research in the past 10 months and what we're actually seeing is the confluence of three primary factors that are creating an information technology perfect storm of sorts. Some of these are obvious, but it’s the convergence of the three that’s creating problems on the scale that you are describing here.

The first is a loss of intellectual capital. For the first time in our careers -- the three of us have all been at this for a long time now -- we saw, between 2000 and 2007, the first drop in computer science graduates. That's the other side of the dot-com implosion.

Mainframe adoption patterns

While it’s not always popular or glamorous to talk about, 70 percent of the world’s critical infrastructure still runs on IBM mainframes. Yet, the focus of most of our new computer science graduates and early life professionals is on Java, XML, and the open and more modern development languages.

For the first time in our lifetimes and careers, the preponderance of that COBOL-based analytical community is retiring and/or -- God forbid -- aging and dying. That’s created a significant problem, concurrent with a time when the merger and consolidation activity -- the other side of the recession of 2008 -- has created this massive complexity in these giant mash-ups and critical back-office systems. For example, the mergers between Bank of America and Countrywide, and on and on.

The third factor is just the sheer ubiquity of the technological complexity curve. It’s the magnitude of technology that’s now part of our social fabric, whether it’s literally one million transistors that now exist for every human being on the planet or the six billion network devices that exist in the world today, all of which are accessing the same critical, in many cases, back-office structures.

It's reached the point, Dana, from a consumer standpoint, where 60 percent of the value of our automobiles now consists of networked electronic components -- not the drive trains, engines, and the other things. Look at the recent glitches you have seen at places like Toyota.

You take those three meta-level factors and put them together and we're making the morning broadcast news cycles now on a daily basis with, as you said, more and more of these embarrassing things coming to light. They're not just inconvenient, but there are monumental economic consequences -- and we're killing people.

Gardner: Kerrie Holley, we've looked at some of these issues -- society issues, organizational issues, and the technology behind them -- but technology has also been part of the solution or the ability to scale and manage and automate. I think service oriented architecture (SOA) has a major impact on that.

So, are we at a point where the ability of technology to keep up with the rate of growth is out of whack? What do you sense is behind some of this and why hasn't the technology been there to fix it along the way?

Holley: Jeff brought up some excellent points, which are spot-on. The other thing that we see is that we've had this growth of distributed computing. The easy stuff we've actually accomplished already.

If we look at a lot of what businesses are trying to accomplish today, whether it’s a new business model, differentiation, or whatever they're trying to do to compete, what we are finding is that the complexity of that solution is pretty significant.

It's something that we obviously can do. If we look at a lot of technologies that are out in the market place, unfortunately, in many cases they are siloed. They repair or they help with a part of the problem, but perhaps they're not holistic in dealing with the whole life-cycle that is necessary to create some of this value.

Secondly -- this is a point-in-time statement -- we're seeing rapid improvements in the technology to solve this. With Jeff's company and other organizations, we are seeing that today. It hasn’t caught up, but I think it will. In summary, Jeff brought up several points in terms of the fact that we have ubiquitous devices and a tremendous amount of computing power. We have programming available to the masses. We have eight-year-olds, grandmothers, and everyone in between, writing software.

Connecting devices

We have a tremendous need to connect mobile devices and front-ends. We have 3D Internet. We just have an explosion of technologies that we have to integrate. Along with that comes some of the challenges in terms of how we make this agile, and how we make it such that it doesn't break. How do we make sure that we actually get the value propositions that we see? Clearly, SOA is a part of the solution, but it's certainly not the end-all in terms of how we repair and how we get better.

Gardner: One of the things that intrigues me about SOA is the emphasis on governance. To get the best out of a distributed services-orientation, you need to think at the very beginning and throughout the process about how to manage, automate, and reuse, as well as the feedback loops into the process -- all on an ongoing basis.

It strikes me that if that works for SOA, it probably also works for management and organizations, and it works for the relationship between workers and customers. Let me take this back to you, Jeff. Is governance also in catch-up mode? Do we have a sense of how to govern the technology, but not necessarily the process? Is that what's behind some of it?

Papows: You're right, Dana. There's a cultural maturation process here. Let's look at a couple of the broad economic planks that have affected how we got here, because I've been in the software industry for 30 years now. Remember that the average computer scientist, at least in North America, makes 32 percent more than the mean wage in the U.S. economy. And software, computer services, and infrastructure have accounted for about 37 percent of the growth in the gross domestic product in the United States and Asia in the last decade.

So the economic impact and success of our industry almost can’t be overstated. Because of that, we've grown up for decades now where we just threw more and more bodies at the problem as the technological curve grew.

There was always this never-ending economic rosy horizon, where you would just add more IT professionals and you would acquire and you’d merge systems, but rarely would you render portions of those workforces redundant.

In 2008, the economic malaise that we’re managing our way through changed all of that. Now, the only way out of this complexity curve that we’ve created, to use Kerrie's terms, is turning the innovation that has been the hallmark of our industry back on ourselves.

That means automating and codifying all of the best practices and human capital that’s been in-place and learning for decades in the form of active policy management and inference engines in what we typically think of as SOA and design-time governance.

Really, all that means is automating those best practices and turning them inward, so that we’re governing ourselves as an industry in the same way that we would automate or govern many things. But now it’s no longer a "nice to have." I would argue that it’s critical, because the complexity curve and the economics have crossed and there is no way to put this genie back in the bottle. There is no way to go backward.
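To picture what "automating and codifying best practices" as design-time governance policies could look like, here is a deliberately minimal sketch. The policy rules, service metadata fields, and names below are invented for illustration; they do not describe WebLayers' or IBM's actual products.

```python
# Illustrative sketch of design-time governance: best practices codified
# as executable policies, run against service definitions before release.

def check_versioning(service):
    """Every service must declare a three-part semantic version."""
    return "version" in service and service["version"].count(".") == 2

def check_naming(service):
    """Service names must be lower-case with no spaces."""
    name = service.get("name", "")
    return name == name.lower() and " " not in name

POLICIES = [check_versioning, check_naming]

def govern(service):
    """Return the list of policy violations for one service definition."""
    return [policy.__doc__ for policy in POLICIES if not policy(service)]

# A compliant definition passes; a sloppy one is flagged automatically,
# rather than relying on a reviewer to remember the rules.
print(govern({"name": "payment-clearing", "version": "2.0.1"}))
print(govern({"name": "Payment Clearing", "version": "2.0"}))
```

The specific rules are trivial on purpose; the point is that once a practice lives in executable form, it is enforced uniformly instead of being remembered unevenly across teams.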

Gardner: Kerrie, any thoughts about what’s perhaps now a critical role for governance, perhaps governance up and down the technology spectrum, design time, runtime, but also governance in terms of how the people and processes come together?

Holley: Absolutely. One of the nice things that the attention to SOA has brought to our marketplace is the recognition that we do need to focus on governance. I don’t know of a single client with an SOA implementation who has not, at a minimum, thought about governance. They may not be doing everything they want to do or should be doing, but governance clearly has everyone's attention, in terms of recognizing that it needs to be done.

So, when we look at governance, and when we look at it around SOA: IT governance is something that we’ve had for a long time. SOA governance is a subset, you could say. It complements IT governance, but at the same time it focuses our attention on the changes SOA has brought to the marketplace that require improved governance.

Services lifecycles

That governance is not only around the technology. It’s not only around the life-cycle of services. It’s not only around addressing processes and application development. Governance also focuses on the convergence that’s required between business and IT.

The synergistic relationship that we seek will be promoted through the use of governance. Change management, specifically, demands significant attention, meaning that both the business and the IT organizations and teams must focus on bringing about the results that are sought.

Examples of problems

Gardner: Jeff, in your book you identify some examples. Are there any that really stand out that we can trace back to some root cause in the software lifecycle?

Papows: There are, and it’s unfortunate. The ones that make the greatest memory points and often the national headlines, characteristically are the ones that affect the consumer broadly as opposed to the corporate ones.

Obviously, Toyota is in the headlines every day now. Actually, there was another news cycle recently about Toyota’s Lexus vehicles. The new models apparently have a glitch in the software that controls the balance system.

One of the most heartbreaking things in the research for the book was on software that controls the radiation devices in our hospitals for cancer treatment. I ran across a bunch of research where, because of some software glitches and policy problems in the way those updates were distributed, people with fairly nominal cancers received massive overdoses of radiation.

The medical professionals running these machines -- like much of our culture -- just assume that because something is computerized, it’s infallible. Because of the problems in governance, or the lack of governance policy, people were being over-radiated. Instead of small tumors being treated in a very targeted way, people’s entire upper torsos -- and unfortunately, in one case, the head and neck -- were irradiated.

There are lots of examples like that in the book that may not be as ubiquitous as Toyota, but there are many cases of widespread health, power, energy, and security risks as a consequence of the lack of policy management or governance that Kerrie was speaking to just a few minutes ago.

Gardner: Well, these examples certainly are very poignant and clearly something to avoid. I wonder if these are also perhaps just the tip of the iceberg. In addition to things that are problematic at a critical level, is there also a productivity hit? Are large aspects of work in process not nearly as optimal as they could be or are plagued by mistakes that drag down the process?

I want to take this over to Kerrie. IBM has its Smarter Planet approach. I think they're talking about the issue that we're just not nearly as efficient as we could be. What makes the headlines are these terrible issues, but what we're really talking about is a tremendous amount of waste. Aren’t we?

Things we could do better

Holley: We are. That’s exactly what it is: inefficiency. It speaks to a lot of waste and a lot of things we could do better. A lot of what we’ve been talking about from a Smarter Planet standpoint involves exactly the issues that Jeff has talked about, which is that the world is getting more instrumented. There are more sensors. There is a convergence of a lot of different technology: SOA, business process management, mobile computing, and cloud computing.

Clearly, on one end of the spectrum, it’s increasing the complexity. On the other end of the spectrum, it’s adding tremendous value to businesses, but it mandates this attention to governance.

Gardner: Jeff, in your book do you offer up some advice or solutions about what companies ought to be doing in this governance arena to deal with these glitches?

Papows: We do. We talk about what I call the IT Governance Manifesto, for lack of another catchy phrase. I make the argument that it’s almost reached the point now where we need to lobby for legislation that requires more stringent reporting of software glitches in cases where human health and life are at stake. Or, alternately, that we impose fines upon individuals or organizations responsible for cover-ups that put people at risk. Or, we simply require a level of IT governance at organizations that produce products that directly affect productivity and quality of life.

Kerrie said this really well, Dana. Remember that about 70 percent of our computer scientists in a given year are basically contending with maintaining the existing application inventories that run all of our financial transactions in core sub-systems and topologies. So, 70 percent of our human capital is there to basically keep the stuff that’s in place running.

Concurrently, we have this smarter planet, where we’ve got billions of RFID tags in motion, and 64-bit microprocessors have reached a price point where they're making their way into our dishwashers. We’ve got this plethora of hand-held devices and applications that’s exploding.

All of that is against the backdrop of this more difficult economy, where we can’t just hire more people without automation. We haven't a prayer of keeping our noses above water here.

So, God forbid that we ask the federal government, which moves at a dinosaur’s pace relative to Internet speed, to intercede and insist on some of the stuff. But, if we don’t police our own industry, if we don’t get more serious about this governance, whether it’s IBM or WebLayers or some other technological help, we run the risk of seeing the headlines we’re seeing today become completely ubiquitous.

Gardner: Kerrie, I understand that you’re also penning a book, and it’s focused on SOA. First, could you tell us about it? And then, are there any aspects of it that address this issue of governance, maybe from a self-help perspective of not waiting for some legislation or external direction on it?

Holley: The book that’s going to be out later this year is 100 SOA Questions: Asked and Answered. What my co-author [Ali Arsanjani] and I are trying to accomplish in the book -- and this distinguishes it from other SOA books in the marketplace -- is based on thousands of questions that we’ve encountered over the decade, in hundreds of projects where we’ve had first-hand roles as consultants, architects, and developers. We provide the audience with a hands-on, prescriptive understanding of some of the more difficult questions, and not just have platitudes as answers, but really give the reader an answer they can act on.

We’ve organized the content in a way that you can go by domain. If you’re a business stakeholder, you can go to particular areas. That gets back to your question, because business clearly has a big role to play here. The convergence or the relationship between business and IT has a big role to play.

You can go directly into those sections. We do talk about governance. The book is not about governance, but a good percentage of the questions are on governance. What we try to do is help organizations, clients, practitioners, and executives understand what works and what doesn’t work.

Always a choice

One of the examples, a small example, is that we always have a choice when we do a project. We can do it in a multitude of ways, but we have a lot of evidence that when governance is not applied, when it’s not automated, when it’s not thought about upfront, the expense on the back-end side is enormous. That expense could be the cost of not having the agility that you foresaw.

The expense could be not having the cost reduction that you foresaw. The expense could be the defects that Jeff has spoken about -- the glitches. There is a tremendous downside to not focusing on governance on the front-side, not looking at it in the beginning. The book really tries to ask and answer the toughest SOA questions that we’ve seen in the marketplace over the last decade.

Gardner: We’ll certainly look forward to that. Back to you Jeff. When we think about governance, it has a bit of a siloed history itself. There's the old form of management, the red-light, green-light approach to IT management. We’ve seen design-time governance, but it seems to be somewhat divorced from, even on a different plane than, runtime or operational governance.

What needs to happen in order to make governance more holistic, more end-to-end?

Papows: It’s a good question, Dana. It’s like everything else in our industry. We’re sometimes our own worst enemy and we get hung up on language, and God forbid, we create yet another acronym headache.

There's an old expression, "Everybody wants governance, but nobody wants to be governed." We run the risk, and I think we’ve tripped over it several times, where we get to the point where developers don’t want to be slowed down. There is this Big Brother-connotation at times to governance. We’ve got to explore a different cultural approach to it.

Governance, whether it’s design-time or run-time, is really about automating and codifying best practices, and it’s not done generically, as was once thought. It can be, in my experience, very specific. The things we see Ford Motor Co. doing are very different. They're germane to their IT culture and organization, and very different from what we see Bank of America do, as an example.

To Kerrie’s point about the cost of a lack of automated best practices: it isn’t always quantitative. Look at the brand damage to a bank when it shuts customers out of its ATM network, the other side of flipping the switch when it merges back-office systems. Look at the number of people whose automated payment systems and whatnot were knocked out of kilter.

The brand damage affecting major corporations is a consequence of having these inane debates about whether SOA is alive or dead, whether you need design-time governance or run-time governance. What you need is a way to automate what you are doing, so that your best practices are enforced throughout the development lifecycle.

Kerrie answered your question well when he said it really is about waste. It’s not just about wasted human capital or wasted productivity or cycles. It’s about wasted go-to-market opportunity. Remember, we're now living in the era of market-facing systems. For almost every major business enterprise, our digital footprint is directly accessible in the marketplace, whether it’s an ATM network or a hand-held device. The line between our back-office infrastructure and our consumer experience is being obliterated.

I'd argue that rather than making distinctions between design and run-time governance, companies simply, one way or another, need to automate their best practices. The business mandates of the corporations need to be reflected in an automated way that makes it manageable across the information technology life-cycle -- or you exist at your own peril.
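One hedged way to picture "automating best practices across the IT life-cycle" is a governance gate in a build pipeline that refuses to promote an artifact violating a mandated practice. The checks, thresholds, and artifact fields below are hypothetical examples, not the practices of any company mentioned in this discussion.

```python
# Hypothetical life-cycle governance gate: the business mandates live in
# code and run on every build, so enforcement does not depend on memory.
import sys

def has_owner(artifact):
    # Mandate (illustrative): every deployable artifact names an owning team.
    return ("owner" in artifact, "artifact must name an owning team")

def has_coverage(artifact):
    # Mandate (illustrative): promotion requires at least 70 percent coverage.
    return (artifact.get("test_coverage", 0.0) >= 0.70,
            "test coverage below mandated 70 percent")

def gate(artifact, checks):
    """Run every check, report violations, pass only if all checks pass."""
    failures = [msg for ok, msg in (check(artifact) for check in checks) if not ok]
    for msg in failures:
        print("GOVERNANCE VIOLATION:", msg)
    return not failures

artifact = {"owner": "payments-team", "test_coverage": 0.82}
if not gate(artifact, [has_owner, has_coverage]):
    sys.exit(1)  # a CI server would mark the build failed here
```

Whether such a gate runs at design time, build time, or run time matters less than the fact that the practice is enforced mechanically at some point in the life-cycle, which is the argument being made here.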

Gardner: Kerrie, any thoughts on this concept of governance and how we make it more ubiquitous and more enforced as the pain and the problems grow evident? The solution at a high level seems pretty clear. It seems to be the implementation where we stumble.

Governance mindset

Holley: You hit it on the head, and Jeff made the point as well. A lot of people think governance is onerous, that it’s a structure that forces people to do things a certain way. They look at it as rigid, inflexible, unforgiving. They think it just gets in the way.

That’s a mindset that people find themselves in, and it’s a reason not to do something. But when you think about the goals that you're seeking, most goals have something to do with efficiency, lower cost, customers, and making the company more agile. When you think about this, pretty much everybody in the marketplace knows that you don’t get those goals for free. There is some cultural change that’s often necessary to bring those goals about, some organizational change.

There's automation. You don’t start with automation. You actually start with the problem, the processes, and picking the right tool. But automation has to be a part of that solution. On one end of the spectrum, we’ve got to address this mindset that governance gets in the way, that it’s overhead, and that it’s unnecessary.

We know that when we peel back the onion at organizations that are very successful, that are achieving many of their goals, we see them focused on governance. One piece of advice that we all know is that you shouldn’t boil the ocean, that you should make incremental change. We also need to do this in governance.

We need to have these incremental successes, where we're focused on automation holistically, looking at the life-cycle and not just at part of the problem space.

Gardner: Jeff, it sounds like governance needs a makeover. Is there an opportunity? You're going to be discussing this book at IBM's Impact 2010 conference, their SOA conference. Is this a good opportunity? You have a lot of IT executives and software executives from a variety of enterprises on hand. What would you tell them in terms of how to make governance a bit more attractive?

Papows: We all need to say, "I am a computer science professional. We have reached a point in the complexity curve where I no longer scale." You have to start with an admission of fact. And the reality is that the demands placed on today's IT organizations -- the magnitude of the existing infrastructure that needs to continue to be cared for, and the magnitude of application demands for new systems and access points from all of this new technology -- simply are not going to be met without a completely different, highly automated approach.

Kerrie is right. You can't boil the ocean and you can’t do it at once, but you have to start with an honest self-assessment that, as an industry, we can't continue to go forward at the rate and pace that we have grown, given everything we know and that we see, without finally eating our own cooking.

Looking for automation as a way out of the hole that has been created is a consequence of the industry’s own success. We didn't get here because we failed. To be fair to all of those developers in the audience: they're going to listen to this and say, "Why am I the bad guy?" They're not the bad guys.

The reality is, as I said, that we're responsible for the greatest percentage of growth in the gross domestic product. We're responsible for the greatest percentage of workforce productivity gains. We've changed the way civilization lives and works. We've dealt a quantum leap to the texture of human existence as a consequence of this technology.

It's time that we simply admit that we need to turn back on ourselves in order to continue to manage this, or we literally, I believe, are on the precipice of the digital equivalent of a Pearl Harbor -- and the economic and productivity consequences of failing are extreme.

Gardner: Well, we'll have to leave it there. We're about out of time. We've been discussing how glitches in business have highlighted a possible breakdown in the continuity of technology, and how governance is an important factor in keeping technology on its productivity curve without it collapsing, to some degree, under its own weight.

I want to thank our guests. We have been joined today by Jeff Papows, President and CEO of WebLayers, and the author of the new book, Glitch: The Hidden Impact of Faulty Software. Thank you so much, Jeff.

Papows: Thank you, Dana, and thank you, Kerrie.

Gardner: And, we have been joined also by Kerrie Holley, an IBM Fellow as well as the CTO for IBM’s SOA Center of Excellence. Thanks for your input, and we will look forward to your book as well.

Holley: Thank you, Dana, and thank you, Jeff.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: WebLayers.

Transcript of a sponsored BriefingsDirect podcast on the growing danger from faulty software and how to overcome it. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Thursday, April 15, 2010

Information Management Takes Aim at Need for Improved Business Insights From Complex Data Sources

Transcript of a sponsored BriefingsDirect podcast on how companies are leveraging information management solutions to drive better business decisions in real time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.

Get a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today's sponsored podcast discussion delves into how to better harness the power of information to drive and improve business insights.

We’ll examine how the tough economy has accelerated the progression toward more data-driven business decisions. To enable speedy, proactive business analysis, information management (IM) has arisen as an essential ingredient for making the business intelligence (BI) behind these decisions pay off.

Yet IM itself can become unwieldy, as well as difficult to automate and scale. So managing IM has become an area for careful investment. Where, then, should those investments be made for the highest analytic business return? How do companies better compete through the strategic and effective use of their information?

We’ll look at some use case scenarios with executives from HP to learn how effective IM improves customer outcomes, while also identifying where costs can be cut through efficiency and better business decisions.

To get to the root of IM best practices and value, please join me in welcoming our guests, Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP. Welcome, Brooks.

Brooks Esser: Hi, Dana. How are you today?

Gardner: I’m great. We’re also here with John Santaferraro, Director of Marketing and Industry Communications for BI Solutions at HP. Hello, John.

John Santaferraro: Hi Dana. I’m glad to be here, and hello to everyone tuning into the podcast.

Gardner: And also, we’re here with Vickie Farrell, Manager of Market Strategy for BI Solutions at HP. Welcome to the show.

Vickie Farrell: Hi, Dana, thanks.

Gardner: Let me take our first question out to John. IM and BI in a sense come together. It’s sort of this dynamic duo in this era of cost consciousness and cost-cutting. What is it about the two together that you think is the right mix for today’s economy?

Santaferraro: Well, it’s interesting, because the customers that we work with tend to have very complex businesses, and because of that, very complex information requirements. It used to be that they looked primarily at their structured data as a source of insight into the business. More recently, the concern has moved well beyond business intelligence to look at a combination of unstructured data, text data, IM. There’s just a whole lot of different sources of information.

Enterprise IM

The idea that they can have some practices across the enterprise that would help them better manage information and produce real value and real outcomes for the business is extremely relevant. I’d like to think of it as actually enterprise IM.

Very simply, first of all, it’s enterprise, right? It’s looking across the entire business and being able to see across the business. It’s information -- all types of information: structured data, unstructured documents, scanned documents, video assets, media assets.

Then it’s the management, the effective management of all of those information assets to be able to produce real business outcomes and real value for the business.

Gardner: So the more information you can manage to bring into an analytics process, the higher the return?

Santaferraro: I don’t know that it’s exactly just "more." It’s the fact that, if you look at the information worker or the person who has to make decisions on the front line, if you look at those kinds of people, the truth is that most of them need more than just data and analysis. In a lot of cases, they will need a document, a contract. They need all of those different kinds of data to give them different views to be able to make the right decision.

Gardner: Brooks, tell me a little bit about how you view IM. Is this a life cycle we’re talking about? Is it a category? Where do we draw the boundaries around IM? Is HP taking an umbrella concept here?

Esser: We really are, Dana. We think of IM as having four pillars. The first is the infrastructure, obviously -- the storage, the data warehousing, information integration that kind of ties the infrastructure together. The second piece, which is very important, is governance. That includes things like data protection, master data management, compliance, and e-discovery.

The third, to John’s point earlier, is information processes. We start talking about paper-based information, digitizing documents, and getting them into the mix. Those first three pillars taken together really form the basis of an IM environment. They’re really the pieces that allow you to get the data right.

The fourth pillar, of course, is the analytics -- the insight that business leaders can get from the information. The two, obviously, go hand in hand. A rugged information infrastructure without good analytics isn’t any better than a poor infrastructure with solid analytics. Getting both pieces of that right is very, very important.

Gardner: Vickie, if we take that strong infrastructure and those strong analytics and we do it properly, are we able to take the fruits of that out to a wider audience? Let’s say we are putting these analytics into the hands of more people that can take action.

Very important

Farrell: Yes, it is very important that you do both of those things. A couple of years ago, I remember, a lot of pundits were talking about BI becoming pervasive, because tools have gotten more affordable and easier to use. Therefore anybody with a smartphone or PDA or laptop computer was going to be able to do heavy-duty analysis.

Of course, that hasn’t happened. It's more than the tools themselves that limits the wide use of BI. One of the biggest issues is the integration of the data, the quality of the data, and having a data foundation in an environment where the users can really trust it and use it to do the kind of analysis that they need to do.

What we’ve seen in the last couple of years is serious attention on investing in that data structure -- getting the data right, as we put it. It's establishing a high level of data quality, a level of trust in the data for users, so that they are able to make use of those tools and really glean from that data the insight and information that they need to better manage their business.
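Farrell's "getting the data right" point can be sketched in code. The following is a hypothetical illustration only -- the field names and validation rules are invented for the example, not drawn from any HP product:

```python
# Toy row-level data-quality checks of the kind a "trusted data
# foundation" relies on. Fields and rules here are hypothetical.

def check_row(row, required=("customer_id", "amount")):
    """Return a list of quality problems found in one record."""
    problems = []
    for field in required:
        if not row.get(field):
            problems.append(f"missing {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        problems.append("negative amount")
    return problems

def quality_report(rows):
    """Summarize how many rows pass, and why the rest fail."""
    failures = {}
    passed = 0
    for row in rows:
        problems = check_row(row)
        if problems:
            for p in problems:
                failures[p] = failures.get(p, 0) + 1
        else:
            passed += 1
    return {"passed": passed, "failed": len(rows) - passed, "reasons": failures}
```

Running such checks as data lands -- rather than inside each report -- is what lets every downstream user trust the same numbers.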

Esser: We can’t overemphasize that, Dana. There's a great quote by Mark Twain, of all people, who said it isn’t what you don’t know that gets you into trouble -- it’s what you know for certain that just isn’t so. That really speaks to the point Vickie made about quality of data and the importance of having high-quality data in our analytics.

Gardner: We’re defining IM fairly broadly here, but how do we then exercise what we might consider due diligence in the enterprises -- security, privacy, making the right information available to people and then making sure the wrong people don’t have it? How do you apply that important governance pillar, when we’re talking about such a large and comprehensive amount of information, Brooks?

Esser: I think you have to define governance processes as you’re building your information infrastructure. That’s the key to everything I talked about earlier -- the pillars of a solid IM environment. One of the key ones is governance, which covers protecting data, quality, compliance, and the whole idea of master data management -- limiting access and making sure that the right people have access to input data and that the data is of high quality.
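Esser's description of governance -- limiting access so the right people touch the right data -- might look, in miniature, like a role-to-domain policy table. The roles and data domains below are invented for illustration, not part of any real governance product:

```python
# A toy sketch of the access-control side of data governance:
# map roles to the data domains they may read or write.
# Roles and domains are hypothetical.

POLICY = {
    "analyst": {"read": {"sales", "marketing"}, "write": set()},
    "steward": {"read": {"sales", "marketing", "customer_master"},
                "write": {"customer_master"}},
}

def is_allowed(role, action, domain):
    """Check one (role, action, domain) request against the policy."""
    entry = POLICY.get(role)
    if entry is None:
        return False  # unknown roles are denied by default
    return domain in entry.get(action, set())
```

Deny-by-default for unknown roles is the design choice that keeps "making sure the wrong people don't have it" enforceable as the user base grows.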

Farrell: In fact, we recently surveyed a number of data warehouse and BI users. We found that 81 percent of them either have a formal data governance process in place or they expect to invest in one in the next 12 months. There's a lot of attention on that, as Brooks was talking about.

Gardner: Now, as we also mentioned earlier, the economy is still tough. There is less discretionary spending than we’ve had in quite some time. How do you go to folks and get the rationale for the investment to move in this direction? Is it about cost-cutting? Is it about competitiveness? Is it about getting a better return on their infrastructure investments? John, do you have a sense of how to validate the market for IM?

Santaferraro: It’s really simple. By effectively using the information they have and further leveraging the investments that they’ve already made, there is going to be significant cost savings for the business. A lot of it comes out of just having the right insight to be able to reduce costs overall. There are even efficiencies to be had in the processing of information. It can cost a lot of money to capture data, to store it, and cleanse it.

Cleansing alone can be up to 70 percent of the cost of the data. Then there's figuring out your retention strategies. All of that is very expensive. Obviously, the companies that figure out how to streamline the handling and the management of their information are going to have major cost reductions overall.

Gardner: What about the business outcomes? Brooks, do we have a sense of what companies can do with this? If they do it properly, as John pointed out, how does that further vary the profitability, their market penetration, or perhaps even their dominance?

The way to compete

Esser: Dana, it’s really becoming the way that leading edge companies compete. I’ve seen a lot of research that suggests that CEOs are becoming increasingly interested in leveraging data more effectively in their decision-making processes. It used to be fairly simple. You would simply identify your best customers, market like heck to them, and try to maximize the revenue derived from your best customers.

Now, what we’re seeing is emphasis on getting the data right and applying analytics to an entire customer base, trying to maximize revenue from a broader customer base. We’re going to talk about a few cases today where entities got the data right, they now serve their customers better, reduced cost at the same time, and increased their profitability.

Gardner: We’ve talked about this at a fairly high level. I wonder if we could get a bit more specific. I’m curious about what is the problem that IM solves that then puts us in a position to leverage the analytics, put it in the hands of the right people, and then take those actions that cut the costs and increase the business outcome. I’m going to throw this out to anybody in our panel. What are the concrete problems that IM sets out to solve?

Esser: I’ll pick that up, Dana. Organizations all over the world are struggling with an expansion of information. In some companies, you’re seeing data doubling one year over the next. It’s creating problems for the storage environment. Managers are looking at processes like de-duplication to try to reduce the quantity of information.

Lots of information is still on paper. You’ve got to somehow get that into the mix, into your decision-making process. Then you have things like RFID tags and sensors adding to the expansion of information. There are legal requirements. When you think about the fact that most documents, even instant messages, are now considered business records, you’ve got to figure a way to capture that.
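The de-duplication Esser mentions is commonly implemented by content hashing, so identical documents are stored once no matter how many times they arrive. A minimal sketch, not tied to any particular storage product:

```python
# Content-hash de-duplication: store each unique blob once and
# hand back its digest as the handle for later retrieval.
import hashlib

class DedupStore:
    def __init__(self):
        self._blobs = {}          # digest -> content
        self.duplicates_skipped = 0

    def add(self, content: bytes) -> str:
        """Store content once; return its SHA-256 digest as a handle."""
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._blobs:
            self.duplicates_skipped += 1
        else:
            self._blobs[digest] = content
        return digest

    def unique_count(self) -> int:
        return len(self._blobs)
```

The same email attachment forwarded to a hundred people then costs one copy of storage plus a hundred small digests.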

Then, you’re getting pressure from business leaders for timely and accurate information to make decisions with. So, the challenge for a CIO is that you’ve got to balance the cost of IT, the cost of governance and risk issues involved in information, while at the same time, providing real insight to your business unit customer. It’s a tough job.

Santaferraro: If I could throw another one in there, Dana, I recently talked to a couple of senior IT leaders, and both of them were in the same situation. They’ve been doing BI and IM for 10-plus years in their organization. They had fairly mature processes in place, but they were concerned with trying to take the insight that they had gleaned and turn it into action.

Along with all of the things that were just described by Brooks, there are a lot of companies out there that are trying to figure out how to get the data that last mile to the person on the front line who needs to make a decision. How do I get it to them in a very simple format that tells them exactly what they need to do?

So, it’s turning that insight into action -- getting it to the teller in a bank, getting it to the clerk at the point of sale, or the ATM, or the web portal, when somebody is logging onto a banking system or a retail site.

Along with all of that, there is this new need to find a way to get the data that last mile to where it impacts a decision. For companies, that’s fairly complex, because that could mean millions of decisions every day, as opposed to just getting a report to an executive.

That whole world of the information worker and the need to use the information has changed as well, driving the need for IM.

Analyze the data

Farrell: Dana, you asked what the challenges are, and one that we see a lot is that people need to analyze the data, so they'll traipse from data mart to data mart and pull data together manually. It’s time-consuming and it’s expensive. It’s fraught with error, and the fact that you have data stored in all these different data marts just means that you’re going to have redundant data that’s going to be inconsistent.

Another problem is that you’ll end up with reports from different people and different departments, and they won’t match. They will have used different calculations, different definitions for business terms. They will have used different sources for the data. There is really no consistent reconciliation of all of this data and how it gets integrated.

This causes really serious problems for companies. That’s really what IM is going to help people overcome. In some cases, it doesn’t really cost as much as you’d think, because when you do IM properly, you're actually going to see some savings and correction of some of those things that I just talked about.
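One way to read Farrell's point about mismatched reports is that each business metric should have exactly one definition that every department's report calls. A toy sketch -- the "net revenue" metric and its fields are invented for illustration:

```python
# A single registry of metric definitions, so two departments
# can never compute "the same" number two different ways.

METRICS = {}

def metric(name):
    """Register a function as the one definition of a metric."""
    def wrap(fn):
        METRICS[name] = fn
        return fn
    return wrap

@metric("net_revenue")
def net_revenue(orders):
    # One agreed calculation: amount minus any refund per order.
    return sum(o["amount"] - o.get("refund", 0) for o in orders)

def report(metric_name, data):
    """Every report resolves metrics through the shared registry."""
    return METRICS[metric_name](data)
```

If finance and marketing both call `report("net_revenue", ...)`, their numbers can differ only because their input data differs, which is exactly the disagreement you want surfaced.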

Gardner: It also seems to me, if you look at a historic perspective, that many of these information workers we're talking about didn’t even try to go after this sort of analytic information. They knew that it wasn’t going to be available to them. They’d probably have to wait in line.

But, if we open the floodgates and make this information available to them, it strikes me that they are going to want to start using it in new and innovative ways. That’s a good thing, but it could also tax the infrastructure and the processes that have been developed.

How do we balance an expected increase in the proactive seeking of this information? I guess we are starting to talk about the solution to IM. If we're good at it and people want it, how do we scale it? How do we ramp it up? What about that, John? How do we start in on the scaling and the automation aspect of IM?

Santaferraro: With our customers, some of the strategy and planning that we do up front helps them define IM practices internally and create things like an enterprise information competency center where the business is aligned with IT in a way that they are actually preparing for the growth of information usage. Without that close alignment between business and IT, a tie of the IT project to real business outcomes, and that constant monitoring by that group, it could easily get out of hand. The strategy and planning upfront definitely helps out.

Farrell: I'll add to that. The more effectively you bring together the IT people and the business people and get them aligned, the better the acceptance is going to be. You certainly can mandate use of the system, but that’s really not a best practice. That’s not what you want to do.

By making the information easily accessible and relevant to the business users and showing them that they can trust that data, it’s going to be a more effective system, because they are going to be more likely to use it and not just be forced to use it.

Esser: Absolutely, Vickie. When you think about it, it really is the business units within most enterprises that fund activities via a tax or however they manage to pay for these things. Doing it right means having those stakeholders involved from the very beginning of the planning process to make sure they get what they need out of any kind of an IT project.

Access a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Gardner: It strikes me that we have a real virtuous cycle at work here, where the more people get access to better information, the more action they can take, the more value is perceived in the information, the more demand for the information, the more the IT folks can provide it, and so on.

Has anybody got an example of how that might show up in the real world? Do we have any use cases that capture that virtuous adoption benefit?

Better customer service

Farrell: Well, one comes to mind. It’s an insurance company that we have worked with for several years. It’s a regional health insurance company faced with competition from national companies. They decided that they needed to make better use of their data to provide better services for their members, the patients as well as the providers, and also to create a more streamlined environment for themselves.

And so, to bring the IT and business users together, they developed an enterprise data warehouse that would be a common resource for all of the data. They ensured that it was accurate and they had a certain level of data quality.

They had outsourced some of the health management systems to other companies. Diabetes was outsourced to one company. Heart disease was outsourced to another company. It was expensive. By bringing it in house, they were able to save the money, but they were also able to do a better job, because they could integrate the data from one patient, and have one view of that patient.

That improved the aggregate wellness score overall for all of their patients. It enabled them to share data with the care providers, because they were confident in the quality of that data. It also saved them some administrative cost, and they recouped the investment in the first year.

Gardner: Any other examples, perhaps examples that demonstrate how IM and HP’s approach to IM come together?

Farrell: Another thing that we're doing is working with several health organizations in states in the US. We did one project several years ago and we are now in the midst of another one. The idea here is to integrate data from many different sources. This is health data from clinics, schools, hospitals, and so on throughout the state.

This enables you to do many things like run programs on childhood obesity, for example, assess the effectiveness of the program, and assess the overall cost and the return on the investment of that program. It helps to identify classes of people who need extra help, who are at risk.

Doing this gives you the opportunity to bring together and integrate in a meaningful way data from all these different sources. Once that’s been done, that can serve not only these systems, but also some of the more real-time systems that we see coming down the line, like emergency surveillance systems that would detect terrorist threats, bioterrorism threats, pandemics, and things like that.

It's important to understand and be able to get this data integrated in a meaningful way, because more real-time applications and more mission-critical applications are coming and there is not going to be the time to do the manual integration that I talked about before.

Gardner: It certainly sounds like a worthwhile thing. It sounds like the return on investment (ROI) is strong and that virtuous adoption is very powerful. So, John Santaferraro, what is it that HP does that could help companies get into the IM mode?

Obviously, this is not just something you buy and drop in. It's more than just methodologies as well. What are the key ingredients, and how does HP pull them together?

Bringing information together

Santaferraro: We find that a lot of our customers have very disconnected sets of intelligence and information. So, we look at how we can bring that whole world of information together for them and provide a connected intelligence approach. We are actually a complete provider of enterprise class industry-specific IM solutions.

There are a lot of areas where we drill down and bring in our expertise. We have expertise around several business domains like customer relationship management, risk, and supply chain. We go to market with specific solutions for 13 different industries. As a complete solution provider, we provide everything from infrastructure to financing.

Obviously, HP has all of the infrastructure that a customer needs. We can package their IM solution in a single finance package that hits either CAPEX or OPEX. We've got software offerings. We've got our consulting business that comes in and helps them figure out how to do everything from the strategy that we talked about upfront and planning to the actual implementation.

We can help them break into new areas where we have practices around things like master data management or content management or e-discovery.

Across the entire IM spectrum, we have offerings that will help our customers solve whatever their problems are. I like to approach our customers and say, "Give us your most difficult and complex information challenge and we would love to put you together with people who have addressed those challenges before and with technology that’s able to help you do it and even create innovation as a business."

When we've come in and laid the IM foundation for our customers and given them a solid technology platform -- Neoview is a great example -- we find that they begin to look at what they've got. It really triggers a whole lot of brand-new innovation for companies that are doing IM the right way.

Gardner: Given these vertical industries, I imagine there are some partners involved -- specialists in specific regions as well as specific industries. Brooks, is there an ecosystem at work here as well, and how does that shape up?

Esser: Absolutely, Dana. Everyone in the IM market partners with other firms to some extent. We've chosen some strategic partners that complement our capabilities as well. For example, we team with Informatica for our data integration platform and SAP BusinessObjects and MicroStrategy for our BI platform.

We work with a company called Clearwell, and we leverage their e-discovery platform to deliver a solution that helps customers leverage the information in their corporate email systems. We work with Microsoft to deliver HP Enterprise Content Management Solution. So we really have an excellent group of go-to-market partners to leverage.

Gardner: We've talked about the context of the market, why the economy is important, and we looked at some of the imperatives from a business point of view, why this is essential to compete, what problems you need to overcome, and the solution.

So, in order to get toward this notion of a payback, it's important to know where to get started. There seem to be so many inception points, so many starting points. Let me take this to you, John. How do you take the holistic, comprehensive approach while, at the same time, breaking this into parts that are manageable?


Best practices

Santaferraro: One of the things that we have done is made our best practices available and accessible to our customers. We actually operationalize them. A lot of consulting companies will come and plop a big fat manual on the desk and say we have a methodology.

We've created an offering called the methodology navigator which actually walks the customers through the entire project in an interactive environment, where depending on whatever step of the project they are in, they can click on a little box that represents that step and quickly access templates, accelerators, and best practices that are directly relevant to that particular step.

We look at this holistic approach, but we also break it down into best practices that apply to every single step along the way.

Gardner: This whole thing sounds like a no-brainer to me. I don’t know whether I am overly optimistic, but I can see applying more information to your personal life, your small business as well as your department and then of course, your comprehensive enterprise.

I think we're entering into a data-driven decade. More data means better decisions and more productivity. It's how you grow. Brooks, why do you think it’s a no-brainer? Am I overstating the case?

Esser: I don’t think you are, Dana. It's how leading edge companies are going to compete, particularly in a tough and volatile economy, as we have seen over the last five to eight years. It's really simple. Better information about your customers can help you drive incremental revenue from your existing customer base. The cool part about it is that better information can also help you prevent loss of the customers you already have. You know them better and know how to keep them satisfied.

Every marketer knows that it's a lot less expensive to keep a current customer than it is to go out and acquire a new one. So the ROI for IM projects can be phenomenal and, to your point, that makes it kind of a no-brainer.

Gardner: Vickie, we apply this to customers, we apply it to patients, payers, end-users, but are there other directions to point this at? Perhaps supply chain, or perhaps cloud computing and multiple sources -- finding social media metadata about processes, customers, and suppliers. Are we only scratching the surface in a sense of how we apply IM?

Farrell: I think we probably are. I don’t know that there are any industries that can't make use of better organizing their data and better analyzing their data and making use of that insight that they’ve gained to make better decisions. In fact, across the board, one of the biggest issues that people have is making better decisions.

In some cases, it's providing information to humans through reports or queries, so that they can make the decisions. What we're going to be seeing -- and this gets to what you were talking about -- is that when data is coming in in real time from sensors and things like that, it has location context. It's very rich data, and it provides you with a lot of information and a lot of variables to make the best decisions based on all those variables that are taking place at that time.

Where once we were maybe developing a handful of possible scenarios and picking the closest one, we don’t have to do that anymore. We can really make use of all of that information and make the absolute best decision right then and there. I don’t really think that there are any industries or domains that can't make use of that kind of capability.

Capturing more data

Santaferraro: Dana, I love what we are doing in the oil and gas industry. We have taken the sensors from our printers, and they are some of the most sensitive sensors in the world, and we are doing a project with Shell Oil, where we are actually embedding our sensors at the tip of a drill head.

As it goes down, it's going to capture seismic data that is 100 times more accurate than anything that's been captured in the past. It's going to send it up through a thing called IntelliPipe, a five-megabyte feed that goes up through the drill pipe and back to the wellhead, where we will be capturing it in real time.

Seismic data tends to be dirty by nature. It needs to be cleansed. So, we're building a real-time cleansing engine to cleanse that data, and then we are capturing it on the back-end in our digital oil field intelligence offering. It's really fun to see as the world changes, there are all these new opportunities for collecting and using information, even in industries that tend to be a little more traditional and mechanical.
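Santaferraro's real-time cleansing engine is proprietary, but the general shape of stream cleansing -- dropping unreadable samples and clamping physically implausible values as data arrives -- can be sketched like this. The thresholds and filter rules are invented for illustration:

```python
# A toy streaming cleansing filter: yield cleaned readings one at
# a time instead of batch-fixing the data after it lands.
# The plausible range is hypothetical, not a real seismic spec.

def cleanse_stream(samples, low=-1000.0, high=1000.0):
    """Yield cleaned numeric readings from a raw sample stream."""
    for raw in samples:
        try:
            value = float(raw)
        except (TypeError, ValueError):
            continue  # unreadable sample: drop it
        # Clamp outliers into the plausible range rather than lose them.
        yield min(max(value, low), high)
```

Because it is a generator, the filter works on a live feed the same way it works on a file, which is what "cleansing in real time" amounts to.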

Gardner: That's a very interesting point -- that the more precise we get with instrumentation, the more data we have and the more opportunity to work with it. Crunching that in real time offers us a predictive aspect rather than a reactive one.

As I said, it's been compelling and a no-brainer for me. John, you mentioned an on-ramp to this -- that it's really the methodological approach. Are there resources, places people can go to get more information, to start determining where in their organization they will get their highest returns, perhaps focus there, and then start working outward toward that more holistic benefit?

Let me go to you first, Brooks. Where can people go for more information?

Esser: Of course, I'm going to tell folks to talk to their HP reps. In the course of our discussion today, it's pretty obvious that IM projects are huge undertakings, and we understand that. So, we offer a group of assessment and planning services. They can help customers scope out their projects.

We have a couple of ways to get started. We can start with a business value assessment service. This is a service that sets people up with a business case and tracks ROI once they decide on a project. The interesting piece of that is they can choose to focus on data integration, master data management, what have you.

You look at the particular element of IM and build a project around that. This assessment service allows people to identify the element in their current IM environment that will give them the best ROI. Or, we can offer them a master planning service, which generates a really comprehensive IM plan -- everything from data protection and information quality to advanced analytics.

So, it's really up to the customers in terms of how they want to start out, taking a look at the element of their IM environment, or if they want us to come in and look at the entire environment, we can say, "Here's what you need to do to really transform the entire IM environment."

Obviously, you can get details on those services and our complete portfolio for that matter at www.hp.com/go/bi and www.hp.com/go/im.

Gardner: Vickie, any sense of where you would point people when they ask do I get started, where can I get more information?

Farrell: Well, I think Brooks covered it. All of our information is at www.hp.com/go/bi. We also have another site that's www.hp.com/go/neoview. There is some specific information about the Neoview Advantage enterprise data warehouse platform there.

Gardner: Very well. John Santaferraro, how about from a professional services and solutions perspective; any resources that you have in mind?

Santaferraro: Probably the hottest topic that I have heard from customers in the last year or so has been around the development of the BI competency center. Again if you go to our BI site, you will find some additional information there about the concept of a BICC.

And the other trend that I'm seeing is that a lot of companies want to move beyond just the BI space with that kind of governance. They want to create an enterprise information competency center, expanding beyond BI to include all of IM.

We have got some great services available to help people set those up. We have customers that have been working in that kind of a governance environment for three or four years. The beautiful thing is that companies that have been doing this for three or four years are doing transformational things for their business.

They are really closely tied to business mission, vision, and objectives, versus other companies that are doing a bunch of one-off projects. One customer had recently spent $11 million on a project over the last year, and they were still trying to figure out where they were going to get value out of it.

Again, head over to our BI website, type in BICC, and do a search. There is some great documentation there that I think you will find helpful in setting up the governance side.

Gardner: Well, great. We've been talking about a natural progression toward data-driven business decisions and using IM to scale that and bring more types of data and content into play. I want to thank our guests for today's podcast. We've been joined by Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP. Thank you, Brooks.

Esser: Thanks very much for having me, Dana.

Gardner: John Santaferraro. He is the Director of Marketing and Industry Communications for BI Solutions. Thank you, John.

Santaferraro: Thanks, Dana. Glad to be here.

Gardner: And also, Vickie Farrell, Manager of Market Strategy for BI Solutions. Thanks so much.

Farrell: Thank you, Dana. This is a pleasure.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: HP.


Transcript of a sponsored BriefingsDirect podcast on how companies are leveraging information management solutions to drive better business decisions in real time. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, April 13, 2010

Fog Clears on Proper Precautions for Putting More Enterprise Data Safely in Clouds

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Security strategies based on cloud computing solutions should therefore be a priority, they argue, and should prompt even more enterprise data to be stored, shared, and analyzed in clouds under strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don'ts of cloud computing and corporate data. Please join me in welcoming Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Welcome back, Christian.

Christian Verstraete: Thank you.

Gardner: We're also here with Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's working on a new book, The Concise Guide to Cloud Computing. Welcome back to the show, Archie.

Archie Reed: Hey, Dana. Thanks.

Gardner: It strikes me that companies around the world are already doing a lot of their data and applications activities in what we could loosely call "cloud computing," cloud computing being a very broad subject and the definition being rather flexible.

Let me take this first to you, Archie. Aren’t companies already doing a lot of cloud computing? Don’t they already have a great deal of transactions and data that’s being transferred across the Web, across the Internet, and being hosted on a variety of either internal or external servers?

Difference with cloud

Reed: I would certainly agree with that. In fact, if you look at the history that we're dealing with here, companies have been doing those sorts of things with outsourcing models, or sharing with partners, or indeed community-type environments for some time. The big difference with this thing we call cloud computing is that the vendors advancing the space have not developed comprehensive service-level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies that are offering cloud services get those capabilities, at least today, are by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Gardner: Christian, it seems as if we have more organizations that are saying, "We can provide cloud services," even though those services have been things that have been done for many years by other types of companies. But we also have enterprises seeking to do more types of applications and data-driven activities via these cloud providers.

So, we’re expanding the universe, if you will, of both types of people involved with providing cloud services and types of data and applications that we would use in a cloud model. How risky is it, from your perspective, for organizations to start having more providers and more applications and data involved?

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns, when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs, the very limited pieces of information that you receive on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data got public in one way, form, or shape, and assess what the implications are.

As companies are required to work more closely with the rest of their ecosystem, cloud services are an easy way to do that. The concept is reasonably well-known under the label of community cloud, and it's one that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really understanding, on one hand, what you get into and assessing what makes sense and what doesn’t make sense, what’s really critical for you and what is less critical.

Gardner: Archie, it sounds as if we’re in a game of catch-up, where the enticements of the benefits of cloud computing have gotten ahead of the due diligence and managing of the complexity that goes along with it. If you subscribe to that, then perhaps you could help us in understanding how we can start to close that gap.

To me one recent example was at the RSA Conference in San Francisco, the Cloud Security Alliance (CSA) came out with a statement that said, "Here’s what we have to do, and here are the steps that need to be taken." I know that HP was active in that. Tell me if you think we have a gap and how the CSA thinks we can close it.

Reed: We’re definitely in a situation where a number of folks are rushing toward the cloud on the promise of cost savings and things like that. In fact, in some cases, people are generally finding that as they realize they have risk, more risk than they thought they did, they’re actually stepping back a little bit and reevaluating things.

A prime example of this was just last week, a week after the RSA Conference, the General Services Administration (GSA) here in the U.S. actually withdrew a blanket purchase order (BPO) for cloud computing services that they had put out only 11 months before.

They gave two reasons for that. The first reason was that technology had advanced so much in that 11 months that their original purchase order was not as applicable as it was at that time. But the second reason, perhaps more applicable to this conversation, was that they had not correctly addressed security concerns in that particular BPO.

Take a step back

In that case, it shows we can rush toward this stuff on promises, but once we really start to get into the cloud, we see what a mess it can be and we take a step back. As far as the CSA, HP was there at the founding. We did sponsor research that was announced at RSA around the top threats to cloud computing.

We spoke about what we called the seven deadly sins of cloud. Just fortuitously, we came up with seven at the time. I will point out that this analysis was also focused more on the technical than on specific business risk. But one of the threats was data loss or leakage. In that, you have examples such as insufficient authentication and authorization, but also lack of encryption or inconsistent use of encryption, operational failures, and data center reliability. All these things point to how to protect the data.

One of the key things we put forward as part of the CSA was to try and draw out key areas that people need to focus on as they consider the cloud and try and deliver on the promises of what cloud brings to the market.

Gardner: Correct me if I am wrong, but one of the points that the CSA made was the notion that, by considering cloud computing environments and methodologies and scenarios, you can actually make your general control and management of data improved by moving in this direction. Do you subscribe to that?

Reed: Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources, which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third part of that validating compliance is the traditional governance, risk, and compliance management aspects. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

So this seems to make sense, whether you're talking to a CEO, a CIO, or a developer. And it also makes sense whether you're talking about internal resources or going to the cloud. Does that make sense?

Gardner: Sure, it does. So getting it right means that you have more options in terms of what you can do in IT?

Reed: Absolutely.

Gardner: That seems like a pretty obvious direction to go in. Now, Christian, we talked a little bit about the technology standards and methods for approaching security and data protection, but there is more to the cloud computing environment. What I'm referring to is compliance, regulation, and local laws. It strikes me that there is a gap, maybe even a chasm, between where cloud computing allows people to go and where the current laws and regulations are.

Perhaps you could help us better understand this gap and what organizations need to consider when they are thinking about moving data to the cloud vis-a-vis regulation.

A couple of caveats

Verstraete: Yes, it's actually a very good point. If you really look at the vision of the cloud, it's, "Don't care about where the infrastructure is. We'll handle all of that. Just get the things across and we'll take care of everything."

That sounds absolutely wonderful. Unfortunately, there are a couple of caveats, and I'll take a very simple example. When we started looking at the GS1 Product Recall service, we suddenly realized that some countries require information related to food that is produced in that country to remain within the country's boundaries.

That goes against this vision of clouds, in which location becomes irrelevant. There are a lot of examples, particularly around privacy aspects and private information, that makes it difficult to implement that complete vision of dematerialization, if I can put it that way, of the whole power that sits behind the cloud.

Why? Because the EU, for example, has very stringent rules around personal data and only allows countries that have similar rules to host their data. Frankly, there are only a couple of countries in the world, besides the 27 countries of the EU, where that's applicable today.

This means that if I take an example where I use a global cloud, with some data centers in the US and some data centers in Europe, and I want to put some private data in there, I may have some issues. How does that data proliferate across the multiple data centers that the service actually uses? What is the guarantee that all of the data centers that will host and contain my data, its replication, and its backups are within the geographical boundaries acceptable under European legislation?
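The question Christian raises, whether every replica of a dataset sits in an acceptable jurisdiction, can be sketched as a simple policy check. The region names and the "adequate" list below are hypothetical; a real assessment would rest on legal review of the actual legislation, not a lookup table:

```python
# Illustrative check of a cloud service's replica placement against a
# data-residency policy. Region codes and the adequacy list are hypothetical.
EEA_ADEQUATE = {"de", "fr", "ie", "nl", "ch"}  # jurisdictions deemed acceptable

def residency_violations(replica_regions, allowed=EEA_ADEQUATE):
    """Return the set of regions hosting replicas outside the allowed list."""
    return set(replica_regions) - set(allowed)

# A provider replicating EU personal data into a US data center would fail:
assert residency_violations({"de", "fr"}) == set()
assert residency_violations({"de", "us-east"}) == {"us-east"}
```

The hard part, as the discussion notes, is that providers rarely disclose the full replica list needed as input to such a check.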

I'm just taking that as an example, because there is other legislation in the US that is state-based and has the same type of approach and the same type of issues. So, on the one hand, we still have a very locally oriented legislative environment, and on the other a globally oriented vision for the cloud. In one way, form, or shape, we'll have to address the dichotomy between the two for the cloud to really be able to take off from a legal perspective.

Reed: Dana, if I may, the bottom line is that data can be classed as global, whereas legislation is generally local. That's the basis of the problem here. One of the ways in which I would recommend folks consider this, when you start talking about data loss, data protection, and that sort of stuff, is having a data-classification approach that allows you to determine, or at least deploy, certain logic and rules, and think about how you're going to use the data and in what way.

If you go to the military, the government, public sector, education, and even energy, they all have very structured approaches to the data that they use. That includes understanding how this might be used by third parties and things like that. You also see some recent stuff.

Back in 2008, I think it was, the UK came up with a data handling review, which was in response to public sector data breaches. As a result, they released a security policy framework that contains guidance and policies on security and risk management for the government departments. One of the key things there is how to handle data, where it can go, and how it can be used.

Trying to streamline

What we find is that, despite this conflict, there are a lot of approaches being put into play. The goal of anyone going into this space, as well as what we are trying to promote with the CSA, is to streamline that stuff and, if possible, influence the right people so as to avoid creating conflicting approaches and conflicting classification models.

Ultimately, when we get to the end of this, hopefully the CSA or a related body that is either more applicable or willing will create something that will work on a global scale or at least as widely as possible.

Gardner: So, for those companies interested in exploring cloud, it's by no means a cakewalk. They need to do their due diligence in terms of technology and procedures, governance and policies, as well as regulatory compliance and, I suppose you could call it, localization issues.

Is there a hierarchy apparent to either of you about where to start, in terms of the safer types of data and the easier types of applications? One that lets you move toward some of these principles, which are probably things you should be doing already, while enjoying some of the rewards and mitigating the risks?

Reed: There are two approaches there. One of the things we didn't say at the outset was there are a number of different versions of cloud. There are private clouds and public clouds. Whether you buy into private cloud as a model, in general, the idea there is you can have more protections around that, more controls, and more understanding of where things are physically.

That's one approach to understanding, or at least achieving, some level of protection around the data. If you control the assets, you're able to control where they're located. If you go into the public cloud, then those data-classification things become important.

If you look at some of the government standards, like classified, restricted, or confidential, once you start to understand how to apply the data models and the classifications, then you can decide where things need to go and what protections need to be in place.
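Archie's point about classification levels driving placement can be sketched as a simple mapping from level to permitted deployment targets. The level and target names here are illustrative, not any government's actual scheme:

```python
# Hypothetical classification levels mapped to permitted deployment targets.
# Names are illustrative only, not a real classification standard.
ALLOWED_TARGETS = {
    "public":       {"public_cloud", "community_cloud", "private_cloud"},
    "restricted":   {"community_cloud", "private_cloud"},
    "confidential": {"private_cloud"},
}

def may_place(classification: str, target: str) -> bool:
    """True if data at this classification level may live on this target."""
    return target in ALLOWED_TARGETS.get(classification, set())

# Publicly available data can go anywhere; confidential data stays private.
assert may_place("public", "public_cloud")
assert not may_place("confidential", "public_cloud")
```

The value of such a table is less the code than the exercise of building it, which is exactly the classification work both guests urge enterprises to do before moving data out.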

Gardner: Is there a progression, a logical progression, that appears to you about how to approach this, given that there are still disparities in the field?

Reed: Sure. You start off with the simplest classification of data. If it's unprotected, if it's publicly available, then you can put it out there with some reasonable confidence that, even if it is compromised, it's not a great issue.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for the enterprise, and what the things are that, if they leak out, if they get known, it's not too bad. It's not great in any case, but it's not too bad. And that data classification that Archie was just talking about is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to the public clouds.

I've seen too many companies jumping in without that step and being burnt in one way, form, or shape. It's sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go, that are absolutely critical? On the other hand, what are the things that I quite frankly don't care too much about?" It's building that understanding that is actually critical.

Gardner: Perhaps there is an instance that will illustrate what we're talking about. I hear an awful lot about platform as a service (PaaS), which is loosely defined as doing application development activities in a cloud environment. I talk to developers who are delighted to use cloud-based resources for things like testing and to explore and share builds and requirements in the early stages.

At the same time, they're very reluctant to put source code in someone else's cloud. Source code strikes me as just a form of data. Where is the line between safe good cloud practices and application development, and when would it become appropriate to start putting source code in there as well?

Combination of elements

Verstraete: There are a number of answers to your question, and they're related to a combination of elements. The first thing is gaining as much understanding as you can, which is not easy, of what the protection mechanisms in the cloud service actually are.

Today, because of the term "cloud," most of the cloud providers are getting away with providing very little information, setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major either infrastructure-as-a-service (IaaS) or PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers can know, and that's going to make it easier for them to hack the environment and to limit our security."

There is a point there, but that makes it difficult for people for whom source code, as in your example, is relevant and important, because you have source code that's not too sensitive and source code that's very critical. To put that source code in the cloud without knowing what's actually being done is probably worse than being able to make a very clear risk assessment. Then, you know the level of risk you are taking. Today, you don't know in many situations.

Gardner: Alright, Archie.

Reed: There are a couple of points that need to be made. First off, when we think about things like source code or data like that, there is a point where data is stored and sits at rest. Until you start to use it, it has no impact, if it's encrypted, for example.

So, if you're storing source code up there, it's encrypted, and you hold the keys, which is one of the key tenets that we would advocate for anyone thinking about encrypting anything in the cloud, then maybe there is a level of satisfaction and compliance that you can achieve with that type of model.
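Archie's encrypt-locally, hold-the-keys pattern can be sketched roughly as follows: the key is generated and kept on your side, and only opaque ciphertext ever reaches the cloud store. The cipher here is a toy keystream construction for illustration only; a real deployment would use a vetted algorithm such as AES-GCM:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + nonce + counter.
    # Toy construction for illustration; use AES-GCM or similar in practice.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct  # nonce travels with the ciphertext; the key does not

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)          # held on-premise, never uploaded
blob = encrypt(key, b"proprietary source code")
# Only `blob` goes to the cloud store; without `key` it is opaque.
assert decrypt(key, blob) == b"proprietary source code"
```

Because the provider never sees the key, its weak or undisclosed SLA matters less for confidentiality at rest, though availability and key management remain your problem.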

Putting the source code into the cloud, wherever that happens to be, may or may not actually be such a risk as you're alluding to, if you have the right controls around it.

The second thing is that we're also seeing a very nascent set of controls and guarantees and SLAs and those sorts of things. This is very early on, in my opinion and in a lot of people's opinion, in the development of this cloud type environment, looking at all these attributes that are given to cloud, the unlimited expansion, the elasticity, and rapid provisioning. Certainly, we can get wrapped around the axle about what is really required in cloud, but it all ultimately comes down to that risk analysis.

If you have the right security in the system, if you have the right capabilities and guarantees, then you have a much higher level of confidence about putting data, such as source code or some sets of data like that, into the cloud.

Gardner: To Christian's point that the publicly available cloud providers are basically saying buyer beware, or in this case cloud-practitioner beware, the onus to do good privacy, security, compliance, and best practices falls back on the consumer, rather than the provider.

Community clouds

Reed: That's often the case. But also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy cloud computing infrastructure, and we created RACE, the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources to a community of users in a secure manner and they store all sorts of things in that. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. It's just that when we think about things like the visceral reaction that the cloud is insecure, it's not necessarily correct. It's insecure for certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go: evolving into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. We're already seeing that in these examples.

Gardner: So, for that cloud practitioner, as an organization, if they take those steps towards good cloud computing practices and technologies, it’s probably going to benefit them across the board in their IT infrastructure, applications, and data activities. But does it put them at a competitive advantage?

If you do this right, if you take the responsibility yourself to figure out the risks and rewards and implement the right approach, what does that get for you? Christian, what’s your response to that?

Verstraete: It gives you the capability to use the elements that the cloud really brings with it, which means having an environment in which you can execute a number of tasks on a pay-per-use basis.

But, to come back to the point that Archie was making, one of the things that we often have a tendency to forget -- and I'm as guilty as anybody else in that space -- is that cloud means a tremendous amount of different things. What's important for customers who want to move and want to put data in the cloud is to identify what all of those different types of clouds provide as security and protection capabilities.

The more you move away from the traditional public cloud (and when I say the traditional public cloud, I'm thinking about Amazon, Google, Microsoft, that type of thing) toward community clouds and private clouds, the more you keep things under your own control, ensuring that you have the appropriate security layers and levels and the appropriate compliance levels that you feel you need for the information you're going to use, store, and share in those different environments.

Gardner: Okay, Archie, we're about out of time, so the last question is to you, and it's going to be the same question. If you do this well, if you do it right, if you take the responsibility, perhaps partner with others in a community cloud, what do you get? What's the payoff? Why would that be a competitive advantage, a cost advantage, and an energy advantage?

Beating the competition

Reed: We’ve been through a lot of those advantages. I’ve mentioned several times the elasticity, the speed of provisioning, the capacity. While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you're in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But you also get faster access to all that capability, and you get capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation, and by that I mean that you, as an individual organization, are going out and looking for those cloud resources, then you're going to get the ability to expand well beyond what your internal IT department could provide.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, has the opportunity not only to deliver and better manage the services it provides for the organization, but also the responsibility to do this right: to understand the security implications and represent them appropriately to the company, so that it can deliver that accelerated capability.

Gardner: Very good. We’ve been discussing how to manage risks and rewards and proper placement of enterprise data in cloud-computing environments. I want to thank our two panelists today, Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Thank you, Christian.

Verstraete: You’re welcome.

Gardner: And also, Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's working on a new book, The Concise Guide to Cloud Computing. Thank you, Archie.

Reed: Hey, Dana. Thanks for taking the time to talk to us today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining us, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: