Monday, July 09, 2012

The Open Group and MIT Experts Detail New Advances in Identity Management to Help Reduce Cyber Risk

Transcript of a BriefingsDirect podcast in conjunction with the upcoming Open Group Conference on the current state and future outlook for identity management.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with The Open Group Conference this July in Washington, D.C. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout these discussions.

The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains. Today, we're here to focus on cyber security and the burgeoning role that identity (ID) management plays in better securing digital assets and systems.

We'll examine the relationship between controlled digital identities and cyber risk management, and explore how the technical and legal support of ID management best practices has been advancing rapidly. We'll also see how individuals and organizations can better protect themselves through better understanding and management of their online identities.

Joining us now to delve into this fast-evolving area are a few of the main speakers at the July 16 conference. We are here with Jim Hietala, the Vice President of Security at The Open Group. Welcome, Jim. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Jim Hietala: Thanks, Dana. Good to be with you.

Gardner: We are also here with Thomas Hardjono, Technical Lead and Executive Director of the MIT Kerberos Consortium. Welcome, Thomas.

Thomas Hardjono: Hello, Dana.

Gardner: And we are joined by Dazza Greenwood, President of the CIVICS.com consultancy, and lecturer at the MIT Media Lab. Welcome, Dazza.

Dazza Greenwood: Hi. Good to be here.

Gardner: Jim, first question to you. Let’s describe the lay of the land for our listeners. What is ID management generally and how does it form a fundamental component of cyber security?

Hietala: ID management is really the process of identifying folks who are logging onto computing services, assessing their identity, authenticating them, and authorizing them to access various services within a system. It's something that's been around in IT since the dawn of computing, and it's something that keeps evolving in terms of new requirements and new issues for the industry to solve.

Particularly as we look at the emergence of cloud and software-as-a-service (SaaS) services, you have new issues for users in terms of identity, because we all have to create multiple identities for every service we access.

You have issues for the providers of cloud and SaaS services, in terms of how they provision, where they get authoritative identity information for the users, and even for enterprises who have to look at federating identity across networks of partners. There are a lot of challenges there for them as well.
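As a purely illustrative aside, the three stages Hietala names -- identification, authentication, and authorization -- can be sketched in a few lines of Python. Everything here (the user store, the salt, the permission names) is hypothetical, not any particular product's API:

```python
import hashlib
import hmac

# Hypothetical user store: username -> (salted password hash, granted permissions).
SALT = b"per-user-salt-in-practice"
USERS = {
    "alice": (hashlib.sha256(SALT + b"correct-horse").hexdigest(), {"reports:read"}),
}

def authenticate(username: str, password: str) -> bool:
    """Authentication: verify the claimed identity against stored credentials."""
    record = USERS.get(username)  # identification: look up the claimed ID
    if record is None:
        return False
    expected_hash, _ = record
    supplied_hash = hashlib.sha256(SALT + password.encode()).hexdigest()
    return hmac.compare_digest(expected_hash, supplied_hash)  # constant-time compare

def authorize(username: str, permission: str) -> bool:
    """Authorization: check what the authenticated identity may access."""
    record = USERS.get(username)
    return record is not None and permission in record[1]

if authenticate("alice", "correct-horse") and authorize("alice", "reports:read"):
    print("access granted")
```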

Gardner: Is it fair to say, Jim, that as we expand the boundaries of process and commerce beyond the four walls of the enterprise, that this becomes even more urgent, more of an issue?

Key theme

Hietala: I do think it’s fair to say that. Figuring out who is at the other end of that connection is fundamental to all of cyber security. As we look at the conference that we're putting on this month in Washington, D.C., a key theme is cyber security -- and identity is a fundamental piece of that. So, yeah, I think that’s a fair characterization.

Gardner: Let’s go to you, Thomas. How have you been viewing this in terms of an evolution? Are we at a plateau that we're now starting to advance from? Has this been a continuous progression over the past decade? How has ID management been an active topic?

Hardjono: It's been at least a decade since the industry began addressing identity and identity federation. Some in the audience might recall the Liberty Alliance, or Project Liberty in its early days.

One notable thing about the industry is that the efforts have been somewhat piecemeal, and the industry as a whole is now reaching the point where a true, correct identity is absolutely needed in transactions, at a time of so many so-called Internet scams.

The number of attacks has increased, ranging from state-sponsored cyber terrorists all the way to so-called Nigerian scammers. This brings to the forefront the fact that we need two things right now, yesterday even: identity federation and a scalable authorization mechanism that's linked to this strong identity.

Gardner: Dazza, is there a casual approach to this, or a professional need? By that, I mean that we see a lot of social media activities, Facebook for example, where people can have an identity and may or may not be verified. That’s sort of the casual side, but it sounds like what we’re really talking about is more for professional business or eCommerce transactions, where verification is important. In other words, is there a division between these two areas that we should consider before we get into it more deeply?

Greenwood: Rather than thinking of it as a division, a spectrum would be a more useful way to look at it. On one side, you have, as you mentioned, a very casual use of identity online, where it may be self-asserted. It may be that you've signed a posting or an email.

On the other side, of course, the Internet and other online services are being used to conduct very high-value, highly sensitive, or mission-critical interactions and transactions all the time. When you get toward that end of the spectrum, a lot more information is needed about the identity: authenticating that it really is that person, as Thomas was starting to foreshadow. The authorization, workflow permissions, and accesses are also incredibly important.

In the middle, you have a lot of gradations, based partly on the sensitivity of what's happening, and partly on culture and context as well. When you have people operating within organizations or contexts that are well-known and well-understood -- or where there is already a lot of not just technical, but business, legal, and cultural understanding of what happens if something goes wrong -- there are the right kinds of supports and risk-management processes.

There are different ways that this can play out. It’s not always just a matter of higher security. It’s really higher confidence, and more trust based on a variety of factors. But the way you phrased it is a good way to enter this topic, which is, we have a spectrum of identity that occurs online, and much of it is more than sufficient for the very casual or some of the social activities that are happening.

Higher risk

But as the economy and our society move into a digital age, ever more fully and at ever-higher speeds, much more important, higher-risk, higher-value interactions are occurring. So we have to revisit how we have been addressing identity -- and give it more attention, more careful design, and architectures and rules around it. Then we'll be able to make that transition more gracefully and with less collateral damage, and really get to the benefits of going online.

Gardner: Jim Hietala, before we go into what’s been happening in the field around ID management, I just wanted to get a better sense of the urgency here. We hear quite a bit about consumerization of IT trends in the enterprise, driven in many respects by mobile use. But it seems to me that there is a need here to move rapidly away from de facto single sign-on through some of the social networks and get more of a mission-critical approach to this.

Do you agree that people have been falling into a consumer’s sense of security for single sign-on, but that it really needs to be better, and therefore we need to ramp up the urgency around it?

Hietala: I do agree with that. It's not just mobile. You can look at things that are happening right now in terms of trojans, bank fraud, scammers, and attackers wire-transferring money out of companies' bank accounts, and other things you can point to.

There are failures in client security and in the customers' security mechanisms on client devices, but I think there are also identity failures. Financial institutions need to adopt new approaches to prevent some of those sorts of things from happening. I don't know if I'd use the word "rampant," but they're clearly happening all over the place right now. So there is a high need to move quickly on some of these issues.

Gardner: I sense that the legacy or historical approach was piecemeal and somewhat slow to react to the marketplace. Then there's this other side, where the social mechanisms have crept in, and in the middle is a big hole you could drive a truck through.

So let's talk about what's going to be happening to shore this up and pull it together. Let's look at some of the big news. What are some of the large milestones? We'll start with you, Jim, for ID management leading up to the present.

Hietala: Well, I think the biggest recent news is the US National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative. We'll probably talk about that as we go through this discussion, but it clearly shows that a large government, the United States government, is focused on the issue and is willing to devote resources to furthering an ID management ecosystem and construct for the future.

To me, that's the biggest recent news. You can look on the threat and attack side and see all sorts of instances -- even the LinkedIn attack from the last week or so -- demonstrating that identity and the loss of identity information is a big deal. You don't have to look far in the news headlines these days to see identity taking front and center as a big issue that needs to be addressed.

Gardner: Let's go to you, Dazza. What do you see as the big news or milestones of the day? Then, maybe a secondary question on what Jim just mentioned -- that it's not just about protecting your ID, but that the bad guys are often trying to take IDs away from others?

At a crossroads

Greenwood: I think that's right. Where we are just now is at a crossroads, where finally industry, government, and increasingly the population in general are understanding that there is a different playing field. In the way that we interact, the way we work, the way we do healthcare, the way we do education, the way our social groups cohere and communicate, big parts are happening online.

In some cases, it happens online through the entire lifecycle. What that means now is that a deeper approach is needed. Jim mentioned NSTIC as one example. There are a number of others to touch on that are occurring because of this profound transition that requires a deeper treatment.

NSTIC is the US government’s roadmap to go from its piecemeal approach to a coherent architecture and infrastructure for identity within the United States. It could provide a great model for other countries as well.

People can reuse their identity, and we can start to address what you're talking about with other people taking your ID -- and, more to the point, how to prove you are who you said you were to get that ID back. That's not always so easy after identity theft, because we don't have an underlying effective identity structure in the United States yet.

I just came back from a World Economic Forum meeting in the United Kingdom. I was very impressed by what their cabinet officers are doing with an identity-assurance scheme in large-scale procurement. It's very consistent with the NSTIC approach in the United States. They can get tens of millions of their citizens using secure, well-authenticated identities across a number of transactions, while always keeping privacy, security, and individual autonomy at the forefront.

There are a number of technology and business milestones occurring as well. The Open Identity Exchange (OIX) is a great group that's beginning to bring industry and other sectors together to look at their approaches and technology. We've had the Security Assertion Markup Language (SAML) -- Thomas is co-chair of the technical committee -- and that's getting a facelift.

That approach is being brought to mass scale with OpenID Connect, which builds on OpenID and OAuth. There are a great number of technology innovations coming online.

Legally, there are also some very interesting, newsworthy harbingers. Some of it is really just a deeper usage of statutes that were passed a few years ago -- the Uniform Electronic Transactions Act and the Electronic Signatures in Global and National Commerce Act, among others, in the US.

There is the eSignature Directive and others in Europe and the rest of the world that have enabled interactions online and dealt with identity and signatures, but have left it to the private sector and to culture to determine which technologies, approaches, and solutions we'll use.

Now, we're not only getting one-off solutions, but architectures for a number of different solutions, so that whole sectors of the economy and segments of society can more fully go online. Practically everywhere you look, you see news and signs of this transition that’s occurring, an exciting time for people interested in identity.

Gardner: Before we define a few of these approaches, Thomas, a similar question to you, but through a technical lens. What’s most new and interesting from your perspective on what’s being brought to bear on this problem, particularly from a technology perspective?

Two dimensions

Hardjono: It's along two dimensions. The first one is within the Kerberos Consortium. We have a number of people coming from the financial industry. They all have the same desire, and that is to scale their services to the global market -- basically, to sign up new customers abroad, outside the United States. In wanting to do so, they're facing a question of identity: how do we assert that somebody in a country is truly who they say they are?

That introduces a number of difficult technical problems. The second dimension, closer to home and maybe at a smaller scale, is user consent. The OpenID and OpenID Connect specifications have been completed, and people can do single sign-on using technology such as OAuth 2.0.

The next big thing is how an attribute provider -- banks, telcos, and so on, who have data about me -- can share that data with other partners in the industry, and across sectors of the industry, with my express consent, in a digital manner.
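A minimal sketch of the consent pattern Hardjono describes, in Python: an attribute provider releases only the attributes covered by a consent the user has explicitly granted. The consent record, store, and partner names are invented for illustration; a real deployment would express this with OAuth 2.0 tokens and scopes.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    """A user's digitally expressed consent: which partner may see which attributes."""
    user: str
    partner: str
    attributes: set = field(default_factory=set)

# Hypothetical attribute provider (a bank or telco) holding data about "alice".
ATTRIBUTE_STORE = {"alice": {"age_over_21": True, "postal_code": "02139", "income": 85000}}
CONSENTS = [Consent(user="alice", partner="retailer.example", attributes={"age_over_21"})]

def release_attributes(user: str, partner: str) -> dict:
    """Release only the attributes the user has consented to share with this partner."""
    granted = set()
    for consent in CONSENTS:
        if consent.user == user and consent.partner == partner:
            granted |= consent.attributes
    return {k: v for k, v in ATTRIBUTE_STORE.get(user, {}).items() if k in granted}

print(release_attributes("alice", "retailer.example"))  # {'age_over_21': True}
print(release_attributes("alice", "unknown.example"))   # {} -- no consent on file
```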

Gardner: Let's drill down a little bit. Dazza, tell us a bit about the MIT core ID approach and how this relates to the Jericho Forum approach. I suppose you'd have to do a quick explanation of what Jericho is in the process of explaining that.

Greenwood: I would defer to Jim of The Open Group to speak more authoritatively on the Jericho Forum, which is a part of The Open Group. But, in general, the Jericho Forum is a group of security experts from industry and beyond who have done some great work in the past on de-perimeterized security and other foundational work.

In the last few years, they've been really focused on identity, coming to realize that identity is at the center of what one would have to solve in order to have a workable approach to security. It's necessary, but not sufficient, for security. We have to get that right.

To their credit, they've come up with a remarkably good list of simple, understandable principles that they call the Jericho Forum Identity Commandments, which I strongly commend to everybody to read.

It puts forward a vision of an approach to identity that is very consistent with an approach I've been exploring at MIT for some years. A person would have a core identity, a core ID, and from that could create more than one persona. You may have a work persona, an eCommerce persona, maybe a social-networking persona, and so on. Some people may want a separate political persona.

You could cluster all of the accounts, interactions, services, attributes, and so forth directly related to each of those individual personas, and not be in the situation we're almost blindly backing into right now, where, with a lot of the solutions in the market, different aspects of your life, unintentionally or even counter-intentionally, will merge.

Good architecture

Sometimes, that's okay. Sometimes, in fact, we need there to be an inability to link different parts of life together. That's part of privacy and can be part of security. It's also just part of autonomy. It's good architecture. So the Jericho Forum has the commandments.

Many years ago at MIT, we had a project called the Identity Embassy here in the Media Lab, where we put forward some simple prototypes and ideas for ways you could do that. Now, with all the recent activity we mentioned earlier toward full-scale usage of architectures for identity in the US with NSTIC and around the world, we're taking a stronger, deeper run at this problem.

Thomas and I have been collaborating across different parts of MIT, putting out what we think is a very exciting and workable way that you can, in a high-security manner, but also quite usably, have these core identifiers for individuals and inextricably link them to personas, while escaping any link back to the core ID from across the different personas -- so that you can get the benefits when you want them, while keeping the personas separate.

It also allows for many flexible business models and other personalization and privacy services, but we can get into that more in the fullness of time. In general, that's what's happening right now, and we couldn't be more excited about it.
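The speakers don't specify the cryptography, but one way to get personas that are bound to a core ID without being linkable back to it by outsiders is a keyed one-way derivation. The following sketch assumes HMAC for that purpose; it is an illustration of the idea, not the MIT or Jericho Forum design.

```python
import hashlib
import hmac

def derive_persona_id(core_id: bytes, persona_label: str, secret_key: bytes) -> str:
    """Derive a persona identifier from the core ID with a keyed one-way function.

    Without secret_key, the persona identifiers cannot be linked to each other
    or back to the core ID; the key holder can always prove the linkage."""
    return hmac.new(secret_key, core_id + persona_label.encode(), hashlib.sha256).hexdigest()

# Both values would live inside the cryptographically hardened module discussed below.
core_id = b"core-id-held-in-hardware-module"
key = b"key-kept-inside-the-same-module"

work = derive_persona_id(core_id, "work", key)
shopping = derive_persona_id(core_id, "ecommerce", key)
print(work != shopping)  # True: distinct, mutually unlinkable persona identifiers
```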

Gardner: Of course, you'll be discussing this in greater detail at The Open Group Conference coming up on July 16, so we look forward to that. When it comes to this notion of a core ID, where might that be implemented and instantiated? Where would I keep my core ID, so that I could develop these other personas, have a form of federation as a result, but managed through my own core? Where would that core reside?

Greenwood: I'll say a couple of words on that, and I think Thomas has a few words as well. The Jericho Forum is pretty definite that they favor having the individual human being hold a hardware device of some kind -- a cryptographically hardened module within which the data that comprises the core identifier would reside.

Some wrapping data that Thomas and I are putting forward in the proposed architecture would also reside on it, and it would be literally owned by, under the control of, and in the pocket of the person, so they can treat it almost like their wallet. It may become part of the future wallet, or what we'll come to think of as a wallet, with digital wallet services on phones and other devices people have with them.

So that's the high-level dimension, a very basic answer to where the data would reside. It's important to recognize that people are not computer scientists and hardware manufacturers, and don't run data centers in their basements. There is always a critical role for service providers to make this easy for people, so there would be simple products and simple services that people can use for the issuance and management of each of the layers of their identity.

Part of what we've done is come up with an architecture with the right types of institutions. Mixes of governments and other highly trusted institutions that for hundreds of years or more have already been the authoritative sources for identity, as opposed to new startups, would have their appropriate role. Then there are layers of service providers that provide personalization, eCommerce, and other services -- whatever their appropriate roles are within the ecosystem we're looking to support and enable with the architecture we're putting out. Thomas may have more on that.

Hardjono: I agree with Dazza. For a global infrastructure for core identities to develop, we definitely need collaboration between the governments of the world and the private sector. Looking at this problem, we searched back in history for an analogy, and the best one we could find was the rollout of the DNS infrastructure and IP address assignment.


Here today

It's not perfect and it has its critics, but the idea that you could split up blocks of IP addresses and have them sold and resold by private industry has really allowed the Internet to scale. It's hitting limitations, but of course IPv6 is on the horizon. It's here today.

So we were thinking along the same philosophy, where core identifiers could be arranged in blocks and handed out to the private sector, so that they can assign, sell, or manage them on behalf of people who are Internet savvy -- and those who perhaps are not, such as my mom. So we have a number of challenges in that phase.
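Hardjono's analogy suggests a registry delegating blocks of core identifiers to providers, much as IP address blocks are delegated to regional registries and ISPs. A toy sketch of that delegation, with entirely hypothetical names:

```python
class CoreIdRegistry:
    """Toy registry that delegates contiguous blocks of core identifiers to
    providers, loosely analogous to delegating IP address blocks to ISPs."""

    def __init__(self, block_size: int = 1_000_000):
        self.block_size = block_size
        self.next_start = 0
        self.delegations = {}  # provider name -> (start, end)

    def delegate_block(self, provider: str) -> range:
        start = self.next_start
        self.next_start += self.block_size
        self.delegations[provider] = (start, self.next_start)
        return range(start, self.next_start)

registry = CoreIdRegistry()
block = registry.delegate_block("trusted-provider.example")
# The provider now assigns and manages IDs from its block on behalf of end users.
print(block[0], registry.delegations)
```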

Gardner: Very interesting. Does this relate to the MIT Model Trust Framework System Rules project? If so, please explain how, and how this notion of a directory -- either private, public, or some combination -- would help move this core ID concept forward.

Greenwood: The Model Trust Framework System Rules project that we're pursuing at MIT is a very important aspect of what we're talking about. Thomas and I have talked somewhat about the technical and practical aspects of core identifiers and core identities. There is a very important business and legal layer within there as well.

So these trust framework system rules are ways to begin to approach the complete interconnected set of dimensions necessary to roll out these kinds of schemes at the legal, business, and technical layers.

They come from very successful examples in the past, where organizations have federated identity with more traditional approaches such as SAML. There are examples of those trust framework system rules at the business, legal, and technical levels available.

Right now it's at CIVICS.com and soon, when we have our MIT model out under a Creative Commons approach, we'll take a lot of the best of what's come before, codified in a rational way. Business, legal, and technical rules can really be aligned in a more granular way to fit well, and we'll put out a model that we think will be very helpful for the identity solutions of today that are looking at federating according to NSTIC and similar models. It absolutely would be applicable to how the core identity and persona architecture and underlying infrastructure that Thomas, I, and the Jericho Forum are postulating could occur.

Gardner: Thomas, anything to add to what Dazza just said?

Hardjono: No. Looking back 10-15 years, we engineers came up with all sorts of solutions and standardized them. What's really missing is the business models, business cases, and of course the legal side.

How can a business make revenue out of the management of identity-related aspects, the management of attributes, and so on? And how can they do so in a manner that doesn't violate the user's privacy, but is still user-centric in the sense that the user needs to give consent, can withdraw consent, and so on? We're trying to develop an infrastructure where everybody is protected.

Gardner: So it sounds as if you are proposing a chartered or regulated industry, perhaps modeled somewhat on ICANN and the way that DNS has been managed to be the facilitator of these core IDs and then further into federation. Is that fair?

Almost an afterthought

Hardjono: It's only an analogy. Unfortunately, if you look at history, people say that ICANN is an organization that was slapped together quickly, because the Internet was growing so fast. It was almost an afterthought for how to regulate the management of IP addresses.

I am hoping that this time around, for identity, we have a more planned and thought-out process that would allow an infrastructure to remain for the next 50 years or 100 years and scale for the needs of technology 50 years from now and 100 years from now.

Greenwood: I'll just pick up on that a little bit. What you described there would be like a regulated industry. Perhaps one day, but that's not today, and that's not tomorrow. What we have today is just reality as it exists, so what we're coming up with is something that works on a few levels. One of them is a vision in line with the Jericho Forum's vision. It's a future-state vision, and a very good vision to work toward to help organize our thinking and to get out for discussion, dialogue, amendment, and consensus.

Meanwhile, with this trust framework system rules approach and some of the skunkworks projects out of MIT that we'll be able to share at The Open Group Conference in D.C., we're showing in a stepwise way how we can get there from here -- what constructive things we can do today that are in alignment with this vision.

The system rules at the business, legal, and technical levels in this model trust framework system rules approach are great because they're very flexible. There are lots of examples in payment systems, supply chains, identity federations, and other places where these multilateral contractual approaches allow multiple stakeholders to get together right now to define their liability, choose the technologies, establish the business processes, and so forth, and get rolling.

So we're attempting to offer something that can work today. One day, perhaps, there may be an industry or industries that are regulated, without really presuming how exactly that will come out. Those are decisions, as Thomas said, that are best made by a number of different parties over time, because they're really infrastructural.

Gardner: Jim Hietala, The Open Group is a global organization focused on the collaborative process behind the establishment of standards. It sounds like these are some important aspects that you can bring out to your audience and start to create the collaboration and discussion that could lead to fuller implementation. Is that the plan, and is that what we're expecting to hear more of at the conference next month?

Hietala: It is a plan, and we do get a good mix at our conferences and events of folks from all over the world, from government organizations and large enterprises as well. So it tends to be a good mixing of thoughts and ideas from around the globe on whatever topic we're talking about -- in this case identity and cyber security.

At the Washington conference, we have a mix of discussions. The kick-off speaker is a fellow by the name of Joel Brenner, who has written a book, America the Vulnerable, which I would recommend. He was inside the National Security Agency (NSA), and he's been involved in fighting a lot of the cyber attacks. He has really good insight into what's actually happening on the threat side and on defending against the threats. So that will be a very interesting discussion. [Read an interview with Joel Brenner.]

Then, on Monday, we have conference presentations in the afternoon looking at cyber security and identity, including Thomas and Dazza presenting on some of the projects that they’ve mentioned.

Cartoon videos

Then we're also bringing to that event, for the first time, a series of cartoon videos that were produced for the Jericho Forum. They describe a lot of the commandments that Dazza mentioned in a more approachable way, so they're hopefully understandable to laymen and to folks without as much understanding of all the identity mechanisms that are out there. So, yeah, that's what we're hoping to do.

Gardner: Do you sense that what MIT has been working on, and what Dazza and Thomas have been describing, are some important foundational blocks for where you see this going? Are they filling a need that you can bring to bear on the discussions and some of the standardization work at The Open Group?

Hietala: Absolutely. They fill a void in the market in terms of organizations that are willing to do that sort of work. The Jericho Forum tends to do forward-looking, thought-leadership kinds of work, looking at problems at the highest level and providing some guidance. Doing model trust frameworks and those sorts of things is that next layer of detail down that’s really critical to the industry. So we encourage it and are happy it's happening.

Gardner: We're coming up on our time limit, but I did want to dive a little deeper into NSTIC. We mentioned it earlier as an important aspect. Now that we've talked a bit more about what's going on with core ID concepts and trust framework activities, perhaps we could better explain what NSTIC is and does, in the context of what we've already covered. Who would like to take a stab at that?

Greenwood: The best person to speak about NSTIC in the United States right now is probably President Barack Obama, because he is the person who signed the policy. Our president and the administration have taken a needed and, I think, very well-conceived approach to getting industry involved with other stakeholders in creating the architecture that's going to be needed for identity for the United States, as a model for the world, and also for how to interact with other models.

Jeremy Grant is in charge of the program office, and he is very accessible. So if people want more information, they can find Jeremy online easily at nist.gov/nstic. And nstic.us also has more information.

In general, NSTIC is a strategy document and a roadmap for how a national ecosystem can emerge, comprising a governing body. They're beginning to put that together this very summer, with 13 different stakeholder groups, each of which will self-organize and elect or appoint a representative -- industry, government, state and local government, academia, privacy groups, individuals (which is terrific), and so forth.

That governance group will come up with more of the details in terms of what the accreditation and trust marks look like, and the types of technologies and approaches that will be favored according to the general principles -- which I hope everyone reads -- within the NSTIC document.

At a lower level, Congress has appropriated more than $10 million to work with the White House on a number of pilots, which will be under a million and a half dollars each for a year or two, where individual proofs of concept, technologies, or approaches to trust frameworks will be piloted and put out where they can be used in the market.

In general, by this time two months from now, we'll know a lot more about the governing body, once it's been convened, and about the pilots, once those contracts have been awarded and grants have been concluded. What we can say right now is that the way it's going to come together is with trust framework system rules -- the same exact type of entity that we're doing a model of -- to help facilitate people's understanding, with templates and well-thought-through structures that they can pull down and, in turn, use as a starting point.

Circle of trust

So industry-by-industry, sector-by-sector, but also what we call circle of trust by circle of trust, folks will come up with their own specific rules to define exactly how they will meet these requirements. They can get a trust mark, be interoperable with other trust-framework-consistent rules, and eventually you'll get a clustering of those, which will lead to an ecosystem.

The ecosystem is not one-size-fits-all. It's a lot of systems that interoperate in a healthy way and can adapt and evolve over time. A lot more, as I said, is available on nstic.us and nist.gov/nstic, and these are exciting times. It's certainly the best government document I have ever read. I'm very excited to see how it comes out.

Gardner: A good read for the summer, no doubt. Before we close out, let's affirm for our audience how important this is. Clearly, we are at a crossroads, as you mentioned, Dazza. It seems to me that the steam, the pressure, for a better means of ID management is building rapidly from things like the use of multiple mobile devices, location-based commerce, and the fact that more of our personal business and economic lives are moving to the cyber realm.

Being able to continue to gain productivity from that really falls back to this issue about maintaining a core and verifiable identity, and being able to use that effectively in more-and-more types of activities.

Do you agree? What would be some of the future trends that are going to drive even more demand to solve this problem? Let’s start with you, Jim, and go through our panel. What’s coming down the pike that’s going to make this yet more important?

Hietala: I would turn to the threat and attacks side of the discussion and say that, unfortunately, we're likely to see more headlines of organizations being breached, of identities being lost, stolen, and compromised. I think it’s going to be more bad news that's going to drive this discussion forward. That’s my take based on working in the industry and where it’s at right now.

Gardner: Thomas, same question.

Hardjono: I mentioned user consent going forward. I think this is increasingly becoming an important, if small, step to address and resolve in the industry, through efforts like the User-Managed Access (UMA) working group within the Kantara Initiative.

Folks there are trying to solve the problem of how to share resources. How can I legitimately share not only my photos and data on Flickr, but also allow my bank to share some of my attributes with partners of the bank, with my consent? It's a small step, but it's a pretty important step.

Gardner: Dazza, what future events or trends are going to drive this more rapidly to the public consciousness and perhaps even spur the movement towards some resolution?

Greenwood: I completely agree with Thomas, keep your eyes on UMA out of Kantara. Keep looking at OASIS, as well, and the work that’s coming with SAML and some of the Model Trust Framework System Rules.

Most important thing

In my mind the most strategically important thing that will happen is OpenID Connect. They're just finalizing the standard now, and there are some reference implementations. I'm very excited to work with MIT, with our friends and partners at MITRE Corporation and elsewhere.

That's going to allow masses of individuals to have more ready access to identities that they can reuse in a great number of places. Right now, it's a bit catch-as-catch-can. You've got your Google ID or Facebook ID, and a few others. It's not something that a lot of industries or others are really quite willing to accept or understand yet.

They've done a complete rethink of that, and use the best lessons learned from SAML and a bunch of other federated technology approaches. I believe this one is going to change how identity is done and what’s possible.

They've done such a great job on it, I might add. It fits hand in glove with the types of Model Trust Framework System Rules approaches, with a layer of UMA on top, and it's completely consistent with the architecture, with a future infrastructure where people would have a core ID and more than one persona, which could be expressed as OpenID Connect credentials that are reusable by design across great numbers of relying parties -- getting us where we want to be with single sign-on.
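The reusability Greenwood describes rests on any relying party being able to validate an ID token from a conforming OpenID Connect provider. A minimal sketch using the PyJWT library; the issuer URL, key, and client ID are placeholders:

```python
import jwt  # PyJWT

def validate_id_token(id_token: str, provider_public_key: str, client_id: str) -> dict:
    """Validate an OpenID Connect ID token the way any relying party would:
    check the signature, the issuer, the audience, and the expiry."""
    claims = jwt.decode(
        id_token,
        key=provider_public_key,          # fetched from the provider's JWKS endpoint
        algorithms=["RS256"],
        audience=client_id,               # must match this relying party's client ID
        issuer="https://op.example.com",  # the OpenID Provider this party trusts
    )
    return claims  # e.g. {"sub": ..., "iss": ..., "aud": ..., "exp": ...}
```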

So it's exciting times. If there's one thing you have to look at, I'd say do a Google search and get updates on OpenID Connect, and watch how that evolves.

Gardner: Very good. We've been talking about cyber security and the burgeoning role that identity management plays in the overall securing of assets and systems. We've learned quite a bit about how individuals and organizations can begin to better protect themselves through better understanding and managing of their online identities.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference from July 16 to 20 in Washington, D.C. You’ll hear more from these and other experts on the ways that IT and enterprise architecture support enterprise transformation.

I’d like to thank our panel for this fascinating discussion, Jim Hietala, the Vice President of Security at The Open Group. Thank you, Jim.

Hietala: Thank you, Dana.

Gardner: We are also here with Thomas Hardjono, Technical Lead and Executive Director of the MIT Kerberos Consortium. Thank you so much, Thomas.

Hardjono: Thank you, Dana.

Gardner: And also Dazza Greenwood, President of the CIVICS.com consultancy and a lecturer at the MIT Media Lab. Thanks very much, Dazza.

Greenwood: Thanks. It's been a pleasure.

Gardner: I look forward to your presentations in Washington, and I encourage our readers and listeners to look into this conference -- register if you can -- and to learn more about what's going to be happening there. Some of the activities will be streamed live for you to consume regardless of where you are.

Thank you all too, the audience, for listening. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Don’t forget to come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

Transcript of a BriefingsDirect podcast in conjunction with the upcoming Open Group Conference on the current state and future outlook for identity management. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Tuesday, July 03, 2012

Roundtable: Revlon, SAP and VMware Describe Accretive Benefits from Aggressive Adoption of Cloud Computing

Transcript of a sponsored podcast on how cloud and virtualization deliver benefits in cost, efficiency, and agility.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion focused on two prime examples of organizations that have gleaned huge benefits from high degrees of virtualization and aggressive cloud computing adoption.

We're joined by executives from Revlon and SAP, who recently participated in a VMware-organized media roundtable event in San Francisco. The event, attended by industry analysts and journalists, demonstrated how mission-critical applications supported by advanced virtualization strategies are transforming businesses. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

We're going to learn more about the full implications of IT virtualization, and how they're being realized -- from bringing speed to business requests, to enhancing security, to strategic disaster recovery (DR), and to unprecedented agility in creating and exploiting applications and data delivery value.

With that, please join me now in welcoming our guests, David Giambruno, Senior Vice President and CIO of Revlon. Welcome back, David.

David Giambruno: Thanks a lot, Dana.

Gardner: We're also here with Heinz Roggenkemper, Executive Vice President of Development at SAP Labs. Welcome, Heinz.

Heinz Roggenkemper: Welcome, Dana.

Gardner: Heinz, let me begin with you, if you don’t mind. Describe for our listeners your internal cloud approach that you've been using to make training and development applications readily available. What's going on with that internal cloud, and why is the speed and agility so important for you?

Roggenkemper: If you look at SAP, you find literally thousands of development systems. You find a lot of training systems. You find systems that support pre-sales activities. You find systems that support our consulting organization in developing customer solutions.

From a developer's perspective, the first order of business is to get access to a system fast. Developers, by themselves, don’t care that much about cost. They want the system and they want it now. For development managers and management in general, it’s a different story.

For training, it's important that the systems are reliable and available. Of course again for management, it's the cost perspective. For people in custom development, they need the right system quickly to build up the correct environment for the particular project that they're working on.

Better supported

Also, these requirements are much better supported in the virtualized environment than they were before. We can give them the system quickly. We can give them the systems reliably. We can give them the systems with good performance and, from a corporate perspective, do it at a much better cost than we did before.

Our business agility and ability to respond to market drivers is greatly improved by this.

Gardner: One of the things that was intriguing to me was the training instance, where people were coming in and needed a full stack of SAP applications, perhaps third-party applications that were mission critical. Tell me how the training application in particular, or the use of virtualization in that instance, demonstrates some of the more productive aspects of cloud?

Roggenkemper: The most interesting part is that you don't need a vanilla system, but a system that is prepared for a particular class, with the correct set of data. You need a system that can be reset to a controlled state very quickly after the end of a training class, so that it's ready for the next one.

So there are two aspects to it. One is the reliable infrastructure on which the systems run, and the second is getting the correct system for that particular class ready in a short period of time.
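That reset-between-classes requirement maps naturally onto virtualization snapshots: prepare a "golden" image with the class data loaded, then revert to it after each session. The sketch below uses a deliberately hypothetical Python client API; real hypervisor SDKs expose equivalent power and snapshot-revert operations.

```python
def reset_training_system(vm_client, vm_name: str, golden_snapshot: str) -> None:
    """Return a training VM to its prepared, class-ready state.

    vm_client is a hypothetical SDK handle; real hypervisor SDKs expose
    equivalent power and snapshot-revert calls."""
    vm = vm_client.find_vm(vm_name)
    vm.power_off()                           # quiesce the system after the class
    vm.revert_to_snapshot(golden_snapshot)   # discard all changes made in class
    vm.power_on()                            # ready for the next class in minutes

# Usage: one golden snapshot per course, taken once the class data is loaded.
# reset_training_system(client, "training-erp-01", "golden-fin-course")
```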

Gardner: On the issues of control of the data, security, and even licensing, are there unintended consequences or unintended benefits that come when you approach the delivery of these applications through the full virtualization and this cloud model?

Roggenkemper: For unintended benefits, the thing that comes to my mind is that it allows us to take advantage of new computing infrastructure more quickly. We reduce the use of power, which is always a good thing.

For an unintended downside, the only thing that comes to mind is tuning for performance in development. That is a slightly different thing. In some areas, you do general tuning, where you run a couple of iterations to identify where your hotspots are; if it's a highly critical component, you might have to go to dedicated hardware to get the last few percentage points.

So in that area, you have to behave differently, but it affects only a small window of your total development time. Most of the time, you still take full advantage of a virtualized environment. Once you go into tuning, then you move the system to dedicated hardware and do your job there. If you average it out, you still have a substantial advantage.

Gardner: This idea of agility in producing these applications, with their full data, production-ready -- even in a training and development environment where you're not necessarily facing customers -- proves this concept of IT as a service. Do you see it that way, and if so, is it something that you're going to be bringing to other applications within SAP?

Roggenkemper: Absolutely. And obviously, what we use internally benefits our customers as well. To have these systems available in a much shorter period of time for the customer’s development environment is as important for them as it is for us.

Future plans


Gardner: And a question about future plans. It sounds as if this works for you. Then the virtual desktop infrastructure (VDI) approach of delivering entire client environments with apps, data, and full configuration would be a natural progression. Is that something that you're looking at or perhaps you're already doing?

Roggenkemper: Some things we're already doing. We have a hefty set of terminal servers in our environment as well, which people take full advantage of, especially if they're on the road or working from home.

Gardner: David, let's go to you. I was very interested to hear today your version of IT as a service, really a vision that you painted. Essentially you're saying that advances in pervasive virtualization and cloud methods are transforming how IT operates, and it's giving you the ability, as you said, to say yes when your business leaders come calling. What have you been able to say yes to that exemplifies this shift in IT?

Giambruno: I can equate that to numbers. We've increased our project throughput over the past couple of years by 300. So my job is to say yes. I'm just here to help. I'm a service. Services are supposed to deliver. What this cloud ecosystem has delivered for us is our ability to say yes and get more done faster, better, cheaper.

The correlating effect of that is we have seen not only this massive increase in our ability to deliver projects for the business, because that’s really what business alignment is. I do what they want and I give them some counsel along the way.

The second piece is that we've seen a 70 percent reduction in the time it takes us to deliver applications, because we have all of these applications available to us in the test and development site, which is part of our DR.

So this ability to move massive amounts of information, where everything is just a file, bring it up, and let our development teams at it, has added this whole speed, accuracy, and ability to deliver back to the business.

Gardner: So we understood that SAP is a very big, global provider of business applications for all sorts of companies, and that they have an internal cloud that they're using for some specific training and development activities.

But Revlon is also a global company. For our listeners who might not be familiar, tell us a little bit about your role, the extent to which your applications are being used, and the type of mission-critical activities that you're involved in.

Giambruno: It’s probably easier to quantify it this way. We have 531 applications running on our internal cloud. Our internal cloud makes roughly 15,000 automated application moves a month. Our transaction rate is roughly 14,000 transactions a second. Our data change rate is between 17 and 30 terabytes a week. Over 90 percent of our corporate workload sits on our internal cloud, and it runs most of our footprint globally.

Gardner: We're talking about mission-critical apps here -- ERP, manufacturing, warehousing, business intelligence. Did you start with mission-critical apps or did you end up there? How did you progress?

Trust, but verify


Giambruno: I have a couple of "isms" that I live by. The first one is “Crawl, Walk, Run” and the second one is “Trust, but Verify.” When we started our journey roughly five years ago, we started with "Crawl" -- very much "Crawl" and “Trust – but Verify.” At Revlon, we didn’t spend any more to put this in. We changed how we spent our money.

We were going through a server refresh, and instead of buying all the servers, we bought only roughly 20 percent. With the balance of that money, we bought the VMware licenses. We started putting in our storage area network (SAN) and all the core component pieces, and we took some of our low-hanging-fruit file systems and started moving all of that.

As we did that, we started sharing with the business. We showed them what we were doing and that it still worked. Then we started the "Walk" phase of putting applications on it. We actually ran north of six nines.

System availability went up. Performance went up. And after this "Crawl, Walk, Run" and "Trust, but Verify," it became "Just Keep Going." We accelerated the whole process, and we have these things that we call "fuzzies" -- things that we can do for the business that they weren't expecting. Every couple of months, we would start delivering new capabilities.

One of the big things that we did was that we internalized all our DR. We kept taking external money that we were spending and were able to give it back to the business and essentially invest in ourselves, because at Revlon I'm not going to be a profit center.

For Revlon, the more money R&D has to develop new products for our consumers, and the more marketing has to tell that product story, get it out to our channels, and use the media to talk about our glamorous products, the more that drives growth at Revlon.

What we've done is focus on those things, taking the complexity out but delivering capability to the business, while either avoiding or saving money that the business can now use to grow.

Gardner: So you've been able to say yes when they come and ask you for new services and capabilities. You've been able to keep your costs at or below the previous levels. That’s pretty impressive. Do you credit that to virtualization, to cloud, to the entire modernization? How do you describe it?

Giambruno: To me, it's the interaction of the entire ecosystem. It is a system. Virtualization is a huge part of that. That's where it all started. As you look through the transition, it's really been interesting. I'm going to segue back to the saying-yes piece and what it's allowed us to be.

We have this thing called Oneness. I always talk about being the Southwest [Airlines] of computing, and I live inside a very simple triangle. The triangle has three sides, obviously. One side is our application inventory, another is our infrastructure capabilities, and the third is my skill sets.

Saying yes

If you're inside that space, I can say yes very quickly. What's happened inside that space has helped us contain cost. When we first started, our ratio was one physical to seven virtual. A couple of years later, we're at 1:35. That's roughly a 500 percent increase in capacity without any commensurate cost. I give credit to my team for owning the technology and for wielding it for the benefit of the business and getting the most out of it.

The frame of reference that keeps us grounded is that we make lipstick. It's really about how much money we can save and how well we can wield that technology to deliver value and do more with less. That will enable our company to grow.

We love simplicity, and we have this Southwest computing model of taking a very complex ecosystem and making it simple to use. To a large degree, it's kind of like an iPad, where the business wants to touch it, but they don't care what's going on underneath.

It's our job to deliver that experience and capability back to the business, without them having to think about it. I just want them to ask. We're here to help, and we can figure out a way to deliver it, and keep exercising our technical capabilities to wield the technology to do more.

Gardner: I'm intrigued by this notion of the ecosystem being a whole greater than the sum of the parts. One of the things that you've been able to do, in addition to saying yes and keep your costs in line, is to improve your data and manage your data lifecycle, according to what I heard today.

Tell me about this notion you mentioned of all the data becoming structured. What are some of the upsides on the data side, when it comes to this ecosystem approach?

Giambruno: When you were talking to Heinz, you talked about unintended consequences. One of the things we had was a big gestalt after our cloud was live: we literally had all of our data in one place.

One of the big challenges historically was that we had all these applications geographically dispersed. The ability to touch them, feel them, and get access -- access controls, all of these things -- was monumentally challenging. At Revlon, as we went to the Southwest or Oneness model, we organized our access controls and those little things globally.

So when we had all this data and all these applications sitting in one place, with our ability to look at them and understand them, we started a fairly big effort around our master data model. We're structuring our data on the way in. So when we're trying to query the data, we already know where it is, what it does, and its relationships, instead of trying to mine through unstructured data and make reasoning out of it. It's been this big data structure.

I'd say we "chewed glass." We spent a couple of years chewing glass, structuring all this data, because the change rate is so big, but there's value in information for the business. I joke that, if you've missed it, we're in the information age. So how well we can wield our information and give our leadership team information to act on is a differentiator. The ability to do this big data and master data model work has been what we see as the golden egg going forward -- the thing that can really make a difference to the business.

Gardner: While we’re on this notion of unintended consequences and unintended benefits, does anything along the lines of security or licensing also come to mind?

Self selection


Giambruno: From a licensing perspective, along the journey we called it self-selection. Licensing is important. Everybody has to make money; we live in capitalism. So from a procurement perspective, we always want to make sure we're legal, but at the same time, vendors will self-select, depending on how their licensing model fits the virtualization world. That's our triangle; that's our infrastructure. Through that, we've had to manage relationships, and we've done that.

From a security model, the structuring of all of our infrastructure, putting it in the Southwest model of computing, this Oneness -- getting our data, our access controls, all of that, plus greatly simplified security -- all of that is completely ubiquitous. We even did some crazy things: we restructured the IP addressing of everything in Revlon to make all of our IP blocks contiguous. So when we move things around the world inside our cloud, we move entire blocks of IP addresses.

As you look forward, one of the interesting things that I find is that, as you look at streaming our applications, there is a huge security paradigm shift. Essentially no data will ever leave my data center and sit on a device.

In five years, that would be my goal. I think I can do it in 24 months, but realistically the horizon is more like five years. At that point, I can literally encrypt my data center. Think about PCI and HIPAA and all the controls around that. Encryption is one of those big first checkmarks. If you can do that, you solve a lot of your compliance challenges.

Second, you have this trusted computing model, where I know the person from an access-control standpoint. I know the device. I know what that person is supposed to have access to. I've encrypted my entire data center, so when that person comes in, I can let them have access only to what they're supposed to have, in the context they're supposed to have it, and decrypt it on the way out. They're only viewing it on a device, and no data ever lives on the device.
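Giambruno's encrypt-everything, decrypt-on-the-way-out model can be illustrated with Python's cryptography library: data stays ciphertext at rest, and plaintext is produced only after a check of user, device, and context. The policy shape here is invented for illustration, not Revlon's actual design.

```python
from cryptography.fernet import Fernet

DATACENTER_KEY = Fernet.generate_key()  # in practice, held in an HSM or key service
cipher = Fernet(DATACENTER_KEY)

# Hypothetical access policy: trusted (user, device, permission) combinations.
POLICY = {("alice", "managed-tablet-7", "sales:read")}

def store(record: bytes) -> bytes:
    """Everything at rest in the data center is ciphertext."""
    return cipher.encrypt(record)

def read(ciphertext: bytes, user: str, device: str, permission: str) -> bytes:
    """Decrypt on the way out, only for a known user on a known device in context."""
    if (user, device, permission) not in POLICY:
        raise PermissionError("untrusted user, device, or context")
    return cipher.decrypt(ciphertext)  # only a rendered view reaches the device

blob = store(b"Q3 sales plan")
print(read(blob, "alice", "managed-tablet-7", "sales:read"))
```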

So bring your own device -- I wouldn't care, because there are almost no security concerns at that point. I've encrypted. I know the user. Going one step further, as companies progress, you're going to look at these internal marketplaces that everyone is going to build.

What the iPad has done is make it so I just want to turn it on and click on the app that gives me the information to do my job. I want my workflow, my exception management, the information I need for the day, or to do my planning -- whatever I need to do. But people want that information in context.

Roll the tape forward a couple of years, and we fully expect the capabilities coming out of VMware to take care of that. We plan to adopt that model, and that's what we're pushing for.

Gardner: It’s fascinating hearing you talking about large-scale virtualization and internal cloud. This has allowed you to have a much better grasp over your costs and deliver your apps and services readily, so that you can say yes to your business users.

In addition, you're getting master data management (MDM) benefits. You’re getting a better handle on licensing. You’re seeing great improvements in security now, and perhaps more to come, as you stream apps to a more virtualized client model.

Symbiotic relationship

You also mentioned something when it came to disaster recovery (DR) that piqued my interest. It sounds almost as if there's a symbiotic positive relationship between high levels of virtualization and DR. It almost sounds like DR has become the ability to move entire data centers as assets that are fungible, and that gives you a lot more capability, in addition to being able to recover.

Is that true? Tell me how this DR plays into this larger set of values.

Giambruno: We've actually done this. No one was hurt, but last year our factory in Venezuela burned. It was on a Sunday afternoon, and they had what we call a drib there. If you look at the VMware architecture, they have data center in a box; I always joke that we're years ahead of them on that. We use dribs, strategically placed throughout the world, where we push capacity for our cloud. They largely run dark.

So our drib "phoned home" that it was getting hot, and we were notified that the building was on fire. It took us an hour and 45 minutes, and most of that time was finding one of my global storage guys, who was at the beach. We found Ben and got him to do his part, which was to tell the cloud to move from Venezuela to our disaster site in New Jersey.

So we joke that our DR model is that we just copy everything. We don't even think about tiering or anything. It's this model: sometimes a Casio is just better than a Rolex. Simplicity rules, and not having to think about it ensures that we have all the data available. Again, it goes back to our cloud and virtualization. Everything is just a file. We just copy the deltas all the time. We never stop.

For us, it was available in less than 15 minutes. We went in, we broke the synchronization, and we made sure everything was up to date. We told our F5s and our Infoblox appliances that Venezuela was now New Jersey. Everything swung, we got everything in, and we contacted the business units to test and verify everything.
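
The sequence described here -- break replication, repoint traffic, have the business verify -- amounts to a short runbook. Below is a hypothetical sketch of that sequence; the helper functions stand in for whatever replication, load-balancer (F5), and DNS (Infoblox) tooling a site actually uses, and are not real vendor APIs.

    # A hypothetical DR runbook, modeled loosely on the sequence above.
    def break_replication(source: str, target: str) -> None:
        print(f"Promoting {target}: stop applying deltas from {source}")

    def repoint_traffic(old_site: str, new_site: str) -> None:
        print(f"Telling load balancers and DNS that {old_site} is now {new_site}")

    def verify(apps: list) -> None:
        for app in apps:
            print(f"Business unit verifies {app} at the new site")

    def fail_over(source: str, target: str, apps: list) -> None:
        break_replication(source, target)
        repoint_traffic(source, target)
        verify(apps)

    fail_over("venezuela", "new_jersey", ["erp", "email", "virtual-desktops"])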

Then we brought up all the virtual desktops, and we used the Riverbed mobile client. We e-mailed the client to everyone. So people either worked from home, or we had some very good partners that gave us office space where people could use the computers. They loaded the Riverbed mobile client on those computers, brought up the virtual desktops, people went to work, and the business didn't go away.

Gardner: So you were able to say yes, even when a factory burned to the ground. That's pretty impressive.

Giambruno: This is a real-world example of how you can do it, and it wasn't a lot of effort. It's this whole idea of simplicity, where you're just not putting the complexity into the system. I always go back to this iPad view of the world, where the business just wants to know what's available and we will do the rest underneath.

This high degree of virtualization lets us move all of this data around the world -- for DR, for development, and for a myriad of other uses. We keep finding new ways to use this capability.

Gardner: I suppose it elevates the concept of fit for purpose to that data-center level?

Redundancy and expense

Giambruno: Correct. And some of the other unintended consequences are interesting. Take redundancy and expense. Two is one and one is none in a data center. But do you really need to be fully redundant, if when something happens you can just switch to the other data center?

Maybe I only need one core switch, or whatever. You start to challenge all these old precepts of uptime, because it's almost less expensive to just roll the compute over to the other data center for a little while and get the failure fixed, if I have a four-hour service-level agreement (SLA) with my vendors for repairs.
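
A quick back-of-envelope sketch of that "two is one" math, with purely illustrative numbers rather than Revlon's figures: if failover to the other data center takes minutes, the expected downtime from skipping in-room redundancy may be small enough to question the second device.

    # Illustrative assumptions only, not Revlon's actual numbers.
    failures_per_year = 1        # assumed core-switch failures per year
    repair_sla_hours = 4         # vendor repair SLA while running elsewhere
    failover_minutes = 15        # failover time observed in the Venezuela case

    downtime_hours = failures_per_year * failover_minutes / 60
    print(f"Expected downtime with failover only: {downtime_hours:.2f} h/yr")
    # If that is acceptable, the fully redundant second switch may buy little.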

You can start to question a lot of the "old ways of doing things," or what was the standard, and figure out new ways to operate. One of the things I love about my job is that you can question yourself and figure out what you can do next.

Gardner: One last item that I suppose also fits into this unintended positive consequences issue. You've mentioned something about supply-chain value and getting to the point where you can take your external cloud, push it out to your suppliers and contractors, and begin sharing with permissions and control. This is a much better approach than the old way of virtual private networks (VPNs) and the headaches around access and so forth. So tell me about this extended business-process value that you're starting to explore?

Giambruno: One of the things we realized is that we could start extending our cloud. We spend a lot of time managing security and VPNs, and the audits that have to go around that.

If I could just push out a piece of my application, or make it available to them, they could update their data. That would reduce the number of APIs, the number of connections, all of that complexity that goes out there, and extend our MDM.

Then we can interface our MDM through our cloud to do some of this translation for us. They can enter data, or we can take it from their systems at our cloud edge, securely and in context, and bring it back into our systems.

We think there are huge possibilities around automating and simplifying. But at the end of the day, it's about collaboration with our community of vendors and suppliers, and enabling them to interact with us easily.

So you're always trying to foster those relationships and get whatever synergies you can. If we make it easier for them to interact with us from a systems perspective, it just makes everybody happier. We've got some projects slated for deployment this year. Maybe in a year, if you come back, I can tell you how well we've done or what we've done. But one of the things we're looking at is how we can really change how we operate as a company.

Gardner: That's fascinating. You talked about a lot of efficiency, reducing your footprint on the physical plant, on energy, keeping your costs in line, spinning up more applications and data. But now we're talking about not just efficiencies, but actually doing things entirely differently -- things that could not have been done before the cloud. That to me is really the essence of what we'll be talking about over the next few years.

So, David, thanks so much for your time. We have to leave it there. You've been listening to a sponsored podcast discussion in conjunction with a VMware-organized media roundtable event in San Francisco.

We've been exploring two prime examples of organizations that have gained huge benefits from high degrees of virtualization and aggressive cloud computing adoption with mission-critical applications. The two organizations of course have been Revlon and SAP.

I'd like to thank our guest, David Giambruno, Senior Vice President and CIO of Revlon. Thanks so much, David.

Giambruno: My pleasure.

Gardner: We have also been here with Heinz Roggenkemper, Executive Vice President of Development at SAP Labs.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for joining, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a sponsored podcast on how cloud and virtualization deliver benefits in cost, efficiency, and agility. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in:

Tuesday, June 26, 2012

HP Expert Chat Explores How Insight Remote Support and Insight Online Bring Automation, Self-Solving Capabilities to IT Problems

Transcript of a BriefingsDirect expert chat with HP on new frontiers in automated and remote support.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Dana Gardner: Welcome to a special BriefingsDirect presentation, a sponsored podcast created from a recent HP Expert Chat discussion on new approaches to data center support, remote support, and support automation.

Data centers must do whatever it takes to make businesses lean, agile, and intelligent. Modern support services then need to be able to empower the workers and IT personnel alike to maintain peak control, and to keep the systems and processes performing reliably at lowest cost.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. To help find out more about how to best implement improved and productive IT support processes, I recently moderated an HP Expert Chat session with Tommaso Esmanech, Director of Automation Strategies at HP Technology Services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Tommaso has more than 16 years of HP IT support experience, and has been a leader in designing new innovations in support automation.

In our discussion now, you’ll hear the latest on how HP is revolutionizing support to offer new innovations in support automation and efficiency.

As part of our discussion, we're also joined by two other HP experts, Andy Claiborne, Usability Lead for HP Insight Remote Support, and Paddy Medley, Director of Enterprise Business IT for HP Technology Services.

Our overall discussion begins now with a brief overview from me of the data center agility market, and the need for improved IT support capabilities.

I begin by looking at why industry and business leaders are forcing a rethinking of data centers and their support. Agility is the key nowadays. The speed of business has really never been faster, and it needs to be ever more responsive. It seems that even more time compression is involved in reacting to customers. And reacting to markets now is more than essential -- it's about survival. Those that can't keep up are in a pretty tough, even perilous, situation.

The modern data center therefore must serve many masters, but ultimately it's primarily a tool of business, and it must therefore perform at the speed of business. For example, the impacts of big data are demanding that decisions be increasingly data-driven. A lot more data needs to be tapped and mined. Decisions need to be made based on data -- and those business decisions need to be conducted with ongoing visibility and performance analytics and, because time is important, in near real-time.

But even as data centers support these new levels of agility and analysis, they also need to become cost-reduction centers. Modern IT must do more for less, and that extends especially to ongoing operations and support, which for many are the largest long-term component of the total cost equation.

Big data requirements

Data centers are not only supporting many types of converged infrastructure, and now increasingly virtualized workloads, too. They're also supporting big data requirements -- as we pointed out, data continues to explode -- but they must do all of this efficiently, with increased automation as a key component of that efficiency. And moving toward lower energy costs is increasingly important as well.

Accomplishing this high efficiency, and exploiting the best in performance management and operational governance, is essential to delivering never-failing reliability. And we can now also move toward proactive types of support -- to continue the ongoing improvement and to maintain systems with those high expectations met.

In a nutshell, data centers must do whatever it takes to make businesses lean, agile, and intelligent, as those businesses innovate and excel in their fast-changing markets. Modern support services need to empower workers and IT personnel alike to maintain peak control, even within an ecosystem of support, so constituents can keep these systems and processes performing reliably, at the lowest possible cost.

Fortunately, today's modern data centers are like no others before. For the first time, data centers can accommodate both the interrelated short-term tactical imperatives and the long-term strategic requirements demanded by dynamic businesses.

By delivering fit-for-purpose utilization and converged-infrastructure control -- and by putting a priority on energy conservation and automated support -- total costs are no longer spiraling out of control. By doing all of this correctly -- managing your data center for efficiency and putting in proactive support to sustain operational efficiency -- you can gain huge payoffs.

But there are big challenges in getting there as well. So it's important to execute properly to keep that efficiency continuing and building over time. This is, after all, a journey. So today, we're going to learn about how modern data centers are being built for business demands first and foremost, and we'll see how converged infrastructure methods and technologies are being used to retrofit older data centers into fleet, responsive engines of innovation.

We'll also hear specifically about how HP is redefining modern data-center support, enabling far more insight into performance and operations, and modernizing through efficiency projects like Voyager, Moonshot, and Odyssey -- the big initiatives at HP that we've heard quite a bit about, and that are changing the very definition of the data center.

Moreover, we're going to see how HP Technology Services places a proactive edge on service support. And they’re pioneering support automation and remote support, with all of this designed to make IT more responsive so that the businesses themselves can stay adaptive.

I now have the pleasure of introducing our main speaker, Tommaso Esmanech, Director of Automation Strategies in HP's Technology Services group. He's going to provide an overview of how HP is revolutionizing support to offer new innovations in support automation. Tommaso leads the deployment and business impact of Web services implementation, change management, and technologies intended to deliver faster and more customer-oriented services via the Internet.

Support automation

Tommaso Esmanech: Thank you, Dana, and good day to everyone joining today. Before we dive into how HP is implementing support automation and enabling a new and a next generation of data centers, we need to understand what HP is trying to achieve with support automation.

Our intent is to automate the entire support process, eliminate manual work, and improve productivity across the enterprise. This involves finding solutions for software and hardware, and making hardware and software work seamlessly together, to provide a best-in-class customer experience.

What we need to understand is that the world is changing. Customers are using devices that now provide a new, innovative experience. The front end is becoming easier. Customers demand integrated capabilities and are requesting a seamless experience, though the back end, the data center, is still complex, multilayered, and provided by multiple vendors.

You have network, storage, and management software that need to start working together. We began the journey about 18 months ago at HP to make that change, and we've called it Converged Infrastructure. HP took on the journey mostly because we're the only provider in the industry that supplies all the components to make the data center run seamlessly. We're the only provider of data-center network solutions, storage, servers, and management software.

Let's put this in the context of support automation. When you have hardware and software working together, and you're supplying services within that chemistry, you achieve a powerful position for customers. Furthermore, if you're able to automate the entire support and service process, you provide a win-win situation for you, our customers, for our HP partners, and for HP, of course.

Now, let's sit back and look at how support has changed over the years. Support used to be very manual. A lot of the activity resided on site, where a very qualified workforce -- customer engineers and system engineers -- would interact to resolve and manage situations.

In the early '90s, we saw a change with infrastructure support moving from decentralized to centralized global and regional centers, moving routine activities into those centers and providing a new role for the customer engineers by focusing on value-added infrastructure and capabilities.

In the '90s, we saw the explosion of the Internet. The basic task was to move to the Web: sales, service, our knowledge base, chat, support cases, and case management. A lot of these activities were still manual, relying on human effort to determine the root cause of a problem.

In the 2000s, we saw the growth of machine-to-machine diagnostics. Now, imagine that we can completely revolutionize that experience. We can integrate the entire delivery and support process, leveraging the machine experience, combining it with customer options, and putting the customer in control of all the information -- really blending remote support, onsite, phone, Web, and machine-to-machine into a new automated experience. We believe that unimaginable efficiency can be achieved.

Gardner: Tommaso, I just have a quick question. As we talk about support automation, how is this actually reaching the customer? How do these technologies get into the sites where they’re needed, and what are some of the proof points that this is making an impact?

Intelligent devices

Esmanech: Let me talk about how we’re bringing the support automation to the customer. It starts with how we build intelligence and connectivity into the devices. You probably followed the announcement in February of our new ProLiant servers, our Generation 8 servers.

We have basically embedded more support capabilities into the servers' DNA. We call it Insight Online. As of December 2012, we will be able to support the existing installed base in a similar fashion. This provides the customer a truly one-stop-shop experience for the entire IT data center.

Now that it's easier to utilize and take advantage of an automated support infrastructure, what are the key points? You don't necessarily have to make a phone call. You don't have to wait or provide a description. All those activities are automated, because the machine tells us how it's feeling and what its health status is.

Furthermore, if we compare our automated support infrastructure to standard human interaction in technical support, we've seen a 66 percent improvement in problem-resolution time. All these numbers are great for your business.

How much does it cost in downtime? What if your individual servers are impacting your factory? For us, it's about keeping your systems up and running, making sure that you meet the customer commitments, and delivering your products on time.

You may say, "Well, machine-to-machine support automation existed before." Yes, some of it did. What we've added just recently is a new customer experience. The management of the infrastructure -- the access to information about how it's performing -- was very much limited to local management, with access only to the technical few who knew how to use it and how to read it.

With Insight Online, accessible through the Web, we now provide secure, personalized, anytime/anywhere access to the information. We're totally changing the dynamics, from the few who had access to everyone who needs access to the information. That reduces the long learning curves that were necessary before, and moves to the user-friendly, innovative, and integrated content that our customers are requesting.

Furthermore, Insight Online is integrated in real time with the back end. It's not just a report or dashboard of information that is routinely updated. It truly becomes a management tool through which you can view the infrastructure.

One of the other key aspects of Insight Online, this new Web experience, is that we didn't want to create a new portal. We made a conscious decision to integrate it with the existing capabilities that you're using to do basic support tasks -- accessing a knowledge base, downloading drivers and patches, downloading documentation -- and to make the infrastructure run seamlessly. The access to the information has to be seamless.

We've also leveraged HP Passport, the identification method that you already use in your HP experience, providing one infrastructure rather than multiple access points.

Gardner: Tommaso, can you give us a bit more detail about how it all comes together, the server management and the support experience?

Customer connectivity

Esmanech: It starts with the connectivity on the customer side. We have the new Generation 8 servers, with embedded DNA that directly connects to the HP back end through Insight Remote Support. Through Insight Remote Support, we're able to collect information and provide alerts about events, warranty, and case-management status, and to collect all the information necessary for us to deliver on customer commitments.

In this new version, we've embedded new functions. For example, we allow you to identify the HP service partner that is managing your environment. It could be HP, or it could be a certified HP service partner. We have authentication through HP Passport that permits access to the information on Insight Online. Last but not least, we've been able to achieve a faster installation process, eliminating a lot of the hurdles that made adoption difficult. It's now significantly easier to adopt Insight Online.

What's important to recognize is that, as we've collected this bulk of knowledge and information on how these products are performing, Insight Remote Support does rule matching and event correlation.

It not only provides, as we say, traffic-light alerts. You're able to correlate an event with other events to propose a multipurpose action and, in the end, trigger the appropriate delivery and support processes. For example, we can automatically send the right part to you when you need it to manage the device. We link with the standard support processes.
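
As a minimal illustration of rule matching and event correlation -- not HP's actual rules or event codes -- several related events from one device can be collapsed into a single proposed action, such as shipping a part:

    # Hypothetical events and correlation rules for illustration only.
    EVENTS = [
        {"device": "srv-01", "code": "DISK_PREDICTIVE_FAIL"},
        {"device": "srv-01", "code": "RAID_DEGRADED"},
    ]

    RULES = {
        frozenset({"DISK_PREDICTIVE_FAIL", "RAID_DEGRADED"}):
            "ship replacement disk and open a support case",
    }

    def correlate(events: list) -> None:
        by_device: dict = {}
        for e in events:
            by_device.setdefault(e["device"], set()).add(e["code"])
        for device, codes in by_device.items():
            for pattern, action in RULES.items():
                if pattern <= codes:  # every code in the pattern was observed
                    print(f"{device}: {action}")

    correlate(EVENTS)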

When information flows from the customer side into HP support, we have access to the customer's environment through our dashboard in Insight Online. This provides alerts and information about how the devices are performing and automatically links warranties. It informs the staff of when warranties are going to expire, so you can be more proactive about renewing them. It also automatically links support cases to events, and with one click you can navigate to the website.

One new feature of Insight Online is access for our HP partners. I talked earlier about identifying the partner that is actually working on the device. What we now have is a new partner view, again through HP Support Center and Insight Online. It uses a new tab called My Customers, and now partners can be part of the entire interaction by managing devices on behalf of the customer.

You don't have to install any of your own software. You don't have to develop it. We're providing the tools to make you more productive right from the start -- by installing HP servers, HP infrastructure, and data-center network and storage, you get new tools that give you more efficiency.

HP Support Center with Insight Online also provides access to multiple users. You could be an account manager who is going to meet the customer and wants to talk about that infrastructure and how it's performing. You log onto Insight Online and review the information.

Your HP partner can automatically view the information before even going on site and taking action on a customer device. You will have everything accessible. If users complain that the infrastructure is not performing, you can view the management software and know what is actually going on.

You can gain all of that without having to be in the environment. It kind of gives you your life back -- that's the way I'd like you to see it. Now, let's also look at this in terms of security. You have information flowing from your data center back to HP and now accessible online.

Security and privacy

First of all, security and privacy are extremely important. We compare our privacy policy against the requirements of all the countries we do business in. Security is highly scrutinized. We've been audited and certified for our security, and it's extremely important for us to take care of your security concerns.

Gardner: Tommaso, one of the things I hear quite a bit from folks is that they’re trying to understand how this all works in a fairly complex environment, like a data center, with many people involved with support. There are individuals working on the customer IT infrastructure internally, self-maintainers as well, within that group.

But they’re also relying on partners, and there are other vendors and other devices and equipment and technologies involved. So how does the support automation capability that you have been describing address and manage a fairly fragmented support environment like that?

Esmanech: It is indeed one of the questions we asked ourselves when we started looking at how to solve today's problem: how do we give customers something more than just management software? It's all about the users who need to access the information.

As I said before, access through a management console is limited to the few who can reach that environment, because they're within the network or have the knowledge to use the tools. With the new experience, by providing a cloud-based service in support automation, we're able to give the customer tools that enable the right people to access the right information to do the right job.

Insight Online lets the customer share devices, or groups of devices, with multiple users through its Web-based capabilities. The customer creates the groups, and the customer manages them. So you're in control of setting up those groups, saying who has the right to view the information and what he is able to do with it.

Another important aspect is security when employees move on. It's part of life: you have somebody working for you, and tomorrow he moves to another organization. You don't want that individual to have access to your information any longer. So we've given you the ability to control who is accessing information and, when needed, to remove a user's right to go into HP Support Center and Insight Online and see your environment. It's not only providing access, but also controlling access.
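
The group-based sharing and revocation model described here can be sketched as a small access-control structure. Everything below is hypothetical -- group names, user IDs, and the API shape -- but it shows the two operations Esmanech highlights: customer-managed groups, and one-step revocation when an employee moves on.

    # A sketch of customer-managed device groups with revocation.
    class DeviceGroups:
        def __init__(self):
            self.group_devices: dict = {}   # group -> set of device ids
            self.group_members: dict = {}   # group -> set of user ids

        def share(self, group: str, devices: set, users: set) -> None:
            self.group_devices[group] = devices
            self.group_members[group] = users

        def can_view(self, user: str, device: str) -> bool:
            return any(user in self.group_members.get(g, set())
                       and device in devs
                       for g, devs in self.group_devices.items())

        def revoke(self, user: str) -> None:
            """Employee moves on: drop them from every group at once."""
            for members in self.group_members.values():
                members.discard(user)

    acl = DeviceGroups()
    acl.share("emea-servers", {"srv-01", "srv-02"}, {"alice", "bob"})
    acl.revoke("bob")
    assert not acl.can_view("bob", "srv-01")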

Let me take another look at how things are changing. We have this easy-to-adopt Insight Remote Support. You have this new access methodology, and you have all this knowledge, information, and content flowing from the customer environment into the hands of the right people to keep the system up and running.

If you are under warranty, which is the minimal requirement to take advantage of this infrastructure, you still have a self-solve capability. In some cases, you have to figure out what to do yourself. While information is provided, it's still up to you.

We've created a new portfolio of services that is taking advantage of this new knowledge and infrastructure to provide new value to the customer.

Proactive care

On the technology side, we need to look at the proactive care service. First of all, a technical account manager is assigned as a single point of contact. Several reports are sent or made available to the customer, and incident reports are reviewed with the technical account manager.

This allows them to review configuration, performance, and security, and match them against best practices. It allows them to understand which software versions will keep the infrastructure up and running at the optimal level.

I want to close with a few takeaways. First of all, products and services have come together to provide an innovative and exciting user experience -- helping to guarantee 24x7 coverage and providing anywhere/anytime, cloud-based, secure access to support, while managing who can receive such information.

We've combined this with a new portfolio that takes advantage of all of HP's expertise and know-how. Now partners, customers, and HP experts work together to dramatically increase uptime and achieve that 66 percent efficiency improvement.

This concludes our main presentation, and I want to turn it back to you, Dana, for our Q&A session.

Gardner: Thank you, Tommaso. I'd like to introduce to our audience a couple more experts that we have with us today.

We're here with Andrew Claiborne, Usability Lead for HP Insight Remote Support. Andy has developed HP remote support solutions for a half-dozen years within HP’s internal development labs. He also developed portions of the HP Insight Remote Support capabilities with a special focus on usability.

We're also here with Paddy Medley, Director of Enterprise Business IT for HP Technology Services. Paddy has more than 25 years of experience in the R&D of technology solutions for the HP services organization, responsible for the formulation and execution of the technology solutions that underpin the delivery of HP technology services. Welcome to you both.

Let me start with you, Paddy, about licensing. Can customers use the full functions of iLO 4 and the new HP SIM without any licensing issues?

Eliminate licensing issues

Paddy Medley: The good news, Dana, is that what we're trying to do with this solution is make it as pervasive as possible and eliminate licensing issues. HP SIM is essentially a product attribute. Once a customer purchases a server or storage device from HP, or has such a device under a service contract, they're entitled to HP SIM by default.

iLO really comes in two formats: the standard format and the advanced format. The standard format is effectively free, and the advanced format is for a fee. The advanced format has additional facilities, such as support for virtual media, directory support, and so on.

Gardner: Thank you. We have a question here directed at Insight Remote Support. It’s about the software. They're asking, is it included, and is it difficult to install?

Medley: The preface to the first answer applies to this answer as well. What we've done with our overall solution is make it as easy to install as possible, with a huge amount of human-factors effort behind that. At its most basic level, what's required is the Insight Remote Support software, and that needs to be installed on a Windows-based system -- either a physical one or a Windows guest on VMware. That's pretty pervasive.

The actual install process is pretty straightforward and very intuitive. As I said, it's an area where we’ve gone through extensive human factors to make that as easy as possible to install.

The other part of that is that if the customer has Insight Manager already installed, they'll actually inherit its features; there's an integration point there. For instance, if Insight Manager has already discovered a number of devices in the customer's environment, we'll inherit those in Insight Remote Support, and for pertinent events occurring on those systems, we'll trace them through Insight Manager into Insight Remote Support and back to HP.

Gardner: Andy Claiborne, a question for you. Our viewers say that they're working to modernize their infrastructure and virtualize their environment. They'd like to implement support automation like Insight Remote Support, but they feel the cost is too high. What does it cost to implement this?

Andy Claiborne: Previous versions of Insight Remote Support were very challenging to get installed, especially at large customer sites. Addressing that has been one of the key goals we've baked into the latest release of our support automation tools.

If you have just a couple of Gen8 ProLiants that you want to deploy in your environment and support using our support automation solutions, those systems are able to connect directly to HP, and that capability is just baked into their firmware. So it's really straightforward to set those up.

Hosting device

If you have a bunch of legacy devices in your environment, you’d have to set up what we call a hosting device, which is one system that sits in your environment that listens to all of your devices and sends service events back to HP. For our latest release, we've dramatically reduced the amount of time that it takes to set up, install, and configure the hosting device and implement remote support in your environment.

In the labs, we have cases that used to take our expert testers 45 minutes to get through. Our testers can now get through them in five minutes. So it should be a dramatic improvement, and it should be relatively easy.

Gardner: Here's a related question. How soon can we recover the upfront cost of implementing HP support automation? I think this is really getting to the return-on-investment (ROI) equation.

Claiborne: We look at two aspects. What does it cost to deploy it, and what benefit do you get from having remote support? As we said, the cost is greatly reduced from previous releases.

The benefit, as Tommaso mentioned, is that in looking at our case-resolution data across thousands of cases that have been opened, we see a 66 percent reduction in problem-resolution time. When you think about just how incredibly expensive it is when one of your critical systems goes down, and how much it costs every second that system is down, the benefits can be huge. So the payoff should be pretty quick.

Gardner: Okay, Tommaso, a question for you. They ask, why is Insight Remote Support mandatory for proactive care?

Esmanech: Think about the amount of data that we need to collect to deliver proactive care. If we were to do all that activity manually, it would make the value proposition of proactive care -- event and revision management -- almost impossible to sustain. So we automate it. Through the entire support process and the collection of the data, we're able to provide a price point that is very interesting and a great value proposition for our customers.

A customer can choose Foundation Care as part of our portfolio but, of course, the price point and the value it provides are going to be different.

Gardner: Here is a question that gets to the heart of the issue about your getting data from inside of other people's systems. They ask, our company has very strict security requirements. How does HP ensure the security of this data?

Esmanech: That is really one of the most-asked questions. Once we start talking with the security experts at the customer sites, we're able to resolve all the concerns.

Our security is multilayered. It starts with the information collected at the customer site. First of all, the customer has visibility into everything that we collect. When we collect it and transfer it to HP's back end, all that information is encrypted. When we talk about providing access to Insight Online through the Web, the access goes through HTTPS, so access to the information is encrypted.

Passwords, for example, require a minimum number of alphanumeric characters. Also, the customer has knowledge of, and information about, who is accessing and viewing his devices. Last but not least, we have certified our environment end-to-end for eTrust, one of the most important security certifications for these types of services and infrastructure.

Product support


Gardner: Paddy, a question from an organization with ProLiant servers as well as HP storage and networking products. Will Insight Remote Support cover all of those products, or just the ProLiant servers?

Medley: We've had our initial release of the new Insight Remote Support and Insight Online solution. The initial release covers Gen8 products only. In parallel with that, we're working on the second release, which will be coming out in the summer.

That will, in effect, provide similar support for all of our legacy devices across the network, storage, and server spaces, with the exception of three private tools, which we're looking at delivering in a future release. Our objective is to have pervasive coverage across all of our enterprise-based products.

Gardner: Okay, is there an upgrade path for Insight Remote Support, so that older versions can gain some of the new capabilities?

Medley: There is indeed. We have our legacy remote support solution, which has very significant usage at customer sites. We're providing an upgrade path for customers to migrate from that legacy solution to our new solution, and that's part of the bundle that will go with the summer release I just spoke about.

Gardner: Andy, we have a question here from another user. They have a lot of ProLiant servers running Insight Remote Support today, and they're purchasing some of the new ProLiant Gen8s. Will different versions of Insight Remote Support interact, and if so, how would that work?

Claiborne: A lot of you might have spent a lot of time and energy deploying our current generation of remote support tools, and you're wondering what it does to the mix when you add a Gen8 ProLiant.

First, if you're happy with your current set of features, you can monitor the Gen8 ProLiants with the current Insight Remote Support tools, just as you would with any other ProLiant using agents running on the operating system. If you want to get some of the benefits of the new HP Insight Online portal or use the baked-in firmware-enabled remote support features of the new Gen8 ProLiants, you would have to upgrade to the latest version of Insight Remote Support, and we’ve tried to make this as easy as possible. Today, we have Remote Support Standard and Remote Support Advanced.

Our next release of Remote Support, Version 7.0.5, will allow most Remote Support Standard customers and some Remote Support Advanced customers to upgrade automatically. We've made this upgrade as seamless as possible. It should be hands-off. We will import all of your device data, credentials, site information, contact information, and event history into the new tool.

Also, we’ve gone through extensive testing to make sure that, for example, if you had an Open Service event in your current Version 5 solution and you upgrade to Version 7, the service event will still be visible in your user interface and you’ll be able to get updates for it.

Hands-off upgrade

For the remainder of Remote Support Advanced customers -- if you have mission-critical features, if you're monitoring something like an XP Array or a Dynamic Smart Cooling device, things like that -- support will come in the subsequent release, Version 7.1. With that, we will also implement a seamless, hands-off, comprehensive upgrade process.

Gardner: A user asks, Do I need a dedicated server to run Insight Remote Support?

Claiborne: If you're running Insight Remote Support, you have this hosting device in your environment that listens to events from all of your devices in the environment. That doesn't need to be a dedicated server and it doesn't need to be running on HP hardware either. You can run that on any computer that meets the minimum system requirements, and you can even run that on a VMware box.

We end up doing a lot of our testing in the lab in VMware systems, and we’ve realized that a lot of you out there are probably implementing VMware systems in your customer environments. So VMware is supported as well.

The one thing to remember, though, is that this box is the conduit for service events from your environment to HP. So you need to make sure that the box is available and turned on and that it's not a box that’s going to be accidentally powered off over the weekend or something like that.

Gardner: Back to Tommaso, and the question is, what is the difference between Insight Online and Insight Remote Support?

Esmanech: That's come up before. The easy way to describe it is that Insight Online is the Web access point for Insight Remote Support. It's part of the entire support-information ecosystem. We do recognize that Insight Remote Support has a management console, where you can view events and devices, but that's limited to access within the environment, within the VPN, and to the few people who know how to manage the environment.

You also have to recognize that Insight Remote Support goes beyond just a management console. It does event correlation and it collects all the data. As Andy said, it's a conduit back to HP, and that conduit leads to Insight Online. The way it is now, there are two systems, but they're part of the same ecosystem.

Gardner: Tommaso, you mentioned self-solve services. What are those, and what did you mean?

Esmanech: We define self-solve as those activities and capabilities through which a customer can find a solution to a problem by himself. For example, you might go to a support website and access the knowledge base, finding articles and information on how to troubleshoot, or solutions to the problem. Downloading drivers would also be a component of self-solve.

By themselves, they're not services that we sell, but they're part of our services support portfolio. It's part of doing business.

Some of the self-solve capabilities may be available to customers with contracts, versus customers who have a warranty or who don't even have an HP device, but we give the customer the ability to solve problems by himself.

Future direction

Gardner: Next one to you, Paddy. This is sort of a big question. They're asking, can you predict HP support automation's direction for the next 10 years? Can you look into your crystal ball and tell us what people should expect in terms of the capabilities to come?

Medley: We're seeing a number of trends in the industry. We talked earlier about the convergence of storage, servers, and networks into single stacks, and the converged management of that environment.

We're seeing a move to virtualization. Storage continues to grow at a tremendous rate, and hardware continues to become more and more reliable. Against that backdrop, the future is different from the past in terms of service and service needs. We're seeing a greater need for interoperability management, revision and configuration management, and for areas like performance and security.

In other words, we're seeing a greater need for proactive, as well as reactive, service support. The beauty of the Insight Online solution is that it provides us a framework to go down that path. It provides the basic framework for remote event monitoring -- reactive monitoring, in which events occur and get back to HP -- but also for delivering proactive service.

What we're doing with this solution is that, as we collect configuration and event information from customer environments, that configuration and event information is securely transported back to HP and loaded into a database against a defined data model.

We're bringing together all the reference data associated with the products that we support, and then providing a set of analytics that analyze the collected data against that reference data, producing recommendations, actions, and event management. In fact, that aggregation, and the ability to analyze within that aggregated back end, is really providing us with a key differentiator.
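
A minimal sketch of that back-end pattern -- collected configuration compared against reference data to produce recommendations -- might look like the following; component names and revision strings are illustrative assumptions, not HP's data model.

    # Recommended revisions, standing in for vendor reference data.
    REFERENCE = {"ILO_FW": "1.20", "RAID_FW": "3.56"}

    # Configuration as collected from one hypothetical device.
    collected = {"device": "srv-01", "ILO_FW": "1.13", "RAID_FW": "3.56"}

    def recommend(config: dict, reference: dict) -> list:
        """Compare collected configuration against reference revisions."""
        actions = []
        for component, wanted in reference.items():
            if config.get(component) != wanted:
                actions.append(f"{config['device']}: update {component} "
                               f"{config.get(component)} -> {wanted}")
        return actions

    for action in recommend(collected, REFERENCE):
        print(action)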

Then all of that information is presented through the Insight Online portal, along with our knowledge bases, forums, and other reference data. It's that whole aggregation that's really the sweet spot of this overall solution.

Gardner: Well, that sounds very exciting. I'm afraid we’ll have to leave it there. A huge thanks to Tommaso Esmanech, Andy Claiborne and Paddy Medley.

I'd also like to thank you, our audience, for taking the time, and I hope this was helpful and useful for you. I'm Dana Gardner, Principal Analyst at Interarbor Solutions. Goodbye until next time and the next HP Expert Chat session.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect expert chat with HP on new frontiers in automated and remote support. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

You may also be interested in: