Thursday, January 22, 2009

IT Repositories Help Financial Giant Manage Change Amid Complex Systems Consolidation

Transcript of a BriefingsDirect podcast on the role of repositories in data integration for large enterprises. Disclaimer: The views expressed in the following are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on solutions for enterprise repositories in service-oriented architectures (SOAs).

We'll look at how enterprise systems of record increasingly need to be managed by repositories, and how assets need to be quickly federated and integrated in cases of mergers and acquisitions (M&As) or unexpected business consolidations. Done incorrectly, managing and consolidating data from various systems of record may create multiple information systems that "disagree" about the same piece of information.

Using SOA and repositories effectively, however, can pave the way for harmonious data integration and service mappings across these critical systems. Getting your systems-of-record act together in conjunction with enterprise repository solutions provides more flexibility for change and disruptions -- challenges not unheard of in today's tough economy.

While the challenge is significant, gaining new value from managing repositories and SOA governance sets a stage for much greater visibility and agility, when facing wholesale shifts, unexpected or otherwise, in how applications and data are used.

We're going to provide an in-depth look at how enterprise repository solutions are evolving in conjunction with systems of record. We're joined by two IT enterprise architects from Wachovia, a bank currently moving through a massive acquisition process with Wells Fargo.

We welcome Harry Karr, an IT architect at Wachovia. Thanks for coming on the show, Harry.

Harry Karr: Thank you very much. I appreciate it.

Gardner: We're also joined by Hemesh Yadav, also an IT architect at Wachovia. Hi, Hemesh.

Hemesh Yadav: Thank you so much, Dana.

Gardner: Now, we don't really want to focus on the acquisition so much as we want to focus on the systems of record, and we are looking at managing multiple systems under perhaps difficult circumstances.

Let's start with you, Harry. Tell us how IT architecture is destiny when it comes to these systems of record, and how to manage multiple systems effectively.

Karr: Well, the hardest part is keeping track of what we have, especially in times of mergers and acquisitions, but also at any other time. When we are trying to add new functionality, the first thing you have to know is what you have in place. So, keeping that up to date, knowing what we have is probably the biggest challenge.

Gardner: How is it different now from the past, when it comes to the tools you have at your disposal to manage these various systems and try to bring them into some concert or harmony?

More Distributed Systems

Karr: The difference is that we have more distributed systems now. We have services being offered by a half dozen or a dozen different service containers. We have many different clients hitting those services. We have many more pieces to the puzzle than we had before, and they're all owned by different people, different groups, and different teams.

Keeping up with that is much harder than it used to be with a single monolithic type of application, as in knowing where the touch points are, what the integration needs are, and where the security mechanisms are applied. There are a lot of things you have to know between the applications.

Gardner: How does the repository come into play here? How does this fit into the puzzle in order to make that complexity a bit more manageable?

Karr: I like to talk about a repository solution. A repository solution has more than one physical repository, and each one holds certain specific information, a slice of the data. Taken together, they give us a good enterprise repository solution and a picture of what we have.

In our world, we do a lot of outsourcing, and we're going through a change of structures, determining what we're going to do right now. We're being acquired by Wells Fargo, and all those changes mean different people will be involved with different things.

If something isn't written down, you've lost it. It's not going to be there. What we need to do is make sure that we have a record of what's there, so that anybody in the bank can go back and look and say, "We have this at this point, and these are the touch points involved, this is the security, and these are the access requirements." Anything they need to know about those touch points can be known from that repository solution.
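
To make Karr's idea concrete, here is a minimal sketch in Python of what such a federated lookup might look like: several physical repositories each hold one slice of the metadata about an asset, and a thin layer merges the slices into a single picture. All repository names, fields, and values are hypothetical illustrations, not Wachovia's actual systems.

```python
# A minimal sketch of a federated "repository solution": each physical
# repository holds one slice of the metadata, and a thin lookup layer
# merges the slices. All names and fields here are hypothetical.

service_registry = {   # slice 1: what services exist
    "GetAccountBalance": {"owner": "Core Banking", "version": "2.1"},
}
security_repo = {      # slice 2: how they are secured
    "GetAccountBalance": {"auth": "mutual SSL", "roles": ["teller", "branch-mgr"]},
}
contract_repo = {      # slice 3: who uses them, and under what SLA
    "GetAccountBalance": {"consumers": ["OnlineBanking"], "sla_ms": 500},
}

def lookup(asset_name):
    """Return the combined enterprise view of one asset."""
    merged = {}
    for repo in (service_registry, security_repo, contract_repo):
        merged.update(repo.get(asset_name, {}))
    return merged

print(lookup("GetAccountBalance"))
```

The point is the merge: any architect can ask one question and get the touch points, security, and access requirements back in a single answer.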

Gardner: I suppose it must be a great comfort if you were to find yourself working with another organization that had gone through the same diligence and has also embarked with repository solutions.

Karr: Oh yeah, it definitely helps. When you first sit down together, you are out there fact-finding. What do you have? What do I have? Having that in a repository, I can look it up and research it. It's readily accessible. I don't have to wait and call another meeting with the right experts in place who weren't there at the first meeting. I have all the information there, and I could talk to one architect to look up all that information.

Gardner: And this seems to be a reactive and proactive benefit. That is to say, you can look into these systems and understand more about your applications and data, but you could also then execute through these repositories, apply policies and rules, and then have a certain level of functionality follow through. Is that correct?

Karr: It is correct, and we have done that to a limited extent. I think there's room to do it a lot more than we've done it, but right now we've just done very minimal amounts of that.

Gardner: Okay. Hemesh, what sort of role do you have in this, and how do these various federated repositories come together effectively?

One Place to Go

Yadav: Dana, my experience is based on my previous job and my current position. I was involved with the repository implementation for Bank of America, when we picked the HP repository (HP SOA Systinet). At the same time, I was working with Harry to pick a repository for an enterprise solution for Wachovia. I'd just like to add to what Harry was trying to convey here. If you have a single repository and multiple federated systems, you have one place to go.

This is especially true around mergers and acquisitions, when you are trying to consolidate all the information into one place. I've personally used a repository for mergers and acquisitions. When Bank of America merged with another large banking operation, we put all their Web services into one place, and we put our Web services into one place.

Even though you put their services and your services into one place, if you don't have a common place to store and keep the information in a very organized way -- if you don't have a single repository and a good taxonomy to classify those services -- then even with a single enterprise solution, it doesn't really make you very productive.

We tried to address that with a single repository, a single classification scheme, a single taxonomy, and a single set of policies. So, you have policies for design-time and runtime implementation, naming conventions, and how you store the metadata, even though you have different source systems. If you build an enterprise repository and implement a single enterprise metamodel, it will be very easy to classify, store, access, and understand the data.

Karr: I'd like to add to that. Hemesh brought up a good point about the consistency of the data in a repository. In my mind, there's no value at all in merely putting information into a repository. The value is when we get the information out, and in order to get it out, you have to be able to query it. Storing it with a consistent taxonomy and consistent metadata is the only way you can get the information back out again.
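
Karr's point -- that the value is in getting information back out -- can be sketched in a few lines of Python. The taxonomy facets and entries below are hypothetical; the idea is that only entries classified with a consistent, controlled vocabulary can be reliably queried.

```python
# Sketch: a consistent taxonomy is what makes a repository queryable.
# Entries share controlled facets ("domain", "lifecycle"); the values
# shown are hypothetical examples.

catalog = [
    {"name": "GetAccountBalance", "domain": "retail-banking",  "lifecycle": "production"},
    {"name": "QuoteFXRate",       "domain": "capital-markets", "lifecycle": "production"},
    {"name": "OpenAccount",       "domain": "retail-banking",  "lifecycle": "design"},
]

def query(domain=None, lifecycle=None):
    """Filter the catalog on its taxonomy facets."""
    return [e for e in catalog
            if (domain is None or e["domain"] == domain)
            and (lifecycle is None or e["lifecycle"] == lifecycle)]

# "What production services does retail banking already have?"
print(query(domain="retail-banking", lifecycle="production"))
```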

Gardner: Hemesh also raised an interesting point about this lifecycle benefit, bridging the gap between design time and runtime. This has some relationship to a configuration management database (CMDB), moving into quality assurance and the whole production process around services, perhaps even in an agile situation where there is very rapid development and many iterations managing that whole lifecycle. Harry, do you have any thoughts on how powerful repositories are in terms of this lifecycle benefit, perhaps, integrating across internal processes?

Karr: You can make the case that there is a selling point for a repository at any point in the lifecycle, and taken together it's a huge story. Even at the original conception stage: do we have this already? Is it already available? How does it fit with what we already have? Going into development: are there pieces that can be reused? How do people know what's been developed? Where do you put the Web Services Description Language (WSDL) files and schemas when you are developing services?

The next point is around testing. Testing needs to match the business requirements. If those requirements are not in a repository, are they being handed over in a notebook somewhere? Where do they exist? A repository helps a great deal there.

Then, in production -- deployments, or any kind of troubleshooting -- you need to know what changes have happened. What's going on with that application? What's changed since the last time it was running properly? Without the tie-in across all those different repositories, you lose track of what you have. A repository helps at every single stage of the lifecycle.

Gardner: Almost on a philosophical level, it seems that repositories help balance the best of organizational decentralization with centralization of policy, access, and control. You want to have both. Does that strike you as fair?

Karr: It does. There are different corporate mandates for centralization, how much is centralized versus not, but the data can be centralized whether the teams are or not. The organizational structure shouldn't be affected by how the repositories are put in place.

Gardner: Hemesh, what sort of requirements do you look for in choosing repositories? I should think it's better to have more standardization, and the ability to embrace as much data and as many system formats and usage structures as possible. Right?

Defining a Metamodel

Yadav: When we looked at a repository, we looked at a couple of key points. Number one, you should be able to define a metamodel for the repository. If you have a repository that comes with an out-of-the-box metamodel, there is only one specific way you can define the services or load them into the repository. But, we wanted to customize those metamodels. We wanted to change those models so that we could tailor them to Wachovia's needs. That's number one.

Second, some products, when you store services in them, keep everything at the Web-service level. They don't keep it at the operation level. A Web service can have multiple operations, but these products don't treat operations as first-class citizens. That's a gray area, and most of the vendors have not done a great job there.

The third point was the ability to generate good reporting. Even if you load the data into a repository, some tools offer no good way to generate a report from it. We found that some products are very good at that, but some products had a lot of weaknesses.

Fourth, we wanted to make sure that when we stored any data in the repository, we would be able to integrate it with extensible markup language (XML) appliances; universal description, discovery, and integration (UDDI) registries; or SOA management solutions, so we could have complete closed-loop governance.

So, you define your policies, you mandate your policies, you track your policies back, you make sure you are meeting your service-level agreements (SLAs), and you generate good reporting. These are the three or four items we looked at very closely when we evaluated the products.
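
To illustrate the first two requirements -- a customizable metamodel in which a service's operations are first-class citizens -- here is a minimal Python sketch, along with a toy design-time policy check of the sort closed-loop governance would automate. The model and the naming rule are hypothetical, not the schema of HP SOA Systinet or any other product.

```python
# Sketch of a metamodel where operations, not just Web services, are
# first-class entities that carry their own metadata and can be governed
# individually. The model and the naming policy are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    input_schema: str
    output_schema: str

@dataclass
class Service:
    name: str
    wsdl_url: str
    operations: list = field(default_factory=list)

def check_naming_policy(svc):
    """Design-time policy: operation names should be verbNoun style."""
    return [f"{svc.name}.{op.name}: should start with a lowercase verb"
            for op in svc.operations if not op.name[0].islower()]

svc = Service("AccountService", "http://example.internal/account?wsdl",
              [Operation("getBalance", "AcctReq.xsd", "BalanceResp.xsd"),
               Operation("TransferFunds", "XferReq.xsd", "XferResp.xsd")])
print(check_naming_policy(svc))  # flags "TransferFunds"
```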

Gardner: I suppose another important aspect is how to get started on something like this. Harry, given that you're managing multiple repositories, you're probably going to be managing even more over the course of your business activities.

Is there an opportunity to sort of crawl, walk, and run with these? And, once you do get into sort of a jog or are moving along rapidly, how widely can you establish an SOA, vis-à-vis these repositories?

Karr: Well, it's hard, because you want to look at the whole enterprise repository solution. You want to look at what touch points need to exist between the different repositories. Once you map that out, then choosing and working on one repository at a time, putting it in place for a certain unit or division within your company, will work very well.

If you don't have the big picture to start with, then, when it comes time to integrate those repositories into a cohesive picture of what you have, you are going to be stuck. You'll have to redo a lot of your work, and that's very expensive.

Gardner: What recommendations do you have for folks in order to avoid that?

Karr: It's important to look at the whole picture. You need to look at what's important across all the different repositories. You need to have some way of storing your business-process models. That includes business rules, services, information about your systems of record, information about the data, contracts, who's using what, requirements for change management, SLA management, problem management, organizational structure, and process flows.

All those different repositories need to have touch points. Mapping that out ahead of time will give you an idea of what to do with any one of those, as you put each one in place.

Yadav: One thing I want to add here is that when we were looking into repositories, we were looking at how to implement a repository where we could leverage the implementation for risk lifecycle management and SOA governance. We wanted to keep that bigger picture in mind, so that whatever information we had in place, we could capture it and have it be useful for SOA governance purposes as well.

Gardner: That's interesting. We have a need for backward compatibility, if you will, to be inclusive of as many systems and repositories as we can, but we also want to provide insurance for the future. We want to be able to provide a better business process model and management capability with tools along those lines.

How do you view this as a return on investment (ROI), Harry? Is this money well spent, in that if you do this right, it's going to have many returns for quite some time?

Hand in Hand

Karr: I agree completely. It is going to have a lot of benefits. If you can make the business case for governance of any sort, then the repository goes hand in hand with that governance -- being able to track what you are doing, your processes, everything involved. The repository is a key piece of the governance. I don't think that anybody would disagree that governance has a great business case behind it, and the repository is part of that governance model.

Gardner: I've been in some conversations recently where we've gotten into the notion that governance is not only going to be powerful for managing information systems, but ultimately the same repository can be applied -- and the rules and policies extended -- to actual business processes and across front-office and back-office activities. Do you have a sense that we're going to move this beyond just IT?

Karr: Definitely. As I mentioned a minute ago, the business process models are the business. The business owns them, but they need to see what goes on beyond that. How is that implemented? How does that become real within the services that are being called? Business process models have an active part in this.

Everybody talks about alignment between IT and business. The repository is the key piece of that. In order to have some kind of alignment, you have to have visibility, and the repository gives you that visibility.

Gardner: Do you agree, Hemesh, that this repository solution set provides a better way, a broker or pivot point, across multiple aspects of a business?

Yadav: Definitely. If you start doing your top-down business process modeling, and you try to create a business activity, build a business service, or implement business services, that's the only way you can track it end to end.

You say, "Okay, this is the business activity, and this is a service that belongs to that particular business activity." If you don't have a single point of storage, or if you don't have an enterprise repository, it's very hard to align business needs with IT implementation.

Gardner: Let's look further out on the horizon, as people start doing sourcing across different types of hosting scenarios and models. We hear a lot about cloud computing these days. Do you sense that these repositories and this solution approach can help you manage services from a variety of different environments, different sourcing, and perhaps even entirely different business models?

Karr: We do sourcing in a lot of different ways. We outsource development. We have services that we buy from other vendors. We have services that are hosted by other vendors -- all those different models.

Getting back to one of your original questions, Dana, about why repositories matter more now than they used to: this is much more distributed than it ever was before, and that trend is only going to increase. As we become a global company, we need to be able to communicate and have visibility into what each of the pieces is doing. And, as the distribution crosses company boundaries, it gets even more important.

Gardner: Do you have anything to add on that, Hemesh?

Yadav: No, I agree with Harry. I'd just like to give you an example of how we implemented this in my previous job -- how we leveraged the SOA repository for end-to-end development. We started with the service architecture very top-down. We put the data into a service repository. The designer would come and add WSDL information to the repository.

Then, the offshore resources would add additional metadata about the system. Then, when we'd go to the hosting team, they would add additional information like WSDL endpoints, service hosting data centers, and other things. So the service repository really helped us to bring different parties to work together and share the information in a common way.

Gardner: Now, you're in the banking industry, and we're going to see more regulation. I don't need to be an analyst to make a big bold prediction that there is going to be increased regulation in the banking industry.

Do these repositories and solutions benefit the compliance and regulations that might also be fast changing in the coming years?

Regulation Means Tracking


Karr: Good point. I don't think I'll question the fact that we're going to have more regulation. It's going to happen. Any time regulations come in, they require us to track more of what we have, what we're doing, and how we're doing it, and that's where repositories come in. That information can't live only in the different groups that own it. It needs to be visible at the highest level throughout the company. That's where a repository really helps, and having a centralized repository solution is a big part of that.

Gardner: Harry, if you were to go write a book about your experiences with repositories, what would you say would be the first two or three most important chapters that you need to get into that for setting people up for this journey?

Karr: The big part to me is looking at the scope. How big a picture are we looking at? Are we looking at something just for SOA? Are we looking at something that includes change management or business process models? Make sure that you all agree on the scope before you start. That's probably the number one point. Then, know where you're going to go from there.

The next point would be around flexibility. How much flexibility do you allow different groups to determine their own metamodels, and how do you make sure the taxonomies are similar? Are you going to allow a lot of flexibility there, or are you going to make sure it's more governed? There are business cases on both sides, but you want to have agreement upfront on what's going to happen there.

Those are the two biggest points. I can't think of anything else that would be on top right now.

Gardner: Hemesh, what sort of chapters would you add to that book?

Yadav: I agree with Harry on the first two chapters, but I'd like to add two more chapters that are close to my heart, especially around SOA governance. I'd like to define my end-to-end governance first, before I implement a service repository, so that I have a clear scope in mind and a clear metamodel in mind to capture the information.

The fourth item is to identify the key stakeholders and the role-based authorization in the system. Based on the regulation discussion we had just now, we need to put real emphasis on who can modify the information, who can access what information, and who can change the information. That way, we know who really owns the data and who is using the data. So, in chapter four, I want to define the key stakeholders, the librarians of the services, and their roles and scope of influence in the repository.

Gardner: Could you finish up our discussion with a little case study on how you brought this about? How did you sell this internally? Both of you seem to feel strongly that this needs to be comprehensive, holistic, and top down, but that makes it more difficult to sell and get buy-in. Harry, how do you get that inclusion and everyone's buy-in?

Karr: We have issues constantly that point to the need for this type of solution. You have issues happen in production, and someone says, "Well, what changed lately?" People need to get that information quickly. It needs to be tied to the systems that were affected and to who is impacted by those changes. That's a very big part of it.

We've built a lot of services. Some of them are great services, and some of them we probably shouldn't have built. We didn't have the visibility to decide that upfront. We didn't have the governance in place to sort that out upfront. So, there's a lot of waste with any of those services. They had to be redone later, which affects the clients and, in some cases, the systems of record.

The next selling point would be around regulations. It's a very big concern to the people making the decisions about the purse strings. In a lot of industries, but especially banking, the regulatory and audit concerns are huge.

We need to look at those as nothing new. We've always had them, and we're going to have more, so that's a big selling point: I can help you with the auditing and regulatory systems that will help you meet your requirements. That's a big selling point right there, because those concerns take up a lot of time at the top levels.

Gardner: Hemesh, last word to you. How do you help sell this in terms of business value to a wider audience inside of your organization?

Yadav: I'll add to what Harry was saying. When I started selling the service repository, one of my key concerns was that if I can't show you what services I have, or if I don't provide enough information to reuse the services, it's very hard for me to justify that you should reuse my services. Service reuse cannot be achieved without implementing a centralized service repository.

The service repository, if you look at it from a business perspective, provides more productivity, more agility, and speed to market, and it reduces silos.

A real-world example is interest-rate services. Most banks have multiple channels, and each channel provides a different implementation of the same services. When you try to put those services into a centralized repository, you start getting feedback: "I have four services." "I have five services."

Our business started getting input that said, by implementing a centralized repository, there is definitely a way to leverage a single service, and that reduces your maintenance cost, production cost, and hosting cost, and provides a lot of market value and implementation agility.

Gardner: We've been discussing the role of SOA and repositories in managing across complex business circumstances like mergers and acquisitions, and how to future-proof your business against complexity and regulation, with the ability to manage across the lifecycle of services in IT, as well as extending into business processes.

Here to help us understand these topics, and I very much appreciate your input, we've been joined by Harry Karr, IT architect at Wachovia. Thank you, Harry.

Karr: Thank you very much. I appreciate it, Dana. It's been enjoyable.

Gardner: We also benefited from the presence of Hemesh Yadav, IT architect, also at Wachovia. Thanks so much, Hemesh.

Yadav: Thank you Dana.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on the role of repositories in data integration for large enterprises. Disclaimer: The views expressed in the podcast are not necessarily those of Wells Fargo & Co. or any of its subsidiaries or affiliates. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Tuesday, January 20, 2009

Enterprises Seek New Ways to Package and Deliver Applications and Data to Mobile Devices

Transcript of BriefingsDirect podcast on new ways to deliver data and applications to mobile workers using Kapow Technologies solutions.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Listen to related webinar. Sponsor: Kapow Technologies.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on bringing more data to the mobile tier. We'll look at innovative ways to extract and make enterprise data ready to be accessed and consumed by mobile device users.

This has been a thorny problem for many years now, and the approach of Kapow Technologies in focusing on the Web browser on the mobile device has some really neat benefits. Kapow's goal is to allow data to be much more efficiently used beyond the limited range and confines of traditional enterprise applications and interfaces, when delivered out through mobile networks.

As enterprises seek to cut costs while boosting real-world productivity, using ubiquitous mobile devices and networks to deliver actionable, real-time data to business workers in the field has never been more economical and has never made more sense.

Here to provide an in-depth look at how more enterprise data can be packaged and delivered effectively to more mobile users is JP Finnell, CEO of Mobility Partners, a wireless mobility consulting firm. Welcome to the show, JP.

JP Finnell: Thank you, Dana.

Gardner: We're also joined by Stefan Andreasen, founder and chief technology officer at Kapow Technologies. Welcome back to the show, Stefan.

Stefan Andreasen: Thank you very much, Dana.

Gardner: We're also joined by Ron Yu, head of marketing at Kapow. Thanks for coming on the show, Ron.

Ron Yu: Thanks for having us, Dana.

Gardner: I want to take a look at the state of mobile applications and the need now to get fresh data out to the field. Why is this a time when the imperative -- economically and in terms of business agility -- has perhaps never been more acute or more important?

Let's take this to JP Finnell. You're in the field and you work with a lot of folks who are dealing with these issues. Why is this such an important time?

Finnell: I used to head up professional services for Nokia worldwide. Before that, I was with Deloitte Consulting, Xerox, and Cambridge Technology Partners for Novell. So, in the past, I've really seen these cycles and adoptions of technologies a number of times, and mobility is different.

Unlike conventional applications, mobile applications involve a huge number of choices to juggle. There are choices about input and output: touch-screen versus QWERTY. For example, we've seen that with RIM recently, where there has been a lot of controversy over the touch-screen Storm device versus the QWERTY Bold. You don't really see that dimension in traditional adoption.

You also have the choice of the device platform. That's also quite different from your traditional choice of development options. A lot of choices have been holding things back, and companies like Kapow are making it much easier for developers to get on board. Hopefully, later on during this podcast, we'll touch on some of the other factors that are coming in place to make 2009 a year when we're going to see some [large scale] adoption of mobility.

Gardner: Now, this complexity has been going on for a long time, and there are many choices. Aside from what we can bring to the solution on the technical side, from your perspective, JP, what is pulling people toward a solution? What are the real benefits of moving to the mobile tier and leaving the PC back in the office?

Finnell: There are a number of elements of suitability. When I was at Nokia, we wrote a book called Work Goes Mobile: Nokia's Lessons from the Leading Edge. According to Wiley Publishing, it's one of the top best-selling books on business mobility. We're seeing that need to be more responsive.

Business processes that are either business-to-employee (B2E) or business-to-business (B2B) are where responsiveness and timeliness are really an issue. I'll talk more later about the application we did in the field for a major bank, where we were able to take substantial cycle time out of the process. So, being more responsive and doing more with less is the motto in 2009.

Gardner: Let's go to Ron Yu. What is it about data in particular that, at this time, can start to help these organizations be more agile and responsive?

Complex Legacy Systems

Yu: What we see within the enterprise is that the IT organization is really buried in the complexity of legacy systems. First and foremost, how do they get real-time access to information that's locked in 20- or 30-year-old systems?

On the other hand, there is a tremendous amount of data that's locked in homegrown applications through Internet portals and applications that have been adopted and developed through the years, either by the IT organization itself or through mergers and acquisitions. When you're trying to integrate all these heterogeneous data sources and applications, it's almost impossible to conceive how you would develop a mobile application. What we see IT focused on today is solving that data problem.

Gardner: And, what is it about being able to get to the data presentation beyond a full-fledged application that is attracting people nowadays?

Yu: The interesting thing is that Kapow is not a mobile company. The reason we're having this discussion today is because Kapow customers have actually brought us into this market. Because of how we have innovatively solved these real-time, heterogeneous, unstructured data challenges, customers have come up with their own ideas of how they can develop mobile apps in real time. That's what Kapow solves for them.

Gardner: Let's go to Stefan. Stefan, what is it exactly that Kapow is doing that these users have innovatively applied to the mobile problem?

Andreasen: Let's just go back to the foundation here -- why is the need for mobile applications growing? It all started with the Internet and easy access to applications through the Web browser. Then, we got laptops, and we could actually access these applications while on the road. The problem is the form factor of the laptop; opening it up at the airport and getting on the 'Net is quite cumbersome.

So, to improve their agility, mobile workers are better off taking the mobile phone out of their pocket and seeing the data right there. That's what's creating the need. The data people want to look at is really what they're already looking at on their laptops. They just want to move it to a new medium that's more agile and handier, one they can access wherever they are, rather than only in the airport or the hotel lobby.

Gardner: JP, what's wrong with the way some of the other vendors -- combinations of hardware vendors and service providers -- have tried to tackle this problem? Have they been using the wrong tools? Have they had the wrong philosophy? Why has this been so long in coming, and what's the alternative that Kapow and folks like you are putting together as solutions?

Finnell: Before addressing that question, Dana, I'd like to go back to what Stefan was saying about use cases in airports, for example. We saw that in a use case for a major bank. This was a unique problem: a process that automated the capture of credit card data -- credit card applications in particular.

You see these kiosks in airports, stadiums, and shopping malls. In the airport, there is really no power and no connectivity. There's more of that today, but in football stadiums and shopping malls, it's still very hard to find a laptop solution that has power for eight hours and broadband connectivity. That was another unique use case, where there was a need for visibility and automation.

Gardner: I'd like to add to that, too. It seems that there's a behavioral shift as well. The more people use smart phones, the more they're used to doing their email through a hand-held device. They cross this barrier into an always-on mentality, and they can't take time to boot up, set up, and charge the battery for a full-fledged PC experience. The expectation among people who adopt this always-on activity is that they want their data instantly, wherever they are, whenever they need it.

Consumers Driving the Need


Yu: Dana, that's a great point. Consumerization is an interesting market dynamic that is really driving more need for mobile apps. We, as consumers, are being wowed by the iPhone applications, the Facebook applications, and things that we can do in our private lives and in the social networking context.

When we come into the business world, we demand the same type of tools, the same type of access, and the same type of communication -- and we just don't have that today. What we see is the line-of-business knowledge worker putting a lot of pressure on IT. IT tries to respond to this, but dealing with the old traditional methods of technical requirements, business cases and things like that, just doesn't lend itself to quick, agile, iterative, perpetual-beta types of mobile application development.

Gardner: So, we have this growing dissonance between the expectations of the individual, the ubiquity of the mobile device and people's comfort level with it, and then the older approach and some of the solutions that have been attempted for mobile delivery which seem to be extremely expensive and cumbersome. JP, again, what has been wrong with the standards of the old methods?

Finnell: I wouldn't say it's wrong. I'd say it's incomplete. The approaches of these large platform vendors -- and I am a strategic partner of several of them -- aren't strong when it comes to agility, prototyping, and being able to accommodate this real-time, iterative application-development approach. That's really where Kapow shines.

Gardner: I've spoken to a number of developers over the years and they've likened this mobile issue to an onion where with every layer that you peel back, you think you're getting closer to the solution, but you just keep digging down, and there are more variables and more hurdles. Eventually, the cost and the delays have dissuaded people from pursuing these types of activities.

Stefan, what is it about Kapow that should help people become more engaged and actually look forward to developing in the mobile tier?

Andreasen: The answer is very simple. It's because we work in the world that they already know. If you want a mobile application, if you want agility, you want it in the world of applications that you're already working with.

If you're already opening your laptop and working with data, we give you that exact same experience on the mobile phone. So, it's not that you have to think, "What can I use this for?" It's about taking what you're already doing and doing it in a more agile and mobile way. That's what's very appealing. Business workers get their data and their applications their way on the mobile phone, and basically, it makes them more effective at what they're already doing.

Yu: Dana, the metaphor that comes to mind for me is not an onion; it's really a baseball diamond. When you look at Sybase and other independent software vendors (ISVs) that are selling platform and infrastructure, there are huge investments you have to make.

To me, it's almost as if you're looking for that home-run hitter, that Mark McGwire. I won't say Barry Bonds anymore. There's a place to go for the home run, for that large global enterprise deployment. But with mobile apps, what we're seeing with our customers is that they want to hit singles.

They want to be able to meet the demands of a line-of-business department and get something into its hands -- the 80/20 rule applies -- and then gain some experience and develop best practices and lessons learned about how to iterate and roll out the next one.

I think Stefan is going to elaborate, when we talk about Audi, but Audi literally rolled out four mobile apps within the first week of implementing Kapow.

Gardner: Let's get into the actual solution. We want to solve these mobile data access problems. We're writing directly to XHTML. You refer to this as extract, transform, and load (ETL), and then extension, of data for Web data services. Help me understand technically what it is that Kapow is providing here.

From Laptop to Mobile

Andreasen: The best way to describe it is with an example. This is actually a real use case. Let's say I'm the CEO of a big network-equipment manufacturer. I go to the airport and open up my laptop to see the latest sales figures. I have these applications where I can see sales data, performance, market changes, and so on.

What's unique with Kapow is that you can go to the developers and say, "Hey, look at this. This is what I want on my mobile phone." And, they can get the data from the world of the browser, turn it into standard application programming interfaces (APIs), and deliver it to any mobile device.

Just to give you an example of what we did there: with three hours of work, we developed a mobile XHTML application for the BlackBerry that gave the CEO the dashboard he needed. That shows the power of Kapow right there. The alternative approach would have been three months of development and probably $150,000 of cost.
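
Kapow's own tooling is a visual IDE, so the following is only a generic sketch of the underlying pattern -- scrape an existing browser application and re-expose it as a standard feed -- written with ordinary Python libraries (requests, BeautifulSoup, Flask). The intranet URL and CSS selectors are hypothetical, and this is not Kapow's actual product API.

```python
# Generic sketch of the "Web page in, standard API out" pattern that a
# product like Kapow automates. URL and selectors are hypothetical; a
# real deployment would also handle authentication and error cases.
import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/sales")
def sales_feed():
    html = requests.get("http://intranet.example.com/sales-dashboard").text
    soup = BeautifulSoup(html, "html.parser")
    rows = soup.select("table#sales tr")[1:]   # skip the header row
    data = [{"region": row.select_one(".region").get_text(strip=True),
             "revenue": row.select_one(".revenue").get_text(strip=True)}
            for row in rows]
    return jsonify(sales=data)                 # a mobile-friendly JSON feed

if __name__ == "__main__":
    app.run()
```

Once the feed exists, any browser-enabled handset, or any mobile platform that consumes standard APIs, can render it.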

Gardner: What's required in the handsets to be able to access what you're describing?

Andreasen: Handsets today are getting more and more browser-enabled. So, of course, if you have a browser-enabled phone, it's very easy to do this. You can write just in XHTML, as you mentioned. But, a lot of companies already have a mobile infrastructure platform.

Because our product turns the applications into standard APIs, standard feeds, it works with any mobile platform and with the devices they support. You basically get the best of both worlds.

Gardner: How do we get over the hurdle of applications that were developed for a browser on a full-blown PC, with quite a bit of visual graphics and imagery, when we want to boil that down to text and numbers? What is it that you bring to the table to solve that problem?

Andreasen: We recently had a webinar, and we asked what the biggest challenges people have are. The number one challenge that came out of it was standard access to data, and that's exactly the problem we solve. We allow you to very, very quickly -- almost as quickly as it would take to browse an application once -- turn an application into a standard API. Then, you can take it from there to your mobile phone or your mobile applications.

Gardner: People, of course, can deploy with virtual private networks (VPNs) and use a variety of secure sockets layer (SSL) or other authentication mechanisms, so that this data and its delivery to the mobile tier remain secure, and access privileges are maintained.

Andreasen: Exactly. We basically leverage the security mechanisms already in place. The benefit with Kapow is that you don't have to rewrite anything or get any new infrastructure. You just use what you already have, because you're working with the data you already have. You just work with it in the mobile way you want, and we allow you to do that.

Yu: What's powerful about Kapow is that we have an integrated development environment (IDE) that basically allows IT architects to service-enable anything with a Web interface, whether it's a homepage or an application. The power of that is to bring the knowledge worker or line-of-business manager together with the IT person to develop the business and technical requirements in real time.

This supports the perpetual-beta development of mobile applications, where you don't have to go through months and months of planning cycles, because we know that in the mobile world, once one, two, or three months have gone by, the business has changed. So, as Stefan was saying, the ability to develop data applications for mobile in a matter of hours is powerful.

Gardner: Let's go to JP again. Give us a sense of what types of content and data have been the first to be deployed and delivered in such a fashion. What sorts of developers are the most ready to start exploiting these capabilities?

Funding Requires Business Case

Finnell: Dana, we're not seeing most projects get funded. The traction today is where the projects are getting funded, and projects don't get funded unless there is a business case. The best business cases are those where a business process has already been defined and needs to be automated. Typically, those are the field-based types of processes we're seeing.

So, I'd say the field-force automation projects -- utilities or direct sales agents -- are the areas where I'm seeing the most investment today at a departmental level.

Also, to echo what Ron was saying, you need to go through that prototyping or iterative phase. For example, we had several hundred utility technicians in the field. Initially, we designed the screens to scroll down. An alternative user interface (UI) was to have a screen for each question: once they answered the question, they hit the next screen.

Unlike a pure Web application, where you have a scroll bar and you scroll down to answer every one of 10 questions on a page, the technicians much preferred one question per page, because of the form factor. That was only discovered as a result of the prototyping. So, that's another example.

Andreasen: And it's a good example of exactly what Kapow can do. If you have an existing Web-based application with 10 questions on one page, you take our product, pull the application into our visual IDE, and turn it into an API, a service-oriented interface. Then, you can put a new UI on that, one that asks one question at a time and solves exactly the problem JP is referring to.
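
As a rough sketch of that re-skinning step: once the original form is behind an API, a mobile front end can serve one question per screen. The questions and routes below are hypothetical placeholders.

```python
# Sketch: re-skinning a 10-question Web form as one question per screen,
# to suit the handset form factor. Questions and routes are hypothetical;
# answers would be posted back to the service-enabled original form.
from flask import Flask

app = Flask(__name__)
QUESTIONS = [f"Question {i}" for i in range(1, 11)]  # placeholder text

@app.route("/q/<int:step>")
def question(step):
    text = QUESTIONS[step - 1]
    nxt = (f'<a href="/q/{step + 1}">Next</a>'
           if step < len(QUESTIONS) else "Done")
    # One small page per question suits the handset form factor.
    return f"<html><body><p>{text}</p>{nxt}</body></html>"

if __name__ == "__main__":
    app.run()
```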

Gardner: This strikes me as something that's going to be even more important as organizations adopt more software-as-a-service (SaaS) applications, and as more SaaS providers deliver their applications for both a PC browser experience and a stripped-down mobile one.

We've already started to see that on the social networking and consumer side for users of iPhones or iPod Touches. It's going to be interesting to see whether a mobile field warrior will be able to access this information through the SaaS provider when they wouldn't be able to through the on-premises applications delivered by the enterprise.

It's almost as if the SaaS world is going to drive the need for more of these types of interfaces in the enterprise environment. Does that make sense, Ron?

Yu: Yes, absolutely. Once again, there's this whole notion of completeness that JP mentioned earlier. The SaaS vendors, the Salesforce.coms are going to be focusing on building out their applications. But, at a company level, at a departmental level, we're going to have unique requirements that Salesforce will not be able to develop and deploy in their application in real time.

Yes, they have the AppExchange, and you have access to Force.com and can write your own apps, but, once again, you're talking about software development. With Kapow, we completely leapfrog the need to actually write code. Because of the visual-programming IDE tool, you can work, as Stefan was saying, at the business-logic level. You work with the interfaces you know to service-enable your data and roll out apps in real time.

We see this as enabling and empowering the IT organization to take control of its destiny today, as opposed to waiting for funding and cumbersome development and planning processes to scope out a project and then write code.

Gardner: Because of Kapow's heritage, and the fact that it's been doing Enterprise 2.0 activities for a while now, it seems that, as developers become attuned to thinking for the mobile tier, they can, in a sense, develop the application once and have it appear anywhere. Is that starting to happen, JP, in the market?

Juggling Mobile Choices


Finnell: One thing that's unique about mobility is the degree of fragmentation. As I mentioned before, there are a lot of choices you have to juggle -- not just the device, but the platform. You have Windows Mobile, Symbian, UIQ -- which I understand filed for bankruptcy today -- RIM, and Palm. So, there are a number of device platforms, and then you have development options: mobile browser versus smart client, J2ME versus .NET.

Stefan and Ron could probably talk about some case studies they've been seeing in terms of write once, run anywhere.

Gardner: Let's look at this same question, but through the lens of case studies. Now, you've got users like Bank of America, CNET, Audi, Visa, and Intel. Tell us about some of these use cases, and whether there has been a write-once, run-anywhere story on mobile, as well as through traditional interfaces.

Andreasen: Let's talk about Audi. It's one of Kapow's largest customers. It's very Web-enabled. Actually, we see that most companies are getting Web-enabled. Audi has a big intranet with a lot of applications.

One application, for example, is for the manager on the assembly line. He can monitor where cars are in production, where they are on the assembly line, and their status. But he's walking up and down the assembly line, and his laptop is probably in a different office. So, going back and forth to work with his application is very cumbersome.

One of the first things we did for them, as Ron said earlier, was build four mobile apps in the first week. We took that intranet application and mobilized it, so that the assembly-line manager could stand right there in front of the car, pick up the phone, and access the entire application. This is an example of the same application existing both as a traditional browser application and as a mobile application.

The interesting thing here is that Kapow enables you to leverage what you already have, the Web browser application, and reuse and repurpose that into a mobile application in a very, very short time, as was just described.

You can take the equation further if you're building an entirely new application and you want output in both media. The key is first to get your data behind a standard interface and then build on that. That's where you use Kapow. Get the data into a standard interface, and then you can build it out for different media as needed.

Yu: Dana, would you like to hear about the iPhone app that we built for Gartner?

Gardner: By all means.

Andreasen: We just attended the Gartner Application Architecture, Development & Integration (AADI) Summit in December. They have a very neat website where you can go and check the agenda. You can also walk around with "the bible," this big book, and see what's going on.

Let's say I'm sitting in a presentation and think, "Wow, this is boring. What's going on right now that I'd rather see?" What you'd really like to do is take out your phone, click a button, and see the rooms and what's going on right now.

So, together with IBM's Rational team, we built a mobile version of the Gartner AADI agenda. Using Kapow, we turned the existing Gartner agenda website into a standard feed, and the Rational guys built an iPhone application on top of that. And we are promoting that.

It became a big hit at the show. All the Gartner people loved it. You could build your own virtual agenda. You could push a "Now" button when you were in a boring presentation, and you could walk somewhere else. We got all the benefits of mobility with just two or three hours of total work, and for thousands of people.

Yu: Dana, the most amazing thing about this is that Stefan and I had a conversation on Thursday evening in preparation for a mobile analyst meeting that we were going to be having at the show.

We said, "Wouldn't it be great to walk into that briefing with an iPhone app?" And Stefan said, "Great." So, that evening, he spent an hour and a half creating this service feed, and he contacted our partner at IBM. In an hour and a half, they used their tool and developed that application. It was just phenomenal. Stefan, why don't you talk about the interactions you've had with the IT folks at Gartner?

Andreasen: The IT folks at Gartner, of course, were amazed that we could actually produce this, and they could see how popular it became. I ended up having a meeting with them, and we're talking with them right now. Actually, if anybody wants to see this application, it's running live on our website under the Mobile Solutions page. So, please feel free to go there and check it out.

Same-Day Development

Yu: This is really a perfect example of how the enterprise in 2009 will operate -- the ability to wake up one day, have a line-of-business or IT person conceive of a mobile application, and deploy it within the same day. It's powerful and, hopefully, we'll see more examples of what we did for Gartner within global enterprises.

Gardner: This also raises another issue, which probably is sufficient for an entirely separate podcast, and that's the juxtaposition of this sort of data with location and positioning services. Perhaps at a conference not only would you want a room number, but you might be able to get directions to it and be able to juxtapose these services.

Quickly, to anyone on the panel: what should enterprises now consider about not only delivering this data out to a mobile device, but juxtaposing it with location services, and what could that offer?

Andreasen: I think there's a more fundamental question: can we leverage different sources of information in the same application? If we just go back to the Gartner example, I could pull out the name of the room, but I didn't have a map on the Gartner site. The hotel itself, of course, has a separate website with hotel information, maps, and everything.

We could actually use our product to service-enable that as well, combine the two, and get a new mash-up mobile application, where you leverage the benefits of multiple applications that couldn't even work together before. That's one answer to that question. You can now combine and mash up several applications and get the combined efficiency.
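
A rough sketch of that mash-up idea: join two independently service-enabled feeds -- a conference agenda and a hotel map -- on a shared key such as the room name. The feed URLs and field names are hypothetical stand-ins.

```python
# Sketch: mashing up two feeds that never knew about each other -- a
# conference agenda and a hotel floor map -- joined on room name.
# URLs and fields are hypothetical stand-ins for service-enabled feeds.
import requests

def sessions_happening_now():
    agenda = requests.get("http://example.com/feeds/agenda").json()
    rooms = requests.get("http://example.com/feeds/hotel-map").json()
    directions = {r["room"]: r["directions"] for r in rooms}
    return [{**s, "directions": directions.get(s["room"], "ask the front desk")}
            for s in agenda if s["status"] == "now"]

for s in sessions_happening_now():
    print(s["title"], "->", s["directions"])
```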

Gardner: That strikes me as the real return-on-investment (ROI) benefit, because not only are you justifying the cost of delivering the data, but you are able to then use that data for much higher productivity, when you do these as a mash-up. That's really important in our economic climate -- basically 2+2 = 6 -- and that's what I think we're talking about here.

Andreasen: Exactly. Today, people have to look at different places on paper and combine things in their minds. That's what you can automate to create a lot of efficiency.

Gardner: We're almost out of time. What does the future portend for Kapow and some of these mobile services? Is there a road map for improving the breadth and scope of the solution? Once again, I'll throw this out to anyone on the panel.

Andreasen: There is one thing we're doing, as you mentioned earlier, with SaaS. We launched Kapow OnDemand half a year ago, and we can see that it's driving a lot of mobile business. So, now you can use our product not only as an on-premises solution, but also in the cloud. We see supporting that as a major driver in our road map.

Yu: The other thing is that I think it's pretty clear now that, from our perspective and from JP's perspective, there are no clearly defined mobile applications. We see the ISVs and IT organizations focused on security and infrastructure.

But, really, beyond email there hasn't been one killer app. I think that tells the story that every enterprise will have its own specific mobile apps to roll out. At Kapow, we will continue to mobile-enable IT organizations so they can roll out applications as quickly as they can conceive of them.

The other part of that is that we will continue to focus on partnering. At Kapow, we will not be a mobile ISV, per se, but will continue to partner with the platform providers to help drive more adoption of mobile.

Gardner: JP hit on this a little earlier when he focused on the business process. Perhaps we're not going to see mobile killer apps or killer mobile apps, but killer business processes that need to have a mobile element to them.

Finnell: That's right, and there is something that I call "strategy emerging from experience." The best way to get adoption in your enterprise is to rapidly iterate at the departmental level, gain experience that way, create centralized or coordinated governance that captures the lessons from those efforts, and then become more strategic.

What I am seeing in 2009 is a good space for gaining experience. Almost every enterprise today has at least one department doing something around mobile. One way to make that more strategic is to be more iterative in your approach.

Gardner: Well, great. We've been talking about delivering more content and data out to a mobile tier, but without some of the pain, expense, and complexity that's been traditional in these activities. We've been joined by a panel including JP Finnell, CEO of Mobility Partners. Thanks so much for joining us.

Finnell: Thank you.

Gardner: We also had Stefan Andreasen, founder and chief technology officer at Kapow Technologies. Thank you, Stefan.

Andreasen: Thank you, Dana.

Gardner: Also Ron Yu, head of marketing for Kapow. I appreciate your input, Ron.

Yu: Thank you, Dana, I enjoyed the discussion.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Listen to related webinar. Sponsor: Kapow Technologies.

Transcript of BriefingsDirect podcast on new ways to deliver data and applications to mobile workers using Kapow Technologies solutions. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.

Wednesday, January 07, 2009

Webinar: Analysts Plumb Desktop as a Service as the Catalyst for Cloud Computing Value for Enterprises and Telcos

Transcript of a recent webinar, produced by Desktone, on the future of cloud-hosted PC desktops and their role in enterprises.

Listen to the webinar here.

Jeff Fisher: Hello and welcome, everyone. Thanks so much for attending this Desktone Webinar Series entitled, “Desktops as a Service, The Evolution of Corporate Computing.” I’m Jeff Fisher, senior director of strategic development at Desktone, and I will also be the host and moderator of the events in this series.

We are really excited to kick off this series of webinars with one focused on cloud-hosted desktops, and equally excited and privileged to have a wonderful panel with us, starting with Rachel Chalmers from the 451 Group, Dana Gardner from Interarbor Solutions, and Robin Bloor from Hurwitz and Associates.

For those of you who don’t know, Rachel, Dana and Robin are really three of the top minds in this emerging cloud-hosted desktop space. It’s going to be great to see just what they have to say about the topic and we’ll talk to them just a little bit later on.

Before we do that, I want to spend a little bit of time talking about Desktone’s vision and definition of cloud-hosted desktops and, most importantly, about why we believe that virtual desktops, as opposed to virtual servers, are really going to kick-start adoption of cloud computing within the enterprise.

Desktone is a venture-backed software company. We're based outside of Boston in a town called Chelmsford. We raised $17 million in a Series A round of funding in summer 2007. Highland Capital and Softbank Capital led that round. We also got an investment at that time from Citrix Systems.

We’re currently about 35 full-time employees and have 25 full-time outsourced software developers. The executive team has experience leading desktop virtualization vendors such as Citrix, Microsoft and Softricity, and also experience running Fortune 500 IT organizations at Schwab and Staples.

We have a number of technology partners in the area of virtualization software, servers, storage and thin clients, and some key service provider partnerships with HP, IBM, Verizon and Softbank. What’s really important to note here is that Desktone actually goes to market through these service provider partners.

We don’t host virtual desktops ourselves, but rather the desktops as a service (DaaS) offering that we enable is provided through service provider partners. The only services that we host ourselves, or are offered directly, are trial and pilot services.

We built a platform called the Desktone Virtual-D Platform. It’s the industry’s first virtual desktop hosting platform specifically designed to enable desktops to be delivered as an outsourced subscription service.

And what’s important to understand is that this platform is designed specifically for service providers to be able to offer desktop hosting in the same way that they offer Web hosting or e-mail hosting.

We architected the Virtual-D Platform from the ground-up with that mission in mind. It’s a solution for running virtualized, yet genuine, Windows client environments, whether XP or Vista, in a service provider cloud. We’ll talk more about how we define a service provider cloud in a bit.

It leverages a core virtual desktop infrastructure (VDI) architecture, that is, server-hosted desktop virtual machines, which are accessed by users through PC remoting technologies like remote desktop protocol (RDP), for example. The Virtual-D Platform enables cloud-scale and multitenancy, which are two of the key things that a service provider needs to have to be able to be in this business.

Without getting into too much detail, it’s not really viable to take an enterprise VDI architecture or a product that’s been architected to deliver enterprise VDI and just port it over for service provider use.

It’s not viable for service providers to manage individual instances of VDI products. They really need a platform to manage this efficiently and effectively. The other key thing that the Virtual-D Platform does is separate the responsibilities of the user, the enterprise desktop administrator, and the service provider hosting operator, so that each of these constituents has their own view into the system, through a Web-based interface of course, and can do what they need to do without seeing functions and capabilities that are only really required by some of the other groups.

So, it’s a very, very different technology approach, although the net result appears in certain ways to be similar to some of the VDI platforms that you probably know pretty well.

So, that’s Desktone in a nutshell.

Promise of the cloud

Let’s get to the promise of the cloud. Clearly, everyone is talking about cloud computing. You can’t look anywhere within IT and not hear about it. It’s amazing to see it surpassing even the frenzy around virtualization. In fact, most of the conversations people are having today are around virtualization and how it can take place in the cloud. Everyone wants to focus on all the benefits, including anytime/anywhere access and subscription economics.

However, like any other major trend that unfolds in IT, there are a number of challenges with the cloud. When people talk about cloud computing with respect to the enterprise, in most cases they’re talking about virtualizing server workloads and moving those workloads into a service provider cloud.

Clearly, that shift introduces a number of challenges. Most notable is the challenge of data security. Because server workloads are very tightly-coupled with their data tier, when you move the server or the server instance, you have to move the data. Most IT folks are not really comfortable with having their data reside in a service provider or external data center.

For that reason Desktone believes that it’s actually going to be virtual desktops, not servers, that are the better place to start and are going to be what jump starts this whole enterprise adoption of cloud computing.

The reason is pretty simple. Most fixed corporate desktop environments -- those are desktops that have a permanent home within your enterprise -- probably already have their application and user data abstracted away from the actual desktop. The data is not stored locally. It's stored somewhere on the network, whether it's security credentials within Active Directory (AD), home drives that store user data, or the back ends of client-server applications. All the back-end systems run within your data center.

When you shift that kind of environment to the cloud, although the desktop instance has moved, the data is still stored in the enterprise data center. Now, what you are left with are virtual desktops running in a highly secure virtual branch office of the enterprise. That’s how we like to refer to our service-provider partners’ data centers, as secure virtual branch offices of your enterprise.
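
A tiny illustration of that separation, with hypothetical endpoint names: the desktop instance lives in the provider's data center, while every data endpoint it depends on still resolves back into the enterprise.

    # Hypothetical illustration only: the VM moves to the provider's
    # cloud, but authentication, user data, and app back ends stay put.
    virtual_desktop = {
        "instance_location": "provider-datacenter-east",
        "authentication": "ldap://ad.corp.example.com",     # AD stays home
        "home_drive": r"\\files.corp.example.com\home",     # user data stays
        "app_backends": ["https://crm.corp.example.com"],   # back ends stay
    }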

In addition, if you virtualize and centralize physical PCs, which used to reside in remote branch offices that have limited or no physical security, you’ve actually increased the security of the environment and the reasons are clear. The PC can no longer walk off, because it doesn’t have a physical manifestation.

Because users are interacting with their virtual desktops through PC remoting technology, such as RDP, you have control as an administrator over whether or not they can print to or through their access device, and whether or not they can get access to USB devices, such as key fobs that they plug into that device.

So, you can control the downstream movement of data from the virtual desktop to the edge. You can also control the upstream movement of data from the edge to the virtual desktop and stop malware and viruses from being introduced through USB keys as well.
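
As one concrete, hedged example of what those controls look like at the protocol edge, standard Microsoft RDP client (.rdp) files expose redirection settings along these lines. The parenthetical annotations are explanatory, not part of the file format, and the exact knobs a given hosting provider exposes will vary by deployment.

    redirectprinters:i:0    (no printing to or through the access device)
    drivestoredirect:s:     (empty value: no local drives mapped upstream)
    redirectclipboard:i:0   (no copy/paste between the edge and the desktop)
    redirectcomports:i:0    (no serial/COM devices)
    redirectsmartcards:i:1  (still allow smart-card authentication)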

Those are some really nice benefits. We have an animation that illustrates this, showing a physical desktop PC, which accesses its data from the enterprise data center -- again, whether it's AD, user data, or the actual apps themselves. Then the actual PC is virtualized and centralized into a service-provider cloud, at which point it's accessed from an access device, whether that's a thin client, a thick client (a PC that's been repurposed to act like a thin client), a dumb terminal, or a laptop.

The key message here is that, although the instance of the desktop is moved, the data does not have to move along with it. Through private connectivity between the enterprise and the service provider, it's possible to access the data from the same source.

Service-provider cloud

The other interesting thing is this notion of the service-provider cloud: it actually can traverse both the enterprise and the service-provider data centers.

So, depending on the use case, service providers can either keep the virtual infrastructure, and the racks powering it, in their own data center or, in certain cases, put the physical infrastructure within the enterprise data center -- what we call the customer-premises equipment model. The most important thing is that this doesn't break the model.

There is flexibility in the location of the actual hosting infrastructure. Yet, no matter where it resides, whether it’s in a service provider data center or an enterprise data center, the service provider still owns and operates it and the enterprise still pays for it as a subscription.

Let’s touch on just a couple of other benefits and then we’ll jump into talking with our panel. The Desktone DaaS cloud vision preserves the rich Windows client experience in the cloud. This is true, blue Windows -- XP or Vista -- not another form of Windows computing whether it’s shared service in the form of Terminal Services and/or browser-based solutions like Web OSes or webtops.

That’s important, because most enterprises have the Windows apps that they need to run and they don’t want to have to re-architect and re-engineer the packages to run within a multi-user environment. They certainly don’t want a browser-based environment where they can’t run those apps.

In the same vein, this sustains the existing enterprise IT operating model, while introducing cloud-like properties so that IT desktop administrators can continue to use the same tools and processes and procedures to support the virtual desktops in the cloud as they have done and as they will continue to do with their physical desktops.

We talked about the notion of separating service-provider and enterprise responsibilities. It's really important to be able to draw a line and say that the service provider is responsible for the hosting infrastructure, yet the enterprise is still ultimately responsible for the virtual desktops themselves -- the OS images, the patching, the licensing, the applications, the application licensing, and so on.

And then, finally, this notion we mentioned of combining both on- and off-premises hosting models is important. I think most of the leading analyst firms agree that enterprises are not going to be able to go from a fully enterprise data center model to a full cloud model in one step. There has got to be some common ground in between and, again, the fact that this model supports both is important.

Now, let’s turn to our panel and see what they have to say. We’ll get started with Rachel Chalmers who is research director of infrastructure management at the 451 Group. She’s led the infrastructure software practice for the 451 Group since its debut in April 2000.

She’s pioneered coverage on services-oriented architecture (SOA), distributed application management, utility computing and open-source software, and today she focuses on data center automation server, desktop, and application virtualization. Rachel, thank you so much for being with us today.

Rachel Chalmers: You’re very welcome. It’s good to be here.

Fisher: Rachel, I actually credit you with being the first analyst to really put cloud-hosted desktop virtualization on the map, and the reason is that you've written two really expansive and excellent reports on desktop virtualization. The first one you released in the summer of 2007; the follow-up was released this past summer of '08.

What I’ve really found interesting was that in the updated version you actually modified your desktop virtualization taxonomy to include cloud-hosted desktops as a first-class citizen, so to speak, alongside client-hosted desktop virtualization and server-hosted desktop virtualization. Of course that begs the question, what was so compelling about the opportunity that made you do that?

Taxonomy is key


Chalmers: Taxonomy is the key word. For those who aren’t familiar with The 451 Group, we focus very heavily on emerging and innovative technology. We do a ton of work with start-ups and when we work with public companies, it’s from the point of view of how change is going to affect their portfolio, where the gaps are, who they should buy. So we’re very much the 18th century naturalists of the analyst industry. We’re sailing around the Galapagos Islands and noting intriguing differences between finches.

I know we described cloud-hosted desktop virtualization as one of these very constructive differences between finches. When I sat down and tried to get my arms around desktop virtualization, it was just at the tail end of 2007. As you'll recall, just as it's now illegal for a vendor to issue a press release without describing their product as a cloud-enablement product, in 2007 it was illegal to issue a press release without describing a product as virtualization of some kind.

I was tracking conservatively 40 to 50 companies that were doing what they described as desktop virtualization and they were all doing more or less completely different things. So, the first job as a taxonomist is to sit down and try and figure out some of the broad differences between companies that claim to be doing identical things and claim to deliver identical functionality. One of the easiest ways to categorize the true desktop virtualization guys, as opposed to the terminal services or application streaming vendors, was to figure out exactly where the virtual machine (VM) was running.

So I split it three ways. There are three sensible places to run a desktop virtual machine. One is on the physical client, which gives you a whole bunch of benefits around the ability to encrypt and lock down a laptop and manage it remotely. Another is to run it on the server, which is the tried-and-tested VMware VDI or Citrix XenDesktop method. That's appropriate for a lot of use cases, but when you run out of server capacity or storage in the server-hosted desktop virtualization model, a lot of companies would like elastic access to off-site resources.

This is particularly appropriate, for example, for retailers who see a big balloon in staffing -- short-term and temporary staffing around the holiday season, although possibly not this year -- or for companies that are doing things offshore and want to provide developer desktops in a very flexible way, or in education, where institutions get big summer classes, for example, and want to fire up a whole bunch of desktops for their students.

This kind of elastic provisioning is exactly what we see on the server virtualization side around cloud bursting. On the desktop side, you might want to do cloud bursting. You might even want to permanently host those desktops up in the cloud with a hosting provider, and you want exactly the same things that you want from a server cloud deployment. You want a very, very clean interface between the cloud resources and the enterprise resources, and you want very, very granular chargeback and billing.
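
A toy sketch of that elastic, chargeback-friendly provisioning, assuming a flat per-desktop-hour subscription rate; the rate, names, and numbers here are hypothetical illustrations, not any vendor's pricing.

    import math

    COST_PER_DESKTOP_HOUR = 0.30  # assumed flat subscription rate

    def desktops_needed(active_users: int, headroom: float = 0.1) -> int:
        """Provision enough cloud-hosted desktops for demand plus headroom."""
        return math.ceil(active_users * (1 + headroom))

    def hourly_charge(active_users: int) -> float:
        """Granular chargeback: bill only for what is provisioned this hour."""
        return desktops_needed(active_users) * COST_PER_DESKTOP_HOUR

    # Seasonal retail burst: 900 staff in November vs. 200 in February.
    print(hourly_charge(900))  # burst capacity, paid for only while used
    print(hourly_charge(200))  # shrinks back -- no stranded hardware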

And so, we see cloud-hosted desktop virtualization as a special case of server-hosted desktop virtualization. Really, Desktone has been the pioneer in defining what that interface should look like, where the enterprise data should reside, where AD, with its authentication and authorization functions, should reside, and what gets handled by the service provider and how that gets handled by the service provider.

Desktone isn’t the only company in cloud-hosted desktop virtualization, but it’s certainly the best-known and it’s certainly done the best job of articulating what the pieces will look like and how they’ll work together.

Fisher: Great.

Chalmers: It’s a very impressive finch.

Fisher: Always appreciate it. Dana and Robin, do you have any additional comments on what Rachel had to say?

New era in compute resources

Dana Gardner: Yes, I think we're entering a new era in how people conceive of compute resources. To borrow Rachel's analogy, a lot of these finches have been around, but there hasn't been much of an environment where they could thrive. What's happening now is that organizations are starting to re-evaluate the notion that a one-size-fits-all PC paradigm makes sense.

We have lots of different slices of different types of productivity workers. As Rachel mentioned, some come and go on a seasonal basis, some come and go on a project basis. We're really looking at slicing and dicing productivity in a new way, and that forces the organization to re-evaluate the whole notion of application delivery.

If we look at the cost pressures that organizations are under, recognizing that it’s maintenance and support, and risk management and patch management that end up being the lion’s share of the cost of these systems, we’re really at a compelling point where the cost and the availability of different alternatives has really sparked sort of a re-thinking.

And a lot of general control, management, security, and risk-avoidance issues require organizations to increasingly bring more of their resources back into a server environment.

But, if you take that step with virtualization and you look at different ways of slicing and dicing your workers, your users -- if you can virtualize internally -- then we might as well take the next step and say, "What should we virtualize externally?" "Who could do this better than we can, at a scale that brings the cost down even further?"

This is particularly relevant if they’re commodity level types of applications and services. It could be communications and messaging, it could be certain accounting or back office functions. It just makes a lot of sense to start re-evaluating. What we haven’t seen, unfortunately, is some clear methodologies about how to make these decisions and boundaries inside of organizations with any sort of common framework or approach.

It’s still a one-off company by company approach -- which workers should we keep on a full-fledged PC? Who should we put on a mobile Internet device, for example? Who could go into a cloud-based applications hosting type of scenario that you’ve been describing?

It’s still up in the air and I’m hoping that professional services and systems integrators over the next months and years will actually come up with some standard methodologies for going in and examining the cost-benefit analysis, what types of users and what types of functions and what types of applications it makes sense to put into these different finch environments.

Fisher: Absolutely. I couldn’t agree more and I’ve always been one who talks about use cases. It all comes down to the use cases.

The technology is great, and the innovation is great, but especially in the case of desktop usage you really have to figure out what people are doing, what they need to do, and what they don't need to be doing at work but are currently doing. That's the whole notion of how the consumerization piece fits in, and how personal life melds with business life. You can say that this person doesn't need to do that, but if they're doing it today, you need to figure out how to make that work and take it into account.

So, I agree with you. It’d be fantastic to get to a world where there was just a better way to have better knowledge around use cases and which ones fit with which delivery models.

Chalmers: I think that’s a really crucial point. Just as server and work cloud virtualization have transformed the way we can move desktops and servers around, I see a lot of really fascinating work being done around user virtualization.

Jeff, you talked a lot about the issue of having user data stored separately from the dynamic run-time data. I know you’ve done a lot of work with AppSense within Merrill Lynch. There’s a group of companies -- AppSense, RTO Software, RES, and Sansa -- that are all doing really interesting work around maintaining that user data in a stateful way, but also enabling IT operators to be able to identify groups of users who may need different form factors for their desktop usage and for their work profile.

Buy side perspective

Gardner: We’ve been looking at this from the perspective on the buy side where it makes a lot of sense, but there’s also some significant momentum on the sell side. These organizations that are perhaps traditional telcos, co-location, or hosting organizations, cloud providers or some ecology of providers that actually run on someone else’s cloud but have a value-added services capability of some sort.

These are on the sell side and they’re looking for opportunities to increase their value, not just to small to medium-sized businesses but to those larger enterprises. They’re going to be looking and trying to define certain classes of users, certain classes of productivity and work and workflow, and packaging things in a new and interesting way.

That’s the next shoe to fall in all of this: the type of customer that you have there at Desktone. It’s incumbent upon them now to start doing some packaging and factoring the cost savings, not just on an application-on-application basis but more on a category of workflow business process work and do the integration on the back-end across.

Perhaps that will involve multiple cloud providers, multiple value-added services providers, and they then take that as a solution sell back into the enterprise, where they can come up with a compelling cost-per-user-per-month formula. It’s recurring revenue. It’s predictable. It will probably even go down over time, as we see more efficiency driven into these cloud-based provisioning and delivery systems.

So, there’s a whole new opportunity for the sellers of services to package, integrate, add value, and then to take that on a single-solution basis into a large Fortune 1000 organization, make a single sale, and perhaps have a customer for 10 or 15 years as a result.

Chalmers: It is a tremendously exciting opportunity for our managed-hosting provider clients. It’s the dominating topic of conversation at a lot of the events that we run for that group. Traditionally, a really, really great managed hoster that delivers an absolutely fantastic service will become the beloved number one vendor of choice of the IT operator.

If that managed hosting provider can deliver the same quality of service on the desktop, then they will be the beloved number one vendor of everybody up to and including the CIO and the CEO. It’s a level of exposure they’ve just never been able to aspire towards before.

Robin Bloor: I think that's probably right. One of the things that's really important about what's happening here with the virtualization of the desktop is the very simple fact that desktop costs have never been well under control. The interesting thing, with the end users we've been talking to earlier this year, is that when they look at their user populations, they normally come to the conclusion that something like 70 or 80 percent of PC users are actually using the PC in a really simple way. Virtualizing those particular units is an awful lot easier to contemplate than virtualizing the sophisticated population of heavy workstation users and so on.

With the trend that’s actually in operation here, and especially with the cloud option where you no longer need to be concerned about whether your data center actually has the capacity to do that kind of thing, there’s an opportunity with a simple investment of time to make a real big difference in the way the desktop is managed.

Fisher: I totally agree. Thanks, Robin. All right. Let’s shift gears and talk to Dana Gardner. Dana is the president of Interarbor Solutions, and is known for identifying software and enterprise infrastructure trends and new IT business growth opportunities.

During the last 18 years he’s refined his insights as an industry analyst and news editor, and lately he’s been focused on application development and deployment strategies and cloud computing.

So Dana, you’ve been covering us for a while on your blog. For those of you who don’t know, Dana’s blog is called BriefingsDirect. It’s a ZDNet blog. You’ve covered our funding and platform launch, and some of our partner announcements, and we’ve had some time to sit down as well and chat.

In a posting this summer you wrote about Pike County -- a school district in Kentucky where IBM has successfully sold a 1,400-seat DaaS deployment. That's something we're going to dive deeper into on a couple of the webinars in this series.

You’ve stated a broad affection for the term “cloud computing,” and all that sticks to that nowadays will mean broad affection, too, for DaaS. Can you elaborate on that?

Entering transitional period

Gardner: Well, sure. As I said, we’re entering a transitional period, where people are really re-thinking how they go about the whole compute and IT resources equation. There’s almost this catalyst effect or the little Dutch boy taking his finger out of the hole in the dike, where the whole thing comes tumbling down.

When you start moving toward virtualization and you start re-thinking about infrastructure, you start re-thinking the relationship between hardware and software. You start re-thinking the relationship between tools and the deployment platform, as you elevate the virtualization and isolate applications away from the platform, and you start re-thinking about delivery.

If you take the step toward terminal services and delivering some applications across the wire from a server-based host, that continues to tip this a little bit toward, “Okay, if I could do it with a couple of apps, why not look at more? If I could do it with apps, why not with desktop? If I can do it with one desktop, why not with a mobile tier?”

If I’m doing some web apps, and I have traditional client-server apps and I want to integrate them, isn’t it better to integrate them in the back-end and then deliver them in a common method out to the client side?

So we’re really going through this period of transformation, and I think that virtualization has been a catalyst to VDI and that VDI is therefore a catalyst into cloud. If you can do it through your servers, somebody else can do it through theirs.

If we’ve managed the wide-area network issues, if we have performance that’s acceptable at most of the application performance criteria for the bell curve of users, the productivity workers, we just go down this domino line of one effect after another.

When we start really seeing total costs tip as a result, the delta between doing it yourself and doing it through some of these newer approaches is just super-compelling. Now that we're entering an economic period where we're challenged on top-line and bottom-line growth, people are not going to take baby steps. They're going to be looking for transformative, real game-changing steps. If you can identify a class of users and use that as a pilot, and if you can find the right partners for the hosting -- and perhaps even a larger value-added services portfolio approach -- you start gaining trust. You start seeing that you can do IT at some level, but others can do it even better.

The cloud providers are in the business of reducing their costs, increasing their utilization, exploiting the newer technologies, and building data centers primarily with a focus on this level of virtualization and delivery of services at scale with performance criteria. Then, it really becomes psychology and we’re looking at, as you said earlier, the trust level about where to keep your data and that’s really all that’s preventing us now from moving quite rapidly into some of these newer paradigms.

The cost makes sense. The technology makes sense. It's really now an issue of trust, so it's not going to happen overnight. But with baby steps and the domino effect -- as you work toward VDI internally, and toward cloud with a couple of apps and certain classes of users -- before long that whole dike is coming down, and you might see only a minority of your workers doing things in the conventional client-server, full-PC mode of local run time and data storage.

I think we’re really just now entering into a fairly transformative period, but it’s psychologically gaining ground rapidly.

Fisher: Yes, definitely. Rachel, Robin, any thoughts on Dana’s comments?

Psychological issues


Chalmers: I think that’s exactly right and I think the psychological issues are really important, as Dana has described them. One of the huge barriers to adoption of earlier models of this kind of remote desktop-like terminal services has been just that they’re different from having a full, rich Windows user experience in front of you.

The example people keep returning to is the ability to have a picture of your kids as your desktop wallpaper. It seems so trivial from an IT point of view, but just the ability to personalize your own environment in that way turned out to be a major obstacle to adoption of the presentation servers in that model.

You can do that in a virtual desktop environment. You can serve that exact same desktop environment to the same employee, whether she's working from San Francisco or London. Because the VDI deployment model offers the same, yet better, experience to that employee, it becomes much easier to persuade organizations to adopt this model and the cost savings that come along with it.

So, we underplay the psychological aspects at our peril. People are human beings and they have human foibles, and technology needs to work around that rather than assuming that it doesn’t exist.

Bloor: Yes, I’d go along with that. What you’ve actually got here is a technology where the ultimate user won’t necessarily know whether they’ve got a local PC. Nowadays, you can buy devices where the PC itself is buried in the screen.

So, it’s like they may psychologically, in one way or another, have some kind of feeling of ownership for their environment, but if they get the same environment virtually that they would adopt physically, they’re not going to object. Certainly some of the earlier experiences that users have had is that problems go away. The number of desk-side visits required for support, when all you’ve got is a thin client device on the desktop, diminishes dramatically.

The user suddenly has the responsibility for various things that they would do within their own environment lifted completely from them. So, although you don't advertise it this way, there's actually a win for the user here.

Chalmers: That’s exactly right and fewer desktop visits, fewer IT guys coming around to restart your blue screen or desktop, that translates directly into increased productivity.

Fisher: Yes, and what we like to talk about at Desktone is just this notion of anytime, anywhere. It’s one thing to get certain limited apps and services. It’s another thing to be able to get your PC environment, your corporate persona everywhere you go.

If you need to work from home for a couple of days a week, or in emergency situations, it’s great to be able to have that level of mobility and flexibility. So, we totally agree.

Now let’s move over now to Robin Bloor. He's a partner at Hurwitz and Associates. He’s got over 20 years experience in IT analysis and consultancy and is an influential and respected commentator on many corporate IT issues. His recent research is focused on virtualization, desktop management, and cloud computing.

Robin, in your post about Desktone on your blog -- “have Mac will blog” -- a title I love -- you mentioned that you were surprised to see the DaaS or cloud value prop for client virtualization emerge this early on. You mentioned that you found our platform architecture diagram to be extremely helpful in explaining the value prop, and I just would like you to provide some more color around those comments.

Tracking virtualization

Bloor: Sure. I really came into this late last year and, in one way or another, I was looking at the various things that were happening in terms of virtualization. I'd been tracking the escalating power of PC CPUs and the fact that, by and large, in a lot of environments the PC is hardly used.

If you do an analysis of what is happening in terms of CPU usage, then the most active thing that happens on a PC is that somebody waves their mouse around or possibly somebody is running video, in which case the CPU is very active. But it became obvious that you could put a virtualized environment on a PC.

When I realized that people were doing that, I got interested in the way they were actually doing it, and there are a lot of things out there, if you actually look. It absolutely stunned me that a cloud offering became available earlier this year, because that meant somebody would have had to have been thinking about this two years ago in order to put together the technology that would enable an offering like that.

So, just look at the diagram and you can certainly see why, from the corporate point of view, if you're somebody that's running a thousand desktops or more, it's a problem. It's a problem in terms of an awful lot of things, but mostly it's a support issue and a management issue. When you get an implementation that involves changing the desktop from a PC to a thin client, and you don't put anything into the data center, it improves.

You’ve now got a situation where you don’t need cages in the data center running PC blades or running virtualized blades to actually provide the service. You don’t need to implement the networking stuff, the brokering capability, boost the networking in case it’s clashing with anything else, or re-engineer networks.

All you do is you go straight into the cloud and you have control of the cloud from the cloud. It’s not going to be completely pain free obviously, but it’s a fairly pain-free implementation. If I were in the situation of making a buying decision right now, I would investigate this very, very closely before deciding against it, because this has got to be the least disruptive solution. And if the apparent cost of ownership turns out to be the same or less than any other solution, you’re going to take it very seriously.

Fisher: Absolutely. Rachel, Dana, thoughts and comments?

Chalmers: I agree and I love this diagram. It’s the one that really conveyed to me how cloud-hosted desktop virtualization might work, and what the value prop is to the IT department, because they get to keep all the stuff they care about, all the user data, all the authentication authorization, all of the business apps. All they push out is support for those desktops, which frankly had been pushed out anyway.

There’s always one guy or gal in the IT organization who is hiking around from desktop to desktop installing antivirus or rebooting machines. Now, instead of that person being hiking around the offices, they are employed by the service provider and sitting in a comfy chair and being ergonomically correct.

Rational architecture

Gardner: Yes, I would say that this is a much more lucid and rational architecture. We’ve found ourselves, over the past 15 or 20 years, sort of the victim of a disjointed market roll out. We really didn’t anticipate the role of the Internet, when client-server came about. Client-server came about quickly just after local area networks (LANs) were established.

We really hadn’t even rationalized how a LAN should work properly, before we were off and running into bringing browsers in TCP/IP stacks. So, in a sense, we've been tripping over and bouncing around from one very rapid shift in technology to another. I think we’re finally starting to think back and say, “Okay, what’s the real rational, proper architectural approach to this?”

We recognize that it’s not just going to be a PC on every desktop. It’s going to be a broadband Internet connection in every coat pocket, regardless of where you are. That fundamentally changes things. We’re still catching up to that shift.

When I look at a diagram like Desktone's, I say, "Ah-ha!" Now that we fairly well understand the major shifts that have occurred in the past 20-25 years, if we could start from a real computer-science perspective, and look at it rationally from a business and cost perspective, how would we properly architect the way we deploy and distribute IT resources? We're really starting to get to a much more sensible approach, and that's important.

Bloor: Yes, I would completely go with Dana on that. From an architect's point of view, if nobody had influenced you in any way and you were just asked to draw out a design for virtualizing services to end users, you would probably head in this direction. I have no doubt about it. I've been an architect in my time, and it's just very appealing. What Desktone DaaS has here is resources under control, and we've never had that with a PC.

Fisher: Well, that was great, and I really appreciate you guys taking the time to answer my questions. With the remaining 10 minutes, I’d like to turn it over for some Q&A.

The question coming in has to do with server-based computing app delivery with respect to this model.

This is something that comes up all the time. People say, "We're currently using Terminal Services or Presentation Server," which is obviously what they use for app deployment. How does that application deployment model fit into this world? To kick off the discussion, I'll tell you that at Desktone we view what we're doing very much as the virtualization of the underlying environment -- the actual PC itself and the core OS.

That doesn’t change the fact that there are still going to be numerous ways to deploy applications. There’s local installation. There’s local app virtualization. There’s the streaming piece of app virtualization. And, of course, there’s server-based computing which is, by far, the most widely used form of virtualized application delivery.

Not to mention that, in our model, there is a private LAN connection between the enterprise and the service provider. In some cases, the latency of that connection is going to warrant having particularly chatty applications still hosted back in the enterprise data center on either Citrix or Microsoft terminal servers. So, I don't view this as a solution that cannibalizes traditional server-based computing. What do you guys think?

Chalmers: I think that’s exactly accurate. You mentioned right at the front of the call that Citrix is an investor in Desktone. Clearly the VDI model itself is one that extends the application of terminal services from traditional task workers to all knowledge workers –those people who are invested in having a picture of their kids on the desktop wallpaper.

I think cloud-hosted desktop virtualization extends that again, so that, for example, if you're running a very successful terminal services application and you don't want to rip that out -- very sensible, because ripping and replacing is much more expensive than just maintaining a legacy deployment of something like that -- you can drop in XenDesktop. XenDesktop can talk quite happily to what is now XenApp, the Presentation Server deployment.

It can talk quite happily to a Desktone back-end and have all of its VDI virtual desktops hosted on a hosting provider. If you’ve got a desk full of Wall Street traders, it can also connect them up to blade PCs, dedicated resources that are running inside the data center.

So, XenDesktop is an example of the kind of desktop connection broker you're going to see -- as happy supporting traditional server-based computing or the blade PC model as it is being the front end for a true VDI deployment.
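
In schematic terms, a connection broker is just a routing function from a user's profile to whichever desktop back end fits it. Here is a minimal, hypothetical sketch of that idea; the host names and categories echo the discussion above and are not any vendor's actual logic.

    # Hypothetical connection-broker sketch -- not XenDesktop's or
    # Desktone's actual logic. One front door, several back ends.
    BACKENDS = {
        "task_worker": "ts-farm.corp.example.com",            # terminal services
        "knowledge_worker": "vdi-pool.provider.example.net",  # hosted VDI
        "trader": "blade-pc-42.corp.example.com",             # dedicated blade PC
    }

    def broker(user_profile: str) -> str:
        """Return the desktop back end a user should be connected to."""
        try:
            return BACKENDS[user_profile]
        except KeyError:
            raise ValueError(f"no desktop back end for {user_profile!r}")

    print(broker("knowledge_worker"))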

Bloor: Yes, I’d go along with that. One of the things that’s interesting in this space is that there are a number of server-based computing implementations that have been, what I’ll call, early attempts to virtualize the PC, and you may get adrift from some of those implementations. I know in certain banks that they did this purely for security reasons.

The virtualized PC is as secure as a server is. So, you may get some drift from one kind of implementation to another but, in general, what's going to happen is that the virtual PC is just the same as a physical PC. So, you just continue to do what you did before.

Fisher: Absolutely. I do agree that there definitely will be a shift and that again – back to the use cases -- people are going to have to say, “Okay, here are the four reasons we did server-based computing,” not, “We did server-based computing because we thought it was cool.”

Maybe in the area of security, as Robin mentioned, or in some other areas, those reasons for deployment go away. But certainly, in dealing with latency over the LAN -- depending on where the enterprise data center sits, where the user sits, and where the hosting provider sits -- there very well may still be a compelling need to use server-based computing.

Okay. We’ve got about five minutes left. There was an interesting question about disaster recovery (DR), using cloud-hosted desktops as DR for VDI, and this is a subject that’s close to my heart. It will be interesting to hear what you guys have to say about it. There is actually already the notion of some of our service-provider partners looking at providing desktop disaster recovery as a service. It’s almost like a baby step to full-blown cloud-hosted desktops.

Maybe you don’t feel comfortable having your users’ primary desktop hosted in the cloud, but what about a disaster recovery instance in case their PC blue screens and is not recoverable and they’re in some kind of time-critical role and they need to get back and up and running.

Or, as is probably more commonly thought of, what if they’re the victims of some sort of natural disaster and need to get access to an instance of the corporate desktop. What do you guys think about that concept?

Bloor: There are going to be a number of instances where people just go to this, particularly banks where, because of the kind of regulatory or even local standards they operate under, they have to have a completely dual capability. It's a lot easier to have dual capability if you're going virtual, and I'm not sure that you would necessarily have the disaster-recovery service virtual and the real service physical. You might have them both virtual, because you can do that.

This is just a matter of buying capacity, and the disaster-recovery capability is only required at the point in time when you actually have the disaster. So, it's got to be less expensive. Certainly, when you're thinking about configuration and change management for those environments, having completely dual environments makes the problem a lot easier.

Gardner: I think there are literally dozens of security and risk-avoidance benefits to this model. There's the business-continuity issue -- the fact that cloud providers will have redundancy across data centers and across geographies. There's also intellectual-property risk management: you can control what property is distributed and how, it's kept centrally managed, and check-in and check-out can be rigorously managed. And then there's an audit trail as to who was there, so there are compliance and regulatory benefits.

There’s also control over access to privileges, so that when someone changes a job it’s much easier to track what applications they would and wouldn’t get, in that you’ve basically re-factored their desktop from scratch that next day that they start the new job. So, the risk-compliance and avoidance issues are huge here, and for those types of companies or public organizations where that risk and avoidance issue is huge, we’ll see more of this.

I think that the Department of Defense and some of the intelligence communities have already moved very rapidly towards all server-side control and, for the same reasons that would make sense for a lot of businesses too.

Chalmers: Disaster recovery is always top of mind this time of year, because the hurricanes come around just in time for the new financial round of budgeting. But really it’s a no-brainer for a small business. For the companies that I talked to that are only running one data center, the only thing that they’re looking at the cloud for right now is disaster recovery, and that applies as much to their desktop resources as to their server resources.

Fisher: Great. Well, we are just about out of time, so I want to close out. First, lots of information about what we're doing at Desktone is up on our website, including an analyst coverage page under the News and Events section, where you can find more about Robin's, Dana's, and Rachel's thinking, as well as that of other analysts.

We also maintain a blog at www.desktopsasaservice.com. We have a number of upcoming webinars to round out the series. We'll be talking to Pike County, a customer of IBM's and a user of the Desktone DaaS solution, and we'll be speaking with our partner IBM. We'll also have our COO, Paul Gaffney, on a couple of the webinars as well.

So with that I will thank our terrific panel. Rachel, Dana and Robin, thank you so much for joining and for a fantastic conversation on the subject, and thank you so much everyone out there for attending.

View the webinar here.

Transcript of a recent webinar, produced by Desktone, on the future of cloud-hosted PC desktops and their role in enterprises.