Wednesday, June 29, 2011

Talend Open-Source Approach Provides Holistic Integration Capability Across Data, Devices, Services

Transcript of a sponsored BriefingsDirect podcast on enterprise integration and new tools to put control in the hands of "the masses."

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Talend.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how the role and impact of integration has shifted, and how a more comprehensive and managed approach to integration is required, thanks to such major trends as cloud, hybrid computing, and managing massive datasets.

Moreover, the tools that support enterprise integration need to be usable by more types of workers, those who are involved with business-process activities and data analysis. The so-called democratization of IT is also rapidly progressing into this traditionally complex and isolated world of applications and data integration. [Disclosure: Talend is a sponsor of BriefingsDirect podcasts.]

So, how do enterprises face up to the generational shift of the function of integration to new and more empowered users, so that businesses can react and exploit more applications and data resources and do so in a managed and governed fashion? This is no small task.

We're finding that modern, lightweight, open-source platforms that leverage modular architectures are a new and proven resource for rapid and agile integration requirements. And, the tools that support these platforms have come a long way in ease of use and applicability to more types of activities.

We're here today to discuss how these platforms have evolved, how the open-source projects are being produced and delivered into real-time and enterprise-ready, mission-critical use scenarios, and what’s now available to help make integration a core competency among more enterprise application and data activities and processes.

Please join me now in welcoming our guests today. We're here with Dan Kulp, the Vice President of Open Source Development at Talend’s Application Integration Division and also the Project Management Committee Chair of the Apache CXF Project. Welcome back to BriefingsDirect, Dan.

Dan Kulp: It’s great to be here. Thank you.

Gardner: We're also here with Pat Walsh. He is the Vice President of Marketing in the Application Integration Division at Talend. Hey, Pat.

Pat Walsh: Nice to be here as well.

Gardner: Pat, let me start with you. We're talking about a shift here in some major trends. Everyone is talking about how IT needs to react differently. There is lots of change going on. Integration has always been important, but now it’s probably more important than ever.

With some of the shifts in computing models, such as cloud and the data intensive atmosphere that most organizations are now operating in, why is integration a real issue that needs to be approached differently?

Overriding trends

Walsh: We're seeing a couple of overriding trends that have really shifted the market for integration solutions. The needs have shifted with changes in the workplace.

First and foremost, we're seeing that there is much more information that needs to be managed, much more data associated, and there are a couple of drivers of that.

One is that there are many more interactions amongst different functional units within a business. We're seeing that silos have been broken down and that there’s more interaction amongst these different functions, and thus more data being exchanged between them and more need to integrate that data.

There's also the notion of the consumerization of IT: with so many devices like iPhones and iPads accessible to consumers in their everyday lives, people bring those to work and expect those tools to be adapted to their workplace. With that comes an even larger increase in the data explosion that you referenced earlier.

Coupled with that are overriding trends in IT to shift the burden of supporting systems away from the traditional data center and into the cloud. Cloud has been a big movement over the last couple of years in IT and it has an impact on integration. No longer can an IT department have full control over the applications that they are integrating. They now have to interact with applications like Salesforce.com.

A number of these trends converged. In the past, you may have been able to address data issues separately with a small portion of your IT group within the data center, and, say, application integration separately with another group within the data center. Nowadays, you're not only in control of your own systems; you also have to depend on systems that someone else supports for you in the cloud. Thus, the complexity of all of the integration points that need to be managed has exploded.

These are some of the overriding trends that we are seeing at Talend and responding to in terms of issues that are driving our customer needs today.

Gardner: It sounds like there are two major shifts in addition to some other complexity issues. The two shifts seem to be that we now need to integrate data, applications, and services with some sort of a coordinated effect. Having them in separate silos doesn’t seem to work very well. And then, we have a shift in terms of the architecture of where the computing, the resources, and the data reside -- and that would be this cloud computing activity.

Why is it important for data and application integration activities to become closer or even under the same umbrella?

Walsh: The two trends that you talked about are related. The architectural trend is really driving the need for the data and application integration technologies and the team supporting those to come together. The reason is that data and application integration no longer are necessarily centralized in a single location.

When they were, you had, in essence, a single point of integration that you needed to manage amongst the data and the applications. Nowadays, it’s distributed throughout your enterprise, but also distributed, as I mentioned before, across a network of partners and providers that you may be using.

So many touch points

With that, there's now the mandate that you can no longer isolate data from applications, because the touch points are so many. You now need to look at solutions that, from the get-go, consider both aspects of the integration problem -- the data aspect and the system and application integration aspect.

Gardner: And, I suppose we need tooling such that we can approach both of these problem sets, the data integration and the applications integration, with a common interface or at least common logic. Is that correct?

Walsh: Yes, and up until now the two audiences have been treated quite differently. I think the tool expectations of the audience for data management versus the audience for application integration were quite different. We're finding that we need to bridge that gap and provide unified tool sets that are appropriate for both the data management user, as well as the application integration user.

Gardner: I think we understand the business requirements now: why this shift is happening, why it's so important, and how it supports the real agility capabilities of an organization. So, this is not a nice-to-have, but really mission-critical.

Let's go to Dan Kulp. Tell me why a certain architectural or platform approach best addresses these issues. It doesn't sound like a manual, labor-intensive, siloed approach works. Why must we take a different kind of architectural step here, Dan?

Kulp: As Pat mentioned earlier, with the shifting of the requirements from silos into more of a distributed environment, the developers doing the application integration and the people doing the data management have to talk a lot more to get these problems solved. Your older solutions from five years ago, which had each of those things completely separate, were not able to scale up to this distributed type of environment.

Gardner: Let me ask you now from a different perspective, architecturally we have a shift, but why does an open source community approach help bring these constituencies together? What is it about an open source and modular approach to these infrastructure components that helps bridge these cultures?

Kulp: One aspect that open source brings is a very wide range of requirements that are placed on these open-source projects. That provides a lot of benefit to an organization, because those capabilities may not be required of your organization today, but you don't really know what's going to happen six months or a year from now.

You may acquire another company or you have to integrate another set of boxes from another area of your organization. The open source projects that you see out there, because of their open-source nature, have been attracting a wide range of developers, a wide range of new requirements and ideas, and very bright people who have really great ideas and thoughts and have made these projects very successful, just from the community nature of open source.

There is also the obvious cost benefit of not having all these high priced licenses, but the real value, in my opinion, is the community that’s behind these projects. It's continuously innovating and continuously providing new solutions for problems you may not even have yet.

Gardner: With cloud computing, you're also dealing with more moving parts. You don't necessarily know where those parts are coming from or what the underlying heritage is, but there can be an open-source commonality among and between them. I'm quite sure that many of the cloud providers have a significant amount of open source in their infrastructure that helps make these interactions, these common denominators, technically possible.

New complexities

Walsh: Agreed. The cloud brings a whole new set of complexities and challenges and as you are deploying your applications into the cloud, you need to think about these things. And a lot of these open-source projects that are addressing some of these cloud needs have thought about these things.

If your organization isn’t into cloud yet, but you're thinking about it, leverage the expertise that's already out there. Talk to the communities and get engaged with those communities. You'll learn a lot, and you'll be probably better off for it in the long run.

Gardner: Dan, you've been involved with open source for quite some time in a number of capacities. Maybe you could explain where you're involved, what sort of projects you're working on, and why this particular mix of projects comes together in helping us address this integration challenge?

Kulp: I've been involved with open source for roughly six years now, primarily at Apache. I got started at Apache as part of the Apache CXF Project. I've been there since the beginning. As you mentioned earlier, I'm the PMC Chair for that project, very heavily involved.

For those people who aren't familiar with CXF, that's the web services stack at Apache. It supports all of your SOAP standards, as well as JAX-RS and REST-based services. It's really a framework for producing services.
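
To make that concrete, here is a minimal sketch of what publishing a JAX-RS service with CXF's standalone server factory can look like. It is illustrative only; the class, resource path, and port are hypothetical, not anything discussed in the podcast.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.apache.cxf.jaxrs.lifecycle.SingletonResourceProvider;

// A minimal REST resource; CXF offers this JAX-RS model alongside
// its SOAP/JAX-WS support.
@Path("/customers")
public class CustomerService {

    @GET
    @Path("/{id}")
    @Produces("application/xml")
    public String getCustomer(@PathParam("id") String id) {
        // Hand-built payload for illustration; a real service would
        // marshal a domain object instead.
        return "<customer><id>" + id + "</id></customer>";
    }

    public static void main(String[] args) {
        // Publish the resource on an embedded HTTP endpoint.
        JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setResourceClasses(CustomerService.class);
        factory.setResourceProvider(CustomerService.class,
                new SingletonResourceProvider(new CustomerService()));
        factory.setAddress("http://localhost:9000/");
        factory.create();
        System.out.println("GET http://localhost:9000/customers/{id}");
    }
}
```

Hitting that URL with a browser or curl would return the XML payload above.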

Six years ago, that was the problem people were trying to solve. As things have evolved over the last six years, we're seeing more application integration challenges that are beyond SOAP and REST. That’s where projects like Apache Camel come in, where you're doing your enterprise integration patterns inside of your enterprise service buses (ESBs). So, I'm getting more heavily involved with that.
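
As a rough illustration of what Kulp means by enterprise integration patterns in Camel, here is a sketch of a Content-Based Router written in Camel's Java DSL. The directories and the order XML layout are invented for the example.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRouting {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Content-Based Router EIP: inspect each incoming order
                // document and send it down a channel based on priority.
                from("file:data/orders?noop=true")
                    .choice()
                        .when(xpath("/order[@priority = 'high']"))
                            .to("file:data/priority")
                        .otherwise()
                            .to("file:data/standard");
            }
        });
        context.start();
        Thread.sleep(10000); // let the file poller run briefly
        context.stop();
    }
}
```

In an ESB deployment, the same route would typically be packaged as an OSGi bundle and dropped into a container such as Karaf, rather than run from a main method.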

I've also been involved with even things like the Maven Project at Apache, doing build-related tools and deployment scenario things.

As the problems in the enterprise expand from year to year, which they always do, it’s fascinating seeing these open-source projects at Apache being incubated, or even graduating from the incubator, that solve these real world scenarios. To me, it has been an amazing experience to be involved with that whole process of seeing ideas bubble up through the incubator and into Apache projects that solve real world problems.

Gardner: Okay. We understand that there is a new set of requirements for integration. We know that we have an arsenal of approaches vis-à-vis the open-source communities, and some proven and mature projects that are implemented quite robustly in some of the most intensive compute environments.

How do we now bring this together in such a way that your typical enterprise can understand what they can do to bridge this gap between the data and the applications integration and then reduce their risk by setting up an architecture that’s cloud ready or hybrid computing ready?

Let’s go back to Pat Walsh. What are you finding on the street? What are people starting to do in terms of coming to grips with these architectural changes?

Expanded market

Walsh: One interesting point to raise, before talking about what we're seeing people doing, is that there's an expanded market now for these integration challenges. It used to be that very large enterprises were the ones addressing this kind of complexity in their organizations.

With cloud-based initiatives and such, it’s affecting even small to medium-size businesses (SMBs). We see a much broader set of enterprises trying to address it. Companies that have fewer than 1,000 employees are now looking at integration solutions to manage their data and their applications in the cloud in a much more sophisticated way than just three years ago. It’s a much broader problem.

The way that people are hoping to address it is by looking for a way that doesn’t require a massive outlay of investment in consulting resources. The traditional large organization, in addition to purchasing product to help them with integrating their data and integrating their applications, would typically have systems integrator help them pull everything together. That’s obviously not an affordable path for an SMB.

Therefore, people are looking to see how they can find a combined, easy-to-use approach, and how they can gain knowledge from people who have experience, having tackled these issues and problems in the past.

We're finding that people are looking for a simpler, prescriptive way to address the majority of the challenges out there. For the 20 percent outlier problems, you may need to have a systems integrator come in and help you. But, people are really focused on the meat and potatoes of the integration of their functions, the data, and the applications that go along with those processes and functions.

Gardner: Dan Kulp, we need to have architecture modernization in effect, but we need to do it in such a way that more people in a large organization and more types of organizations, small to medium-sized businesses, can avail themselves of these services, these capabilities.

Tell me a little bit about what you have done to allow that difficult equation to be solved. It seems to me that we're still talking about service-oriented architecture (SOA), and in many respects we're talking about ESBs. Five or seven years ago, that was a very complex and costly activity. We've now been able to abstract up the value, but, I suppose, reduce and hide the complexity. Tell me how you do that.

Kulp: The first step in that process was identifying where the best solutions are. They're primarily in open source. I mentioned CXF and Camel, and there's Apache Karaf providing the OSGi runtime.

That was the first step. We grabbed those and brought them together, the best of breed from the various Apache projects that solve real-world problems.

The next step was trying to find or produce a set of tooling that makes using those products a lot easier. One of the things about Apache that you'll discover, if you're heavily involved, is that we're hardcore developers. For us, writing Java code to solve a problem is natural.

Skill sets

One of the problems that we're trying to address is bringing this great technology produced by the Apache people into the hands of those that don’t have that same level of skill set, expertise, or mindset.

That includes those from the application integration side, where you have developers who are used to doing point-and-click enterprise integration pattern work, as well as the data integration people who are used to their data mappings, GUIs, and things like that, and trying to bring both sets of people together onto a platform that can serve both teams.

Gardner: A similar question for you, Pat. Where do we bring the value higher but make the complexity less of an issue and less visible? What is it about your tools and approach at Talend that is helping to bring this to the masses in a way that's automated, a service-factory approach rather than a hand-coding approach?

Walsh: Talend has a great history of unifying technologies onto a common platform, to really keep the power of the underlying tools, but simplify the interface to it. This unified platform really consists of five key components.

The first one is a common development environment that is used across the products. The second thing is a common deployment tool that allows you to deploy into a runtime environment.

There's also a common repository that allows you, across the lifecycle of your process, to be able to manage it consistently, regardless of the type of technology that’s being used. Finally, there is common monitoring across the entire environment.

What we are doing now is extending that model that has been applied to our data management products to encompass the ESB, the application integration aspect of it. By providing this unified platform of tools, it allows someone to learn a single interface, regardless of whether it’s at the development stage, the deployment stage, or the management stage, and get the power of master data management technologies, data integration, data quality, or the ESB technologies themselves.

Providing this one interface, this one common environment, allows people to become comfortable with a common interface, while getting the benefit of multiple sets of tools.

Gardner: One of the things that I face when I talk about these issues with enterprises is that they like the idea of having more people involved, but they also see that there is a risk involved with that concerning permissions, access, control, and even policy and rules driven activities around who gets to integrate what. How do you solve or ameliorate that problem?

Walsh: We've gone to great lengths to include security mechanisms in the solution, so that we can have approaches whereby there are certain permissions for just certain individuals. Or, IT management can look at certain aspects while opening it up to a broader audience, when it comes to development and use of the interfaces that are going to be developed on the data and application side.

Democratizing technology

It's very important, as you say, that as we bring this technology to the masses, as we refer to it, democratizing the technology, lowering the barriers to entry that historically have been in place, we don't remove any of the enterprise qualities that are expected. Security is certainly a major one, as is policy management, so that you could have a number of different business roles that allow you to have the flexibility you need as you deploy it into a large- or even medium-size enterprise.

We're providing both capabilities, simplifying the interface, while not removing any of the enterprise qualities that have come to be expected of the integration products we provide.

Gardner: Okay. Dan has told us a little bit about how some of the open-source projects, such as CXF, Camel, and Karaf, have provided some fundamental underpinnings for this. But, Talend has also been merging and acquiring. Tell me a little bit about your business and the evolution of Talend that has allowed you to provide this all-in-one integration capability to, as you say, more of the masses.

Walsh: It came quite naturally from Talend's perspective. Customers were using our data integration tools, as well as our data quality tools. We have Talend Open Studio, which is our popular open-source data integration technology. Customers naturally were inquiring about how they could provide these data jobs as services, so that they could be reused by other applications, or how they could incorporate our technology into an SOA.

This led Talend to partner with a company called Sopera. They had a very rich ESB-based integration platform for applications. After two years of partnership, we decided it made sense to come together in a stronger way, and Talend acquired Sopera.

So, we have seen this firsthand from our customers. It really drove us to see the convergence of data and application integration technology, and therefore the acquisition of Sopera’s technology, as well as the people behind that technology, has enabled us to really come in with this common platform that we are just now releasing.

Gardner: The timing sounds very good. There's movement in the market toward democratization, a more inclusive platform approach to data, applications, and services integration. And the market driver of hybrid computing is coming at just the right time in terms of being able to bridge different types of computing environments and integrate across them.

This is all great in theory, and we've certainly seen a lot of action in the open-source community that has bolstered the ability of these underlying products and projects. But what about real use-case scenarios? Do we have any examples of where this is being used now, perhaps by early adopters? Maybe you can name them, or maybe you can only describe what they're doing. But for me, showing is always better than just telling. Can we show how this all-in-one integration capability is actually being used in the field?

Walsh: We have a couple of examples that I can refer to. I think the most tangible one is an insurance company that we work with. They've been working with us for quite some time on the data side of the house, looking at how they can share their back-office data amongst the different industry consortia they work with to do ratings and other checks on creditworthiness or insurance risk. That has really been about integrating data on the backend.

Much like any business, they're making it more accessible to their consumers by trying to extend their back-office systems into systems that have a more general web interface, or maybe an interface at an ATM.

Opened to consumers

So, they required some application integration technology, and with that, they built this web interface and opened it up to consumers. The expectation of their users is a much more rapid response time. When they had to interface with an agent in the office, they might have waited 24 hours for a response, but now they expect their answer to come during their web-based session.

The timeframe required has led them to an application integration solution that can respond at sub-second rates for their transactions. In the past, they were living with a much longer latency for the completion of transactions.

It's just a typical example that I think folks can appreciate. As people extend their back-office systems to consumers, consumer expectations raise the bar in terms of the overall performance of the system, and thus the technology supporting those systems necessarily needs to change to support that expectation.

Gardner: In listening to Pat describe that use case, Dan, it sounds as if what we're trying to accomplish here is to do what the data warehousing, data mining, and business intelligence (BI) field have done, but perhaps allow many of those values to be extracted with more agility, faster, and then with a dynamic approach.

Is that fair? Are we really compressing or creating a category separate from BI, but that does a lot of what BI does vis-à-vis the integration of data and activities for application services?

Kulp: That's exactly what's happening. A couple of years ago, data mining meant batch jobs that were run at midnight or overnight. Then, the data would be available to the front-end people the next morning. You'd get your reports, or you'd log into your system and check the results of these batch jobs.

When you extend your backend data systems to the consumer, these overnight batch systems really don't meet consumers' expectations. They're demanding that their information be available immediately. They submit a new request and they want things updated immediately, so that results are available and displayed within seconds, not overnight.

That requires a whole new set of skills, a whole new set of challenges. The people who were doing the front-end application integration, who queried the data from the overnight batch jobs, suddenly have to have some expertise in not just cleaning the data, but also working with the team on the data side to provide updates to that information in a much more dynamic form.

Gardner: How is this going to become more critical? Looking to the future, particularly for organizations that are doing more and more web-based commerce, perhaps even mobile commerce, whether it's through a web interface, an HTML5 interface, or native applications on mobile devices, it seems to me that consumer activities are driving more need for this fast-feedback-loop integration and data-analysis function.

Let's start with you, Pat. Why does what we're talking about today become even more important in the future, and therefore more critical as a core competency?

Becoming more relevant

Walsh: You can see that as the consumerization of technology increases. We're already seeing the pressure that IT feels to become more relevant to the business, and that just expands.

As I said before about the consumerization of devices in the workplace, it really does come down to the interfaces and the expectation that it shouldn't require a specialist in an IT field to be able to manipulate and analyze the information people need, or even to create a service or application that enables them to do their everyday tasks or work functions.

That’s just going to expand it. It has been happening, and we are just going to see that at a more rapid pace. It’s going to require that vendors and technology companies like Talend respond in kind and build products that are more accessible to a broader audience of users.

I think it's analogous to what we saw in the early days of the Internet. Early on, you would use command-line interfaces to send files back and forth. Once there was a web-based interface, it opened things up to the masses. Nowadays, we think nothing of using a web browser to do all kinds of activity that 20 years ago was reserved for people who had the technical know-how to manipulate those systems.

We are seeing the same across these aspects of the business that up until now had really been the bastions of IT teams.

Gardner: I would also wonder if data services become additional revenue sources for companies. If they can expose just the right amount of data, safely and securely, and give people some tools to work with it, then not only do they provide services, but, because they were in a position to gather data about certain markets and certain activities, be it B2B or B2C, they can then, in a sense, monetize that data back out to a field of partners and/or end-users.

Is there an opportunity for enterprises to start looking at data, not just as an asset, but as actually a product or service to sell?

Walsh: Absolutely. Today, we see that they are really addressing data services as an efficiency within their organization. How can I leverage the investment that I have made in this initial data analysis or data job across the entirety of my organization? But it’s not a big step to take beyond that to say, if it’s beneficial to my organization, why wouldn’t it be beneficial to others in my industry or to an even broader audience?

So we absolutely see that as a level of commerce that will be enabled by more sophisticated data-services technology, with a more accessible interface to that technology.

Gardner: Dan Kulp, the same question about the future to you. How do you see the trends around mobile, and even localization services and mobile commerce? How do these shape up, so that we will require more of the types of services we've been talking about today, that all-in-one integration with rapid, iterative development around it?

Comes down to consumers

Kulp: It really comes down to the consumers of these services and data. The markets have expanded, consumers are demanding their information faster or demanding more of it, advertisers need to figure out where these consumers are going, and the whole variety of information sources expands as well. So the architecture of the applications and the interactions between the front-end and backend systems get blurred.

Things are changing, and companies like Talend that are involved in the space need to adapt as well and provide better solutions that let those lines blur a lot more quickly. That's what we're trying to target today.

Gardner: We will have to wrap up now. We're about out of time. Pat, for those folks interested in learning more, do you have some resources, some white papers, reports? Where would I go if I wanted to learn more about this integration across data and applications function for the masses? What do you have available?

Walsh: The easiest place to go would be our website at www.talend.com.

Gardner: Dan Kulp, what about in the open source community? Can you point folks to a place where they can learn more about some of these underlying and supporting projects?

Kulp: Each of the projects has its own website with information. CXF is cxf.apache.org; Camel is camel.apache.org; Karaf is karaf.apache.org. However, if you just go to the Apache website, at www.apache.org, there are links to all of them, as well as a lot of valuable information about how Apache works, how these Apache communities work, and how you can get involved.

A lot of that is just as important as the problems the technology projects themselves are trying to solve; the community behind those projects is as much of an asset as the code itself. I encourage people to poke around there and see all the exciting things that are going on at Apache.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on how the role and impact of integration has shifted and how a more comprehensive and managed approach to integration is helping enterprises produce and leverage more data driven business processes.

I'd like to thank our guests. We've been here today with Dan Kulp. He is Vice President of Open Source Development at Talend's Application Integration Division. Thanks so much, Dan.

Kulp: Thank you.

Gardner: And also Pat Walsh, Vice President of Marketing at Talend in their Application Integration Division. Thank you, sir.

Walsh: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Talend.

Transcript of a sponsored BriefingsDirect podcast on enterprise integration and new tools to put control in the hands of "the masses." Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Tuesday, June 28, 2011

Discover Case Study: Health Care Giant McKesson Harnesses HP ALM for Data Center Transformation and Dev-Ops Performance Improvement

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how McKesson has migrated data centers into fewer locations, while improving overall metrics of applications performance.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas. We're here on the Discover show floor the week of June 6 to explore some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions.

We're now going to focus on McKesson Corp., and how they're improving their operations and reducing their mean time to resolution. We'll also explore applications quality assurance, test, and development, and how they're progressing toward a modernization front on those efforts as well.

We might even get into a bit of how these come together for an application lifecycle management and dev-ops benefit. Here to help us understand these issues better -- and their experience and success -- is Andy Smith, Vice President of Application Hosting Services at McKesson. Welcome, Andy.

Andy Smith: Thank you.

Gardner: We're also here with Doug Smith, Vice President of Data Center Transformation at McKesson. Welcome, Doug.

Doug Smith: Thank you, Dana.

Gardner: First, we might want to get people familiar, if they aren't already, with McKesson. Andy Smith, tell us a little bit about McKesson, the type of organization you are, and its extent. It's quite a large organization you have for IT activities there as well.

Andy Smith: McKesson is a Fortune 15 healthcare company primarily in three areas: nurse call centers, medical and pharmaceutical distribution, and healthcare software development.

Gardner: And, you have a very large and distributed IT organization. I've heard about it before, but let’s go through that a little bit again if you don’t mind.

Andy Smith: It’s a very federated model. Each business unit has its own IT department responsible for the applications, and in some cases, their own individual data centers. Through Doug’s data center transformation program, we've been migrating those data centers into fewer corporate locations, and I'm responsible for running the infrastructure in those corporate locations.

Gardner: Andy, tell us about what you've been doing in order to get to faster time to market for your services, meeting your service level agreement (SLA) obligations internally, and how you reduce your mean time to resolution. What's been the story there?

Improving processes

Andy Smith: What we've been doing over a little more than two years is improving our processes to align with ITIL v3. We focused heavily on change management, event management, and configuration management. In parallel, we introduced the HP Tool Suite for monitoring, configuration management, asset management, and automation.

What we've seen through the improvement in the processes and the improvement in the tools has been a marked improvement in all of our metrics. We've seen a drop in our Tier 1 outages of 54 percent during the last couple of years, as we implemented this tool. We've got three years worth of metrics now, and every year, the metrics have declined compared to the prior year. We've also seen an 86 percent drop in the breaches of those Tier 1 SLAs.

Gardner: That’s very impressive. Doug Smith, tell us what you've been doing with data center transformation and how you're working toward a higher level of quality with the test development and the upfront stages of applications?

Doug Smith: Well, Dana, we've been on this road of transformation now for about three and a half years. In the beginning, we focused on our production environments, which generally consist of fairly predictable workloads across multiple business units and, as Andy mentioned, quite a variety of operating models. In the past, the business units had a great deal of autonomy in how they managed their infrastructure.

The first thing was to pull together the infrastructure and go through a consolidation exercise, as well as an optimization of that infrastructure. There we focused heavily on virtualization, as well as optimization of our storage environment, and to Andy’s point around process, heavily invested in process improvement.

A couple of years into this, we began to look at our development environment. McKesson has several thousand developers globally, and these developers spread across multiple product sets in multiple countries.

If you think about our objectives around security, quality, and agility, we look to continue to take advantage, both from an infrastructure perspective as well as a tools perspective, in how we can facilitate our developers through a more rapid development cycle, more securely, and with higher quality outcomes for our customers.

Gardner: So, it sounds as if both of you have relied increasingly on automation, integration, and federation for many of the products that support these activities. Is there anything in particular, at a philosophical level, about why managing and governing across multiple products with common governance and management capabilities is so important? Let's start with you, Andy.

Andy Smith: When we first started looking at new tools, we recognized that we had a lot of point solutions that may have been best-of-breed, but they were standalone solutions, so we weren't getting the full benefits of integration. As we looked at the next generation of tools, we wanted a tool suite that was fully integrated; the whole being better than the sum of the parts is probably the best way to put it.

We felt HP had progressed the farthest of all the competition in generating that full suite of tools to manage a data center environment. And, we believe we're seeing the benefits of that, because all these tools are working together to help improve our SLAs and shorten our mean time to restore.

Gardner: Doug Smith, any thoughts on that same level of the whole greater than the sum of the parts?

Governance in place

Doug Smith: Absolutely. It's not unique to a large business like McKesson, but as a federation, we have businesses that retain their autonomy and their decision-making. The key is to have governance in place to highlight the opportunity at an enterprise level and to say that if we make the investments, if we coordinate our activities, and if we pull together, we can actually achieve outcomes greater than we could individually.

Gardner: Doug Smith, you've been using the application development function as a first step toward a larger data center transformation effort, and you've been an early adopter for that set of applications.

At the same time, Andy Smith has been involved with trying to make operations run more smoothly. Do these come together? Is there a better ability to create an end-to-end process for development and operations, and perhaps provide a feedback loop among and between them?

This is sort of a dev-ops question. Andy Smith, how does that strike you? Is there something even greater here, perhaps a greater whole from the sum of even more parts?

Andy Smith: I believe so, because for the products that McKesson develops and sells to the healthcare industry, in many cases, we're also hosting them within our data centers as an application service provider.

And the bigger whole, to me, is the fact that I can take the testing scripts that were used to develop the products and use them in the BAC Suite to test and monitor the application as it runs in production. So, we're able to reuse that testing data and those testing schemas in the production world to monitor the live product.

Gardner: Doug Smith, thoughts on the same dev-ops benefit? How does that strike you?

Doug Smith: As you look across product groups and our ability to scale this, and with Andy’s capability that he is developing and delivering on, you really see an opportunity for a company like McKesson to continue to deliver on its mission to improve the health of the businesses that we serve in healthcare. And, we can all relate to the benefits of driving out cost and increasing efficiency in healthcare.

So, at the highest level, anything that we can do to facilitate a faster and more agile development process for the folks who are delivering software and services in our organization, as well as help them provide a foundation and a layer where then they can talk to each other and build additional services and value-added services for our customers on top of that layer, then we have something that really can have an impact for all of us.

Gardner: Well, very good. Thank you for sharing that. I want to thank our guests. We've been here talking about the benefits of better tools for operations, as well as application development and hosting, and sharing their experience has been Andy Smith. He is the Vice President of Application Hosting Services at McKesson. Thanks so much, Andy.

Andy Smith: Thank you.

Gardner: And also Doug Smith, Vice President of Data Center Transformation at McKesson. Thank you, Doug.

Doug Smith: Thank you, Dana.

Gardner: And thanks to our audience for joining this special BriefingsDirect podcast coming to you from the HP Discover 2011 Conference in Las Vegas. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of user experience discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from HP Discover 2011 on how McKesson has migrated data centers into fewer locations, while improving overall metrics of applications performance. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

Thursday, June 23, 2011

Private Clouds: Debunking the Myths That Can Slow Adoption

Transcript of a sponsored podcast on the misconceptions that slow some enterprises from embracing private cloud models.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Get a complimentary copy of the Forrester Private Cloud Market Overview from Platform Computing.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on debunking myths on the road to cloud-computing adoption.

The popularity of cloud concepts and the expected benefits from cloud computing have raised expectations. Forrester now predicts that cloud spending will grow from $40 billion to $241 billion in the global IT market over the next 10 years. And yet, there's still a lot of confusion about the true payoffs and risks associated with cloud adoption. IDC has its own numbers.

Some enterprises expect to use cloud and hybrid clouds to save on costs, improve productivity, refine their utilization rates, cut energy use and eliminate gross IT inefficiencies. At the same time, cloud use should improve their overall agility, ramp up their business-process innovation, and generate better overall business outcomes. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

To others, this sounds a bit too good to be true, and a backlash against a silver-bullet, cloud-hype mentality is inevitable and probably healthy. Yet, we find that there's also unfounded cynicism about cloud computing and undeserved doubt.

So, where is the golden mean, a proper context for real-world and likely cloud value? And, what are the roadblocks that enterprises may encounter that would prevent them from appreciating the true potential for cloud, while also avoiding the risks?

We're here to identify and debunk some myths, for better or worse, that can cause confusion and hold IT back from embracing cloud models sooner rather than later. We'll also define some clear ways to get the best out of cloud virtues without stumbling.

Here to join me on our discussion about the right balance of cloud risk and reward are Ajay Patel, a Technology Leader at Agilysys, and he's in Chicago. Welcome to the show, Ajay.

Ajay Patel: Thank you very much, Dana.

Gardner: We're also here with Rick Parker, IT Director for Fetch Technologies, and he's in El Segundo, Calif. Welcome, Rick.

Rick Parker: Good morning.

Gardner: We're also here with Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing, and he joins us from Boston. Welcome, Jay.

Jay Muelhoefer: Glad to be here.

Looking at extremes?

Gardner: Jay, let me start with you. I want to try to understand a little bit from your perspective, being deeply involved with cloud, particularly private cloud. Are we looking at extremes?

On one hand, we have people that see this as a golden, wonderful opportunity to change IT fundamentally. On the other hand, we have folks that seem to be grounded in risk about security and data, and think that the cost will probably be even higher. So where's the right balance? Are they both right or they both wrong? How do you see it?

Muelhoefer: They're both right in some ways. Yes, there are risks that people are confronting today, but there’s also lots of opportunity. Right now, it's a golden time to be evaluating the concept of cloud and private cloud.

In 2009, I think a lot of people were looking at cloud and saying, "Okay, this is an interesting technology. Is this really something that's going to come to fruition?" In 2010, there was a lot of research, and a lot of the early adopters were dipping their toes into cloud and what the benefits could be.

But, 2011 is really where the conversation is moving from "Is this possible?" to "How do I take advantage of it for my own organization?" Google and Amazon have really reset the bar for how IT services are delivered in the marketplace. If internal organizations don't start meeting the needs of their business constituencies, whether it's a development, test, or even production user, those users are going to look elsewhere to consume those resources. So, we've hit an inflection point, and that's going to make it an exciting time.

Gardner: Ajay Patel, how about from your perspective? Do you see this mostly through the lens of opportunity, or do the risks merit being bit conservative?

Patel: Looking at it from a systems-integrator (SI) perspective, what we're seeing is that the customer base, the end-users, are ready to take the leap to cloud. The technologies are there. The capabilities of the cloud management software, the key part of deploying private clouds, are there -- but fears about security are keeping them from jumping to it. I'm very confident that the technology and the industry are ready to take customers to the next phase of private clouds.

Gardner: We'll get to some of those fears in a little while, when we look at various myths and perhaps what is supporting them or what needs to be debunked.

Rick Parker, how about you? What are you seeing? What are you hearing in the field? Do most people seem to think that the good or the benefits outweigh the risks, or are many people still on the fence?

No standard definition

Parker: The biggest issue is the lack of knowledge, because there isn't a standard definition of what a private cloud network is comprised of. If you don't know what it is, then you can't possibly build one yourself. Because there isn't a standard definition that the majority of people are aware of, that leads to an enormous amount of confusion.

Then, when marketing gets hold of it and applies the term to many different things that aren't even cloud related, that obscures the issue even further. So, I see a basic lack of knowledge as the issue for private cloud deployments more than anything.

Gardner: So, we're working toward refining that understanding and, that way, being able to have a better sense of where our risks and rewards are. Of course, we hear that IT is focusing on a sense of lost control, that a third-party public cloud gets between them and their users.

We also hear about a lack of trust, that these cloud providers are not proven. They say that they're going to do what they do, but if they don't, the IT department is still going to be left holding the bag or being held responsible. There are, of course, as you mentioned, security, vulnerability, confidentiality, and privacy issues, particularly around data.

Let's begin to tackle some of the underlying myths that substantiate these concerns, ameliorate them, or help folks get the good without suffering the ills. We have a series of myths, and I'll take the first one to you, Rick.

There's an understanding that, as we are trying to define it, virtualization is private cloud and private cloud is virtualization. Clearly, that's not the case. Help me understand what you perceive in the market as a myth around virtualization and what should be the right path between virtualization and a private cloud?

Parker: Private cloud, to put a usable definition to it, is a web-manageable virtualized data center. What that means is that through any browser you can manage any component of the private cloud. That's opposed to virtualization, which could just be a single physical host with a couple of virtual machines (VMs) running on it, and which doesn't provide the redundancy and cost-effectiveness of an entire private cloud or its ease of management.

So there is a huge difference between virtualization and use of a hypervisor versus an entire private cloud. A private cloud comprises virtualized routers, firewalls, and switches, in a true data center, not a server room. There are redundant environmental systems, like air-conditioning, and redundant Internet connections. It's comprised of an entire infrastructure, not just a single virtualized host.

Gardner: And is there a certain level of virtualization required? We hear some common rates for server workloads of 20 to 30 percent. Is there a certain point in your adoption of server virtualization where you're almost inevitably heading toward a cloud? Are there people who have 80 percent virtualization and perhaps have no interest in, or will never get to, the cloud? How does the rate of adoption for virtualization perhaps impact the likelihood of adopting private cloud infrastructure?

Parker: Moving to a private cloud is inevitable, because the benefits so far outweigh the perceived risks, and the perceived risks are more toward public cloud services than private cloud services.

Gardner: We’ve talked a little bit about fear of loss of control. Perhaps bringing private cloud infrastructure and models to bear on a largely virtualized server infrastructure would provide even more control, better security, and a reduction in some of these risks. Is there a counter-intuitive effect here that cloud will give you better control and higher degrees of security and reliability?

Redundancy and monitoring

Parker: I know that to be a fact, because the private cloud management software and hypervisors provide redundancy and performance monitoring that a lot of companies don't have by default. You don't get performance monitoring across a wide range of systems just by installing a hypervisor; you get it by going with a private cloud management system and the use of VirtualCenter, which supports live migration between physical hosts.

It also provides uptime/downtime monitoring, reporting, and capacity planning that most companies don't even attempt, because these systems are generally out of their budget.

Gardner: I wonder if you wouldn’t mind telling us, Rick a little bit about Fetch Technologies. You're the IT Director there. Tell us a little bit about your organization.

Parker: Fetch Technologies is a provider of data as a service, which is probably the best way to describe it. We have a software-as-a-service (SaaS) type of business that extracts, formats, and delivers Internet-scale data. For example, two of our clients are Dow Jones and Shopzilla.

Gardner: Let’s go next to Ajay. A myth that I encounter is that private clouds are just too hard. "This is such a departure from the siloed and monolithic approach to computing that we'd just as soon stick with one server, one app, and one database," we hear. "Moving toward a fabric or grid type of affair is just too hard to maintain, and I'm bound to stumble." Why would I be wrong in assuming that as my position, Ajay?

Patel: One of the main issues that the IT management of an organization encounters on a day-to-day basis is the ability of their current staff to change the principles of how they manage day-to-day operations. The operational ability for an IT management staff to operate a private cloud is there.

The training and the discipline need to change. The fear of operations being changed is one of the key issues that IT management sees. They also think of staff attrition as a key issue. But by doing an actual cloud assessment, by understanding what the cloud means, you find it's closer to what the IT infrastructure team does today than the myth would suggest.

For example, virtualization is a key fundamental need of a private cloud -- virtualization of the servers, network, and storage. All the enterprise providers across servers, networks, and storage are creating virtualized infrastructure for you to plug into your cloud-management software and deliver those services to an end-user without issues -- and in a single pane of glass.

Gardner: When you say a single pane of glass, I think you are talking about the manageability, the fact that these highly virtualized environments can be automated and that you can probably oversee many, many more instances of servers and runtime environments with fewer people. Is that what you mean?

Patel: Absolutely. If you look at the some of the metrics that are used by managed service companies, SIs, and outsourcing companies, they do what the end-user companies do, but they do it much cheaper, better and faster.

More efficient manner

How they do it better is by creating the ability to manage several different infrastructure portfolio components in a much more efficient manner. That means managing storage as a virtualized infrastructure: tiered storage, the network, the servers, not only the Windows environment but also the Unix and Linux environments, and putting all of that in the hands of the business owners.

Gardner: This is probably where we hear a lot about the cost containment issues. We're talking about higher utilization, lower energy, and better footprint, when it comes to facilities and so forth. Is this what you're seeing, that those who do cloud properly, that put in the proper management and administration, are actually getting some cost-benefits? There might be an upfront cost associated, but it’s the operational ongoing costs that are probably the most important, and that's where the real value is.

Patel: Absolutely. Another thing to look at is that it's not even the upfront cost you need to be concerned about. Today, with money so hard to come by for a corporation, people need to look at not just return on investment (ROI), but return on invested capital.

You can deploy private cloud technologies on top of your virtualized infrastructure at a much lower cost of entry than if you were to keep expanding isolated islands of test and dev environments, application by application, project by project.

Gardner: I'd like to hear more about Agilysys. What is your organization, and what is your role there as a technology leader?

Patel: I am the technology leader for cloud services across the US and UK. Agilysys is a value-added reseller, as well as a system integrator and professional services organization, that serves enterprises from Wall Street to manufacturing to retail to service providers and telecom companies.

Gardner: And do you agree, Ajay, with Forrester Research and IDC when they project such massive growth? Do you really expect that cloud, private cloud, and hybrid cloud are all going to grow so rapidly over the next several years?

Patel: Absolutely. The only difference between a private cloud and a public cloud, based on what I'm seeing out there, is the fear of bridging the gap between what end users attain via a private cloud inside their own four-walled data center and whether the public cloud can give them the same security and the comfort level that their data is secure. So, absolutely, private to hybrid to public is definitely the way the industry is going to go.

Gardner: Jay at Platform, you're thinking about myths that have to do with adoption, different business units getting involved, lack of control, and cohesive policy. This is probably what keeps a lot of CIOs up at night, thinking that it’s the Wild West and everyone is running off and doing their own thing with IT. How is that a myth and what does a private cloud infrastructure allow that would mitigate that sense of a lot of loose cannons?

Muelhoefer: That's a key issue when we start thinking about how our customers look at private cloud. It comes back a little bit to the definition that Rick mentioned. Does virtualization equal private cloud -- yes or no? Our customers are asking for end-user organizations to be able to access their IT services through a self-service portal.

Key element

That's a key element that we see being added on top of virtualization. But a private cloud isn't just virtualization, nor is it one virtualization vendor. It's a diverse set of services that need to be delivered in a highly automated fashion, because it's not going to be just one virtualization platform -- it's going to be VMware, KVM, Xen, and so on.

A lot of our customers also have physical provisioning requirements, because not all applications are going to be virtualized. People do want to tap in to external cloud resources as they need to, when the costs and the security and compliance requirements are right. That's the concept of the hybrid cloud, as Ajay mentioned. We're definitely in agreement. You need to be able to support all of those, bring them together in a highly orchestrated fashion, and deliver them to the right people in a secure and compliant manner.

The challenge is that each business unit inside the company typically doesn't want to give up control. They each have their own IT silos today that meet their needs, and they are highly overprovisioned.

Some of those can be at 5 to 10 percent utilization, when you measure it over time, because they have to provision everything for peak demands. And, because you have such a low utilization, people are looking at how to increase that utilization metric and also increase the number of servers that are managed by each administrator.

You need to find a way to get all the business units to consolidate these underutilized resources. By pooling, you get a diversification effect, just like holding a portfolio of stocks. Each business unit has a different demand curve, and they can all benefit: when one business unit needs a lot, it can draw from the pool while another business unit's demand is low.
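
To make that portfolio effect concrete, here is a minimal Python sketch with invented demand figures -- the unit count, spike size, and spike probability are assumptions for illustration only:

```python
import random

random.seed(42)

BUSINESS_UNITS = 4
HOURS = 24 * 90  # one quarter, sampled hourly

# Hypothetical demand curves: each unit idles at 1 unit of capacity
# and occasionally spikes to 10; the spikes rarely coincide.
demand = [
    [10 if random.random() < 0.05 else 1 for _ in range(HOURS)]
    for _ in range(BUSINESS_UNITS)
]

# Siloed model: every unit must provision for its own peak.
siloed_capacity = sum(max(d) for d in demand)

# Pooled model: provision once, for the peak of the *combined* demand.
pooled_capacity = max(sum(hour) for hour in zip(*demand))

print(f"Siloed capacity needed: {siloed_capacity}")  # 40 (4 units x peak 10)
print(f"Pooled capacity needed: {pooled_capacity}")  # typically far below 40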

But the big issue is how you do that without business units feeling like they're giving up control to some other external unit, whether that's centralized IT within the company or an external service provider. In our case, a lot of our customers, because of compliance and security issues, very much want to keep it within their four walls at this stage in the evolution of the cloud marketplace.

So, it’s all about providing that flexibility and openness to allow business units to consolidate, but not giving up that control and providing a very flexible administrative capability. That’s something that we've spent the last several years building for our customers.

Gardner: So the old way -- letting physical IT stay distributed -- offers business units control, but at a high price. With security vulnerabilities on the rise, it's hard to get comprehensive security and network performance when there's so much scattered infrastructure. The balance, then, is that we still want business units to feel enabled. Perhaps private cloud can do that.

Muelhoefer: It's all about being able to support that heterogeneous environment, because every business unit is going to be a little different and have different needs. You allow them control, but within defined boundaries. You can have centralized cloud control, where you give them their resources and quotas for what they're initially provisioned, support costing and chargeback, and provide a lot more visibility into what's happening.

You get all of that centralized efficiency that Ajay mentioned, plus a centralized organization that knows how to run a larger-scale environment. But then, each business unit can go into its own customized self-service portal and get access to IT services -- whether it's a simple OS, a VM, or a way to provision a complex multi-tier application in minutes -- as an automated process. That's how you get a lot of the cost efficiencies and the scale that you want out of a cloud environment.
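
To picture the kind of quota-and-chargeback bookkeeping that centralized control implies, here is a minimal Python sketch; the class, the rate, and the numbers are hypothetical illustrations, not Platform ISF's actual model:

```python
from dataclasses import dataclass

@dataclass
class BusinessUnit:
    name: str
    quota_vms: int            # cap set by centralized cloud control
    rate_per_vm_hour: float   # chargeback rate (hypothetical)
    running_vms: int = 0
    vm_hours: float = 0.0

    def request_vms(self, count: int) -> bool:
        """Self-service request; denied if it would exceed the quota."""
        if self.running_vms + count > self.quota_vms:
            return False
        self.running_vms += count
        return True

    def tick(self, hours: float) -> None:
        """Accumulate usage for chargeback reporting."""
        self.vm_hours += self.running_vms * hours

    def chargeback(self) -> float:
        return self.vm_hours * self.rate_per_vm_hour

qa = BusinessUnit("QA", quota_vms=20, rate_per_vm_hour=0.12)
qa.request_vms(15)          # granted via the self-service portal
print(qa.request_vms(10))   # False -- over quota, visible to central IT
qa.tick(hours=8)
print(f"{qa.name} owes ${qa.chargeback():.2f}")  # 15 VMs x 8 h x $0.12
```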

Gardner: And those business units would also have to watch their costs and maybe have their own P&L. They might start seeing their IT costs as shared services or chargebacks, and get out of the capital-expense business, so it could actually help them on the cost side of their business.

Still in evolution

Muelhoefer: Correct. Most of our customers today are very much still in evolution. The whole trend towards more visibility is there, because you're going to need it for compliance, whether it’s Sarbanes-Oxley (SOX) or ITIL reporting.

Ultimately, the business units of IT are going to get sophisticated enough that they can move from being a cost center to a value-added service center. Then, they can start doing granular chargeback reporting and show, at a much finer level, the value they're adding to the organization.

Parker: Different departments, by combining their IT budgets and going in on a single private cloud, get a much more reliable infrastructure. By combining budgets, they can afford SAN storage and a virtual infrastructure that supports live VMotion.

They get a fast response, because by putting a cloud management application like Platform on top of it, we provide the interface to the different departments, and they have much more control. They can set up servers themselves and manage their own servers. They have a much faster "IT response time," so they don't have to wait for IT's response through a help desk system that might take days to add memory to a server.

IT gives end-users more control by providing a cloud management application and also gives them a much more reliable, manageable system. We've been running a private cloud here at Fetch for three years now, and we've seen this. This isn’t some pie-in-the-sky kind of thing. This is, in fact, what we have seen and proven over and over.

Gardner: I asked both Ajay and Rick to tell us about their companies. Jay, why don’t you give us the overview of Platform Computing? It’s based in Toronto and it’s been in the IT business for quite some time.

Muelhoefer: Platform Computing is headquartered in Toronto, Canada, and is about an 18-year-old company. We have over 2,000 customers, spread out on a global basis.

We have a couple of different business units. One is enterprise analytics, the second is cloud, and the third is HPC grids and clusters. Within the cloud space, we offer a cloud management solution for medium and large enterprises to build and manage private and hybrid cloud environments.

The Platform cloud software is called Platform ISF. It's all about providing the self-service capability to end-users to access this diverse set of infrastructure as a service (IaaS), and providing the automation, so that you can get the efficiencies and the benefits out of a cloud environment.

Gardner: Rick, let's go back to you. I've heard the myth that private clouds are just for development, test, and quality assurance (QA). Developers really like cloud. They have unique characteristics as users -- lots of uneven demand when they test or need to distribute applications across development teams and bring the work back. So is that right? Is cloud really driven by developers and getting too much notoriety, or is there something else going on -- is it for test, dev, and a whole lot more?

Beginning of the myth

Parker: I believe that myth came from the initial availability of VMware, when test and dev were what it was primarily used for. That's where the myth began.

My experience is that a private cloud isn't tied to a specific use case. A well-designed private cloud should and can support any use case. We have a private cloud infrastructure, and on top of it we deliver development resources, test resources, and QA resources, but they're all sitting on top of the base infrastructure of a single private cloud.

But there isn't just a single use case. It's detrimental to define use cases for private cloud. I don't recommend setting up a private cloud for dev only, another separate private cloud for test, and another separate private cloud for QA. That's where a use-case mentality leads: you start developing multiple private clouds.

If you combine those resources and develop a single private cloud, that lets you divide up the resources within the infrastructure to support the different requirements. So, it’s really backward thinking, counter-intuitive, to try to define use cases for private cloud.

Gardner: How about learning from that heritage, though? It's almost like New York: if you can do it there, you can do it anywhere. Is there something to be said that a private cloud supporting the whole test, dev, and deploy -- or DevOps -- lifecycle is probably going to be quite capable of supporting any number of workloads?

Parker: Correct. We run everything on our private cloud. Our goal is 100 percent virtualization of all servers -- running everything on our private cloud. That includes back-office corporate IT, Microsoft Exchange, and services like domain controllers and SharePoint; all of these systems run on top of our private cloud, out of our data centers.

We don't have any of these systems running out of an office, because we want the reliability and the cost savings that our private cloud gives us by deploying these applications on servers in the data center, where these systems belong.

Muelhoefer: Some of that myth may be because the original evolution of clouds started out in the area of very transient workloads. By transient, I mean something like a demonstration environment, or somebody who just needs a development environment for a day or two. But we've seen a transition across our customers, where they also have longer-running applications that they're putting into production-type environments, and they don't want to have to overprovision them.

If you need a capacity of 10 units at the end of the quarter, you don't want to hold those 10 units as resource hogs throughout the entire quarter. You want to be able to flex up and flex down according to the requirements and the demand. Flexing requires a different set of technology capabilities: having the right business policies and defining your applications so they can dynamically scale. I think that's one of the next frontiers in the world of cloud.
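
As a rough illustration of what such a flexing policy might look like, here is a small Python sketch; the thresholds, bounds, and utilization figures are invented for the example, not drawn from any vendor's scheduler:

```python
def flex_decision(current_vms: int, utilization: float,
                  min_vms: int = 2, max_vms: int = 10,
                  low: float = 0.30, high: float = 0.75) -> int:
    """Return the new VM count under a simple threshold policy:
    scale out when hot, scale in when idle, never beyond the bounds.
    (Illustrative only -- a real policy would also weigh cooldown
    periods, licensing costs, and placement constraints.)"""
    if utilization > high and current_vms < max_vms:
        return current_vms + 1   # flex up toward the quarter-end peak
    if utilization < low and current_vms > min_vms:
        return current_vms - 1   # flex down, freeing pooled capacity
    return current_vms

# End-of-quarter spike, then a quiet period:
vms = 4
for util in [0.80, 0.85, 0.90, 0.40, 0.20, 0.15]:
    vms = flex_decision(vms, util)
    print(f"utilization={util:.0%} -> {vms} VMs")
```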

Gardner: Jay, I suppose that's particularly important for organizations in the business-to-consumer (B2C) space that have Web apps and other systems facing their retail or consumer bases. These could flex based on demand spikes or even seasonal fluctuations, and certainly a much more cost-efficient way to attack that problem would be through cloud infrastructure.

Flexing capability

Muelhoefer: We've seen with our customers that there is a move toward different application architectures that can take advantage of that flexing capability in Web applications and Java applications. They're very much in that domain, and we see that the next round of benefits is going to come from the production environments. But it does require you to have a solid infrastructure that knows how to dynamically manage flexing over time.

It's going to be a great opportunity for additional benefits, but as Rick said, you don't want to build cloud silos. You don't want one for dev, one for QA, one for help desk. You really need a platform that can support all of those, so you get the benefits of pooling. It's more than just virtualization. We have customers that are heavily VMware-centric. They can be highly virtualized -- 60 percent-plus -- but the utilization isn't where they need it to be. It's all about how you can bring automation and control into that environment.

Gardner: Next myth, it goes to Ajay. This is what I hear more than almost any other: "There is no cost justification. The cloud is going to cost the same or even more. Folks that seem to think that this is really going to have a long-term benefit are kidding themselves. We've seen this in the past with other shifts in computing. They always claim it's going to cost less, but it never does." So, there is some cynicism out there, Ajay. Why is that cynicism unjustified?

Patel: One of the things that proves the cynicism untrue is that when you build a private cloud, you're pooling the IT capabilities that used to go into building individual islands of environments. On top of that, you're increasing utilization. Today, I believe overall virtualization in the industry is less than 40 percent; the remaining 60-plus percent is unvirtualized.

Average utilization is maybe 30 percent -- 15-20 percent in the Windows environment. By putting it on a private cloud, you increase utilization to 60, 70, 80 percent. If you can hit 85 percent utilization of the resources, you're buying that much less of every piece of hardware, software, storage, and network.
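
The arithmetic behind that claim is easy to check. The sketch below uses assumed round numbers in the ranges Ajay cites:

```python
# Back-of-the-envelope version of Ajay's utilization argument.
# Assumed (not measured) figures from the discussion above.
workload = 300          # units of compute actually consumed
util_siloed = 0.30      # ~30% average utilization in siloed islands
util_cloud = 0.85       # target utilization on a pooled private cloud

capacity_siloed = workload / util_siloed   # what you must buy today
capacity_cloud = workload / util_cloud     # what you buy after pooling

savings = 1 - capacity_cloud / capacity_siloed
print(f"Siloed capacity: {capacity_siloed:.0f} units")  # 1000
print(f"Cloud capacity:  {capacity_cloud:.0f} units")   # ~353
print(f"Hardware reduction: {savings:.0%}")             # ~65%
```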

When you pool all the different projects together, you build one environment. You put the right infrastructure in place, with the ability to service what your business does successfully. You end up saving 20 percent at a minimum, even if you just keep your current service-level agreements (SLAs) and deliverables the way they are today.

But if you retrain your staff to become cloud administrators -- to become more agile in creating workloads that are virtual-capable rather than standalone-capable -- you get much more benefit. Your cost of entry is minimally 20-30 percent lower on day one, and going forward you can get to more than 50 percent lower cost.

Gardner: I would imagine that for large organizations, in some cases, their constraints, their physical plants, their large brick-and-mortar data centers are at capacity. So this isn't simply saving costs operationally, but frees up capacity that they can use for other activities, and therefore not have to build additional data centers. That could be a huge savings.

Patel: It's killing two birds with one stone. Not only can you reclaim the capacity of a 100,000-square-foot data center facility, but you can now put in two to three times more compute capacity without breaking the barriers of power, cooling, and heating. And by having cloud within your data center, the disaster-recovery capability of cloud failover is inherent in the framework of the cloud.

You no longer have to worry about individual application-based failover. You're looking at failing over an infrastructure instead of applications. And, of course, the framework of the cloud itself gives you much higher availability, in terms of hardware uptime and SLAs, than you can obtain by individually building sets of servers for test, dev, QA, or production.

Gardner: Ajay, when we talk about cost, I suppose another important criterion here is comparing the old processes and methods to the new. Are there any metrics you've been able to gather about how private cloud compresses or improves on how IT gets done?

Days to hours

Patel: Operationally, beyond the initial setup of the private cloud environment, the cost to IT and the IT budget go down drastically. Based on our interactions with end users and cloud providers, end-to-end provisioning time drops from anywhere between 11 and 15 days down to three or four hours.

In the old infrastructure deployment model, that clock starts while the hardware is still sitting on the dock. Break those three to four hours down against the old individual components: it used to take one to three days just to build the server, rack it, power it, and connect it.

Installing the operating system takes 10 minutes today within the private cloud environment; it used to take one to two days, maybe two-and-a-half, depending on the patches and add-ons. Setting up a dev environment at the application layer, starting from a template available within the private cloud, goes from days down to 30-60 minutes.
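
Tallying rough midpoints of the times cited above (the exact figures are illustrative, not measurements) shows where the compression comes from:

```python
# Midpoint estimates, in hours, of the provisioning times Ajay cites.
old_model = {
    "procurement / hardware on the dock": 13 * 24,   # 11-15 days
    "build, rack, power, connect":         2 * 24,   # 1-3 days
    "operating system install":            1.5 * 24, # 1-2.5 days
    "app-layer dev environment":           2 * 24,   # "days"
}
cloud_model = {
    "self-service request":      0.5,
    "OS install from image":     10 / 60,   # ~10 minutes
    "template-based dev setup":  45 / 60,   # 30-60 minutes
}

old_total = sum(old_model.values())
new_total = sum(cloud_model.values())
print(f"Old model: ~{old_total / 24:.0f} days")
print(f"Private cloud: ~{new_total:.1f} hours "
      f"({old_total / new_total:.0f}x faster)")
```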

When you combine all that, the operational efficiency you gain definitely puts your IT staff at a much greater advantage than your competitor.

Gardner: Ajay just pointed out that there is perhaps a business continuity benefit here. If your cloud is supporting infrastructure, rather than individual apps, you can have failover, reliability, redundancy, and disaster recovery at the infrastructure level, and therefore have it across the board.

Is that something that you're seeing your customers use, or is there a hybrid benefit as well? That's a roundabout way of asking what's the business continuity story and does that perhaps provide a stepping stone to hybrid types of computing models?

Parker: To backtrack just a little bit, at Fetch Technologies, we've cut our data-center cost in half by switching to a private cloud. That's just one of the cost benefits that we've experienced.

Going back to the private cloud cost, one of the myths is that you have to buy a whole new set of cloud technology -- cloud hardware -- to create a private cloud. That's not true. In most cases, many of the components of a private cloud are just redeployed existing hardware, because the cloud network is more a configuration than specific cloud hardware.

In other words, you can reconfigure existing hardware into a private cloud. You don't necessarily need to buy anything new, and there's really no such thing as specific cloud hardware. There are hardware systems and models that are more optimal in a private cloud environment, but that doesn't mean you need to buy them to start. You can use the initial cost savings from virtualization to pay for more optimal hardware later, but you don't have to start with the most optimal hardware to build a private cloud.

As far as the business continuity, what we've found is that the benefit is more for up-time maintenance than it is for reliability, because most systems are fairly reliable. You don't have servers failing on a day-to-day basis.

Zero downtime

We have systems -- at least one server -- that have been up for two years with zero downtime. For updating firmware, we can VMotion virtual machines off to other hosts, upgrade the host, and then VMotion those virtual servers back onto the upgraded host, so we have zero-downtime maintenance. That's almost more important than reliability, because reliability is generally fairly good.
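
The maintenance loop Rick describes can be sketched in a few lines; the classes below are toy stand-ins, not a real VMware or VirtualCenter API:

```python
# A toy, runnable sketch of rolling zero-downtime host maintenance:
# drain each host via live migration, patch it while empty, move on.

class Host:
    def __init__(self, name):
        self.name = name
        self.vms = []
        self.patched = False

class Cluster:
    def __init__(self, hosts):
        self.hosts = hosts

    def migrate(self, vm, source, dest):
        source.vms.remove(vm)   # live migration: the VM never powers off
        dest.vms.append(vm)
        print(f"  VMotion {vm}: {source.name} -> {dest.name}")

    def maintain(self):
        for host in self.hosts:
            spares = [h for h in self.hosts if h is not host]
            for vm in list(host.vms):                 # 1. drain the host
                dest = min(spares, key=lambda h: len(h.vms))
                self.migrate(vm, host, dest)
            host.patched = True                       # 2. patch while empty
            print(f"  {host.name} upgraded, zero guest downtime")

a, b = Host("esx-a"), Host("esx-b")
a.vms, b.vms = ["web1", "db1"], ["web2"]
Cluster([a, b]).maintain()
```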

Gardner: Rick, at Fetch Technologies, we've been talking about cloud computing at almost an abstract level, but for end users, the folks who are actually using these applications, there might be some important benefits for them that we haven't looked at yet?

Parker: Yes. The response we got from the QA engineers we rolled Platform out to was that it was the greatest thing since sliced bread, because they're able to deploy new virtual machines whenever they want and need them. They can change the configuration of the virtual machines themselves.

They weren't waiting for IT to respond to requests. The almost ecstatic feedback from the end users was unlike what we've seen with nearly any other application we've deployed. That was extremely important.

Gardner: Jay Muelhoefer at Platform, is there another underlying value here -- that moving to private cloud puts you in a better position to start leveraging hybrid cloud? That is to say, more SaaS, or using third-party clouds for specific IaaS, or perhaps over time moving part of your cloud into their cloud.

Is there a benefit in that getting expertise around private cloud sets you up to be in a better position to enjoy some of the benefits of the more expansive cloud models?

Muelhoefer: That's a really interesting question, because one of the main reasons a lot of our early customers came to us was uncontrolled use of external cloud resources. If you're a financial services company, or somebody else with compliance and security issues, and you have people going out and using external clouds with no visibility into that, it's pretty scary.

We offer a way to provide a unified view of all your IT service usage, whether it's inside your company, serviced through your internal organization, or potentially sourced through an external cloud that people may be using as part of their overall IT footprint. It's really the ability to synthesize and figure out, when an end user makes a request, what's the most efficient way to service that request.

Is it to serve up something internally or externally, based on business policies? Is the request using customer data that can't go outside the organization? Does it have to run alongside a certain type of application where latency matters for how it's served? It's about providing a lot of business-policy context about how best to serve the request, whether the objective you're working against is cost, compliance, or security.
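
A policy-routing decision of that sort might look, in skeletal form, like the following Python sketch; the rule set and fields are illustrative assumptions, not Platform's actual policy engine:

```python
# Decide whether a request is served internally or from an external
# cloud. Rules are ordered: compliance first, latency next, cost last.

def place_request(req: dict) -> str:
    # Compliance trumps cost: regulated data never leaves the walls.
    if req.get("contains_customer_data") or req.get("regulated"):
        return "internal"
    # Latency-sensitive apps stay close to their dependencies.
    if req.get("max_latency_ms", 1000) < 20:
        return "internal"
    # Otherwise, serve from whichever side is cheaper right now.
    internal_cost = req.get("internal_cost_per_hour", 0.15)
    external_cost = req.get("external_cost_per_hour", 0.10)
    return "internal" if internal_cost <= external_cost else "external"

print(place_request({"contains_customer_data": True}))   # internal
print(place_request({"max_latency_ms": 5}))               # internal
print(place_request({"external_cost_per_hour": 0.08}))    # external
```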

That's one key thing. Another important aspect we see in our customers is disaster recovery and reliability. We've been working with a lot of our larger customers to develop a unique ability to do active/active failover. We actually have customers with applications running in real time across multiple data centers.

So, in the case of not just an application going down, but an entire data center going down, they would have no loss of continuity of those resources. That's a pretty extreme example, but it goes to the point of how important meeting some of those metrics is for businesses in making the cost justification.

Stepping stone

Gardner: We started out with cynicism, risk, and myths, but it sounds like private clouds are a stepping stone -- and, at the same time, attainable. The cost structure sounds very attractive, certainly based on Rick's and Ajay's experiences.

Jay, where do you start with your customers for Platform ISF, when it comes to ease of deployment? Where do you start that conversation? I imagine that they are concerned about where to start. There is a big set of things to do when it comes to moving towards virtualization and then into private cloud. How do you get them on a path where it seems manageable?

Muelhoefer: We like to engage with the customer and understand their objectives and what's bringing them to look at private cloud. Is it the ability to be much more agile and deliver applications to end users in minutes, is it more on the cost side, or is it a mix of the two? We engage with them on a one-on-one basis and/or work with partners like Agilysys, where we can build out a roadmap for success. That typically involves understanding their requirements and doing a proof of concept.

Something that’s very important to building the business case for private cloud is to actually get it installed and working within your own environment. Look at what types of processes you're going to be modifying in addition to the technologies that you’re going to be implementing, so that you can achieve the right set of pooling.

Maybe you're a very VMware-centric shop, but you don't want to be locked into VMware, so you want to look at KVM or Xen for non-production use cases and what you're doing there. Are you looking at how you can make yourself more flexible and leverage external cloud resources? How can you bring physical servers into the cloud, and do it at the right price point?

A lot of people are looking at the licensing aspect of cloud, and there are a lot of different alternatives -- whether it's per VM, which is quite expensive, or alternatives like per socket -- and helping build out that value roadmap over time.

For us, we have a free trial on our website that people can use. They can also learn more at http://www.platform.com/privatecloud. We definitely encourage people to take a look at us. We were recently named the number-one private cloud management vendor by Forrester Research, and we're always happy to engage with companies that want to learn more about private cloud.

Gardner: Very good. We've covered quite a bit of ground, but we're out of time. You've been listening to a sponsored BriefingsDirect podcast discussion on debunking myths on the road to cloud computing adoption. I want to thank our guests. We've been joined by Ajay Patel, Technology Leader at Agilysys. Thanks so much, Ajay.

Patel: Thank you very much for your time, Dana.

Gardner: And, Rick Parker, IT Director at Fetch Technologies. Thank you, sir.

Parker: You’re welcome.

Gardner: And last, Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing. Thank you, Jay.

Muelhoefer: Thanks Dana. I appreciate it.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Platform Computing.

Get a complimentary copy of the Forrester Private Cloud Market Overview from Platform Computing

Transcript of a sponsored podcast on the misconceptions that slow some enterprises from embracing private cloud models. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
