Monday, August 11, 2014

Service Providers Gain New Levels of Actionable Customer Intelligence from Big Data Analytics

Transcript of a BriefingsDirect podcast on how service providers are harnessing the power of data analytics to improve customer service and customer relations.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Big Data Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing sponsored discussion on how data is analyzed and used to enrich the way you live and work. 

Once again, we’re showcasing how companies and industries worldwide are capturing myriad knowledge, gaining ever deeper analysis, and rapidly and securely making those insights available to more people on their own terms.

Our next Big Data innovation discussion highlights how the telecommunication service-provider industry is gaining new business analytic value and strategic return through the better use and refinement of their Big Data assets. To learn more about how Big Data analytics has become a business imperative for communication service providers (CSPs), please join me in welcoming Oded Ringer, Worldwide Solution Enablement Lead for HP Communication and Media Solutions. Welcome, Oded.

Oded Ringer: Hi, Dana, thanks for bringing me in. It's great being here.

Gardner: What are some of the major trends and even competitive pressures that are leading CSPs to now view themselves as being more data-driven organizations?

Ringer: It’s not a secret that CSPs are under a lot of pressure. On one hand, this industry has never been more central. Everybody is connected, spending so much more time online than ever before, and carrying with them small devices through which they connect to the network. So CSPs are central to our work and personal lives – as a result, they’re under a lot of pressure.

They’re under a lot of pressure, because they’re required to make massive investments in the networks, but they also need to deal with shrinking margins and revenues to subsidize these investments. So, at the end of the day, they’re squeezed between these two motions. 

One approach many CSPs adopted in the last year was to reduce cost and cut operations. But that is pretty much a trip to nowhere. Retreating into the most basic, commodity services is no way for these kinds of businesses to survive.

In the last two to three years, more and more traditional operators understand that they must go beyond what they did before. They need to offer more compelling services to reduce churn and acquire new customers. They need to leverage their position as a central place between consumers and what they are looking for, and become a kind of broker of information.

The key asset they have in their hand to become such brokers is the huge amount of information that they maintain. It’s exactly where analytics comes into play.

Talking about mobile

Gardner: When we say CSP and telecommunication companies these days, we’re more and more talking about mobile, right? How big a shift has mobile been in terms of the need to analyze use patterns and get to know what's really happening out in the mobile network?

Ringer: Mobile services are certainly the leading tool in most operators' arsenals. Operators that have the subscriber “connected” with them wherever they go, around the clock, have an advantage over those that are more dependent upon or only provide tethered services. 

But we need to keep in mind that there’s also a whole space for analytics solutions that are related to fixed-line services, like cable, satellite, broadband, and other landline services. CSPs are investing a lot in becoming more predictive, finding out what the subscriber really wants, what the quality of those services is at any given time, and how they can reduce churn in their customer base. 

Another kind of analytics practice that operators adopt is trying to be predictive about their investments in the network: understanding which network segments are used by more high-worth individuals, the customers they most want to improve service for, and beefing up those networks rather than the others.

Again, it’s these mobile operators who are on the front lines of doing more with subscriber data and information in general, but it is also true for cable operators and pay-TV operators, and landline CSPs.

Gardner: Oded, what are some of the data challenges that are specific to CSPs? We know, of course, that Big Data is an issue associated with rapidly increasing velocity, volume, and variety of data for just about any organization, but is there something specific about the Big Data challenge when it comes to getting these all-important analytics for CSPs?

Ringer: In the CSP industry, Big Data is bigger than in any other industry. Bigger, first of all, in volume. There is no other industry that runs this amount of data – if you take into consideration they’re carrying everybody’s data, consumer and enterprise. But that’s one aspect and is not even the most complicated one. 

The more complicated thing is the fact that CSPs, unlike most enterprises, need to handle not only the structured data that’s coming from databases and so on, but also unstructured data, such as web communication, voice communication, and video content. They want to analyze all those things, and this requires analyzing unstructured data. 

So that’s a significant change in that type of process flow. They are also facing the need to look at new sets of structured data, data from IT management and security log files, from sensors and end-point mobile device telematics, cable set-top boxes, etc.

And second, in the CSP industry, because everything is coming from the wire, there’s no such thing as off-line analytics or batch analytics. Everything needs to be real-time analytics. Of course, this doesn’t mean that there will not be off-line or batch analytics, but even these are becoming more complex and span many more data sets across multiple enterprise silos.

More real time

If you analyze subscriber behavior right now and you want to make an offer to improve the experience that he’s having in real time, you need to capture the degradation of service right now and correlate it with what you know about the subscriber right now. So it's so much more real time than in any other industry. 

In short, CSP Big Data analytics is Big Data analytics on steroids.

Gardner: Of course, all these different types of media, information, and data need to be associated in order to get those bigger analytic payoffs. That is to say, having separate pools of analysis isn’t as valuable as analyzing them all together. How do the CSPs pull together data and assets that up until now they really didn’t try to join or analyze in conjunction or in association with one another?

Ringer: There are different data sources, or information sources, and it makes no sense to consolidate everything in one database, because it's endless, and most of the existing databases are limited in purpose and scale – not to mention the exponential increase in governance problems associated with wholesale transfer from existing siloed data stewards to a single consolidated repository. 

The idea is that we need good tools for mediation and collection of data from different places, collecting it in a staging database for trend analysis over time and connecting it via event triggering for real-time analytics. So the sources of information remain separate and, many times, isolated. 

We’re not talking here about projects of data consolidation. It may be necessary in some cases, but that’s not really the practice that we’re talking about here. We’re talking about federating, referring to external information, analyzing in the context of the logic that we want to apply, and making real-time decisions.
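
To make the federated, event-triggered pattern Ringer describes more concrete, here is a minimal Python sketch of mediation and collection: records from separate, still-isolated sources are pulled into a small staging area for trend analysis, and a real-time trigger fires when a trend turns bad. All source names, fields, and thresholds are hypothetical illustrations, not HP product interfaces.

from collections import defaultdict

# Hypothetical, separate data sources; they stay isolated and are only
# referenced through small mediation functions, never bulk-consolidated.
def read_probe_records():
    return [{"subscriber": "sub-1", "metric": "throughput_mbps", "value": 2.1},
            {"subscriber": "sub-1", "metric": "throughput_mbps", "value": 1.4}]

def read_billing_records():
    return [{"subscriber": "sub-1", "package": "basic-video"}]

# Staging area used only for trend analysis over time.
staging = defaultdict(list)

def mediate(records, key, field):
    """Collect one field from one source into the staging area."""
    for rec in records:
        staging[(rec[key], field)].append(rec.get(field))

def trending_down(subscriber, field):
    """Tiny 'trend': is the latest value below the running average?"""
    values = [v for v in staging[(subscriber, field)] if v is not None]
    return bool(values) and values[-1] < sum(values) / len(values)

mediate(read_probe_records(), "subscriber", "value")
mediate(read_billing_records(), "subscriber", "package")

if trending_down("sub-1", "value"):
    print("sub-1 throughput is trending down; raise a real-time trigger")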

Gardner: We've outlined some of the issues and challenges that are specific to CSPs. We recognize that this is extremely important to how they conduct their business, how they make their investments and how they satisfy and engage their customers. 

What does a long-term solution look like, rather than cherry picking against some of these analytics requirements? Is there a more strategic overview approach that would pay off longer term and put these organizations in a better position as they know more and more requirements will be coming their way?

Ringer: Actually we see two kinds of behaviors. The market is still young. So it's very hard to say which one will be more dominant. We see some CSPs that are coming to us with a very clear idea on what business process they want to implement and how they believe a data-driven approach can be applied to it. 

They have a clear model, a clear return on investment (ROI), and they want to go for it and implement it. Of course, they need the technology, the processes, and the business projects, but their focus is pretty much on a single use case or a variety of interrelated use cases. That’s one trend.

There’s another trend in which operators say they need to start looking at their data as an asset, as an area that they want to centralize. They want to control it in a productive manner, for security, for privacy, and for the ability to leverage it for different purposes.

Central asset

Those will typically come with a roadmap of different implementations that they would like to do via this Big Data facility that they have in mind and want to implement. But what’s more important for them is not the quickest time to launch specific processes, but to start treating the data as a central asset and to start building a business plan around it. 

I guess both trends will continue for quite a while, but we see them both in the market, sometimes even within the same company, in different organizations.

Gardner: Let's look at what harnessing Big Data can bring to an organization, whether they do it tactically or strategically. It seems to me that the business case for this is simply getting more and more pronounced and more powerful. 

Let’s look at some examples. There’s a new retail model called smart shopping that takes advantage of geolocation in the mobile tier. There’s electronic fraud detection and prevention, where they can help people protect themselves and do more commerce to gain trust on their networks. 

There are marketing benefits that can be brought back to providers of services, sellers that want to engage. And of course, it's important to track all the use patterns along the way for all of these and be able to make that data available to as many parties as possible. Tell me more about why this is essential and not something that’s likely to go away or even diminish any time soon.

Ringer: First of all, it’s essential because operators realize that they need to use the data to differentiate themselves, be more relevant to the subscribers, and to be more proactive in their behavior. They can’t continue to be a dumb pipe. They realize that. That’s clear to everybody. 

It's interesting that you mentioned those areas. Some are very similar to the way we also define this space that we’re active in. You mentioned the implementation of smart shopper, which is something that we actually did with a large North American operator in collaboration with a chain of malls in North America. 

Gardner: When we think about these really important business imperatives and how a CSP can really change their identity from being a pipe, a conduit, to being more of a rich services provider on top of communications, I can see why they’re really putting a lot of emphasis on this. 

What is it that HP is bringing to the table? What is it about HP HAVEn, in particular, that is well suited to where the telecommunications industry is going and what the requirements are?

Ringer: HP has made huge investments in the space of Big Data in general and analytics in particular, both in-house developments, multiple products, as well as acquisitions of external assets. 

Complete platform

HAVEn is now the complete platform that includes multiple best-in-class product elements based on multiple cutting-edge yet proven technologies for exploiting Big Data and analytics. Our solution for this space is pretty much based on HAVEn, expanded with specific solutions for CSP needs and a wide gallery of connectors for the external data sources that exist within the CSP space. 

In short, we’re taking HAVEn and using it for the CSP industry with lots of knowledge about what traditional CSP operators need to become next-generation CSPs. Why? 

Because we have a very large group within HP of telecom experts who interact with and leverage what we’re doing in other industries and with many of the new-age service providers like the Amazons, Googles, Facebooks, and Twitters of the world. We go a long way back in expertise in telecom -- but we combine this with forward-thinking customers and our internal visionaries in HP Labs and across our business units. 

Gardner: Just to be clear for our audience, HAVEn translates to Hadoop, Autonomy, Vertica, and Enterprise Security, along with a whole suite of horizontally and vertically integrated applications that are specific to vertical industries. Is that right?

Ringer: Exactly.

Gardner: Tell me what you do in terms of how you reach out to communications organizations. Is there something about meeting them at the hardware level and then alerting them to what these other Big Data capabilities are? Is this a cross-discipline type of approach? How do you actually integrate HP services and then take that and engage with these CSPs?

Ringer: Those things exist, like engaging at a hardware level, but those are the less common go-to-market motions that we see. The more popular ones are top-down, in the sense that we are meeting with business stakeholders who want to know how to leverage Big Data and analytics to improve their business. 

They don’t care about the data other than how it’s going to result in actionable intelligence. So, at the CSP level, it can be with marketing officers within the CSP who are looking to create more personalized services or more sticky services to increase the attention of their subscribers. They’re looking to analytics for that. 

It can be with business-development managers within the CSP organization who are looking to create models of collaboration with the Yahoos and Facebooks of the world, with retailers, or with any other participants in their ecosystem, where the CSP can bring the ability to provide the pipe, back-end hosting of services, and intelligence about how the pipe is delivering the services and what the sentiment of the customers on the other end of the pipe is. 

They want to share information of value with their customers, making those customers dependent on them in new ways that aren’t just about the pipe, and thereby gaining new revenue streams. That’s the kind of motivation they have. It can be with IT folks as well, but at the end of the day the discussion about CSP Big Data isn’t coming from the technology. It’s coming from the business people who understand that they need to do something with the data and monetize it.

Then, of course, it quickly becomes a technical discussion, but the motion is business to technology, rather than infrastructure to technology. 

Support practice

We also developed a support practice within our organization that does exactly that: business advisory workshops. It’s for stakeholders in different roles to realize what the priorities are in using Big Data. What is the roadmap that they want to implement? 

The purpose of this exercise is to quickly bring everybody into the same room, sit together for a day or two, and come out with an agreement on how to move from conventional services to more personalized services and diversify the business channels by using information and data.

Gardner: Let’s go to some examples to demonstrate what telecommunications and service provider organizations are doing to accomplish that, to become smarter in their services, to get more personalized, and leverage Big Data to do that. 

Are there use cases you can think of or anecdotes of how this is being used? Or perhaps you have some named customers that you could use to show us what they’re doing and what they are getting from their investments.

Ringer: For several years now, one large customer, Telefónica, a Latin American conglomerate, has been working with us on analytics projects to improve the quality of experience of their subscribers. 

In Latin America, most people are interested in football, and many of them want to watch it on their mobile device. The challenge is that they all want to watch it during the same 90 minutes. That’s a challenge for any mobile operator, and that’s exactly where we started a critical project with Telefónica. 

We’re helping them analyze the quality of experience. Determining the quality of the experience isn’t a very complicated thing. There are probes in the network to do that. We can pretty accurately get the quality of experience for every single video streaming session. It’s no big deal.

Analytics kicks in when you want to correlate this aggregation of quality with who the subscriber is, how the subscriber is expected to behave, and what he’s interested in. We know that the quality isn’t good enough for many subscribers during the football game, but we need to differentiate and know to which one of them we want to make an offer to upgrade his package. What’s the right offer? When’s the right time to make the offer? How many different offers do we test to zero in on the best set of offers?

We want to know which one of them we don’t want to promote anything to, but just want to make him happy. We want to give him a better quality experience for free, because he is a good customer and we don’t want to lose him. And we want to know which customer we want to come back to later, apologize, and offer him a better deal.

Real-time analytics

Based on real-time triggering of events from the network, such as degradation of quality, together with ongoing information about the subscriber -- who the subscriber is, what marketing segment he belongs to, what package he's subscribed to, and so on -- we do the analytics in real time and decide what the right action and the right move are, in order to give the best experience to the individual subscriber. 

It’s working very nicely for them. I like this example, first of all, because it’s real, but also because it shows the variety of processes we have here with correlation of real-time information with ongoing information for the subscribers. We have contextual action that is taken to monetize and to improve quality and to improve satisfaction. 

This example touches so many needs of an operator and is all done in a pretty straightforward manner. The implementation is rather simple. It’s all based on running the right processes and putting the right business process in place. But this isn’t always straightforward for enterprise customers, particularly those in the small to medium enterprise segment, so imagine what CSPs could do for their customers once they’ve gotten a handle on this for their own businesses.
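
A minimal sketch of the kind of real-time correlation just described: a quality-degradation event is joined, at decision time, with what is already known about the subscriber, and one of the three moves Ringer mentions (a paid upgrade offer, a free quality boost, or a later apology with a better deal) is chosen. The segment names, propensity threshold, and field names below are invented for illustration and are not drawn from the actual Telefónica implementation.

def choose_action(event, subscriber):
    """Correlate a real-time degradation event with subscriber context."""
    if event["type"] != "quality_degradation":
        return {"action": "none"}

    # High-value subscribers get the better experience for free, to protect
    # the relationship rather than monetize the moment.
    if subscriber["segment"] == "high-value":
        return {"action": "boost_quality", "charge": False}

    # Subscribers likely to accept an upsell get a timed upgrade offer.
    if subscriber["upsell_propensity"] >= 0.6:
        return {"action": "offer_upgrade", "package": "premium-video",
                "valid_minutes": 15}

    # Everyone else gets a follow-up apology and a better deal later.
    return {"action": "apologize_later", "followup": "discounted_offer"}

event = {"type": "quality_degradation", "session": "football-stream-42"}
subscriber = {"id": "sub-7", "segment": "standard", "upsell_propensity": 0.72}
print(choose_action(event, subscriber))   # -> a timed upgrade offer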

Gardner: It seems to me that that helps reduce the risk of a provider or their customers coming out with new services. If they know that they can adjust rapidly and can make good on services, perhaps this gives them more runway to take off with new services, knowing that they can adjust and be more agile. It seems like it really fundamentally changes how well they can do their business.

Ringer: Absolutely. It also greatly reduces the risk of investment. If you launch a new service and find out that you need to beef up your entire network, that is a major hit to your investment strategy. At the same time, if you realize that you can be very granular and very selective in your investment, you can do it much more easily and justify subsequent investments more clearly.

Gardner: Are there any other examples of how this is manifesting itself in the market -- the use of Big Data in the telecommunications industry? 

Ringer: Let me give another example in North America. This is an implementation that we did for a large mobile operator in North America, in collaboration with a chain of retail malls. 

What we did there is combine the ongoing information that the mobile operator has about its subscribers -- it knows what the subscriber is interested in, what their prior buying patterns and transactions were, and so on -- with the location information of where the individual person is in the mall. 

The mall operator runs a private wi-fi network there, so he has his own system for tracking where an individual is within the mall. He knows within two meters where a person is in the mall, and he has a map overlay of the physical mall, with all the product and service offerings mapped to the same grid.

When we know a person is in the mall, we can correlate it with what the CSP knows about this person already. He knows that the specific person has a high probability of looking for a specific running shoe. The mobile operator knows it because he tracks the web behavior of the specific individual. He tracks the profile of the specific individual, and he can tell with pretty good accuracy that this person, given the right offer, will say yes to running shoes. 

Targeted and timely

So combining these two things, the ongoing analytics of the preferences together with real-time location information, gives us the ability to push out targeted and timely promotions and coupons.

Imagine that you're in the mall and you pass next to the shoe store. Your device pops up a message that says that, right now, Nike shoes are 50 percent off for the next 15 minutes. You know that you're looking for Nike shoes, so the chance that you'll go into the store is very good, and the results are very good, because you create a “buy now or you'll miss out” feeling in the prospect. Many subscribers take the coupons that are pushed to them in this way. 

Of course, it's all based on opt-in, and of course, it's very granular in the sense that the analytics we do on subscriber information are limited to what the subscriber has opted in to let us look at. For instance, a specific person may allow us to look at his behavior on retail sites, but not on financial sites. 
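
The logic behind that kind of in-mall promotion can be summarized in a few lines. The Python sketch below combines a hypothetical interest profile, restricted to categories the subscriber has opted in to, with an indoor position fix, and emits a time-limited coupon when the two line up. The category names, radius, scores, and discount are illustrative only, not details of the actual deployment.

import math
import time

def opted_in_interests(profile):
    """Honor opt-in granularity: use only categories the subscriber allowed."""
    return {cat: score for cat, score in profile["interests"].items()
            if cat in profile["opt_in_categories"]}

def near(store, position, radius_m=10.0):
    """Rough planar distance check on the mall's indoor-positioning grid."""
    return math.hypot(store["x"] - position["x"],
                      store["y"] - position["y"]) <= radius_m

def maybe_offer(profile, position, store):
    interests = opted_in_interests(profile)
    if store["category"] in interests and interests[store["category"]] > 0.5 \
            and near(store, position):
        return {"to": profile["id"],
                "text": store["name"] + ": running shoes 50% off",
                "expires_at": time.time() + 15 * 60}   # 15-minute urgency
    return None

profile = {"id": "sub-9",
           "interests": {"athletic_footwear": 0.8, "banking": 0.9},
           "opt_in_categories": {"athletic_footwear"}}   # financial data excluded
position = {"x": 120.0, "y": 44.0}
store = {"name": "Shoe Store", "category": "athletic_footwear", "x": 118.0, "y": 41.0}
print(maybe_offer(profile, position, store))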

Gardner: Again, this shows a fundamental shift that the communications provider is not just a conduit for information, but can also offer value-added services to both the seller and the buyer -- radically changing their position in their markets. 

If I am an organization in the CSP industry and I listen to you and I have some interest in pursuing better Big Data analytics, how do I get started? Where can I go for more information? What is it that you’ve put together that allows me to work on this rather quickly?

Ringer: As I mentioned before, we typically recommend engaging in a two-day workshop with our business consultants. We have a large team of Big Data advisory consultants, and that’s exactly what they do. They understand the priorities and work together with the telecom organizations to come up with some kind of a roadmap -- what they want to do, what they can do, what they are going to do first, and what they are going to do later. 

That’s our preferred way of approaching this discipline. Overall, there are so many kinds of use cases, and we need to decide where to start. So that’s how we start. To engage, the best place to go is our website. We have lots of information there. The URL is hp.com/go/telcoBigData, that’s one word, and from there you just click Contact Us, and we’ll get back to you. We’ll take it from there. There are no commitments, but chances are very good.

Gardner: Before we sign off, I just wanted to look into the future. As you pointed out, more and more entertainment and media services are being delivered through communication providers. The mobile aspect of our lives continues to grow rapidly. And, of course, now that cloud computing has become more prominent, we can expect that more data will be available across cloud infrastructures, which can be daunting, but also very powerful. Where do you see the future challenges, and what are some of the opportunities?

Ringer: We can summarize four main trends that we’re seeing increasing and accelerating. One is that CSPs are becoming more active in enabling new business models with partnerships, collaborations, internet players, and so on. This is a major trend. 

The second trend that we see increasing quite intensively is operators becoming more like marketing organizations, promoting services of their own or on behalf of others.

The third one is more related to the operation of the CSP itself. They need to be more aware of where they invest, what their risk and probability of seeing a specific ROI are, and when that will occur. In short, Big Data and analytics will make them smarter and more proactive in making their investments. That’s another driver that increases their interest in using the data. 

Overall, they all look to become more proactive. They all realize that data is an asset, something that you need to keep handy, keep private, and keep secure, but also be able to use for a variety of use cases and processes, to be ready for the next move. 

Gardner: I am afraid we'll have to leave it there. You've been listening to a Big Data innovation discussion that highlights how the telecommunications service-provider industry is gaining new business analytics value and strategic returns through better use and refinement of their Big Data assets. And we have seen how Big Data capabilities and advanced business analytics have become essential to CSPs, especially as mobile and e-commerce drive their business's future.

This discussion marks the latest episode in the ongoing HP Big Data Podcast Series, where leading-edge adopters of data-driven business strategies share their success stories and where the transformative nature of Big Data takes center stage. 

Please join me now in thanking today’s guest, Oded Ringer, Worldwide Solution Enablement Lead for HP Communication and Media Solutions. Thank you so much, Oded.

Ringer: Thank you very much, Dana.

Gardner: To learn more about how businesses anywhere can best capture knowledge, gain deeper analysis, and rapidly and securely make those insights available to more people, visit the HP HAVEn Resource Center at hp.com/HAVEn, and for more CSP-specific Big Data information, visit hp.com/go/telcoBigData.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing sponsored journey into how data is analyzed and used to advance the way we all live and work. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how service providers are harnessing the power of data analytics to improve customer service and customer relations. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.

Monday, August 04, 2014

A Gift That Keeps Giving, Software-Defined Storage Now Demonstrates Architecture-Wide Benefits

Transcript of a BriefingsDirect podcast on the future of software-defined storage and how it will have an impact on storage-hungry technologies, especially VDI.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Our latest podcast explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage.

The ability to choose low-cost hardware, to manage across different types of storage, and to radically simplify data storage via intelligent automation means a virtual rewriting of the economics of data.

But just as IT leaders seek to simultaneously tackle storage pain points of scalability, availability, agility, and cost -- software-defined storage is also providing significant strategic- and architectural-level benefits.

We're here now with two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage. Please join me now in welcoming our guests, Alberto Farronato, the Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware. Hello, Alberto.

Alberto Farronato: Hello, Dana. Glad to be here, thanks.

Gardner: We're also here with Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Welcome, Christos.

Christos Karamanolis: Thank you. Glad to be here.


Gardner: Alberto, we often focus on the speeds and feeds and the costs -- the hard elements -- when it comes to storage and modernization of storage. But what about the wider implications?

Software-defined storage is really changing something more fundamental than just data and economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.

How does the end user adapt their storage platform to the needs they have in terms of the capabilities of the hardware, the ratios of the different types of storage, and the networking, CPU, and memory resources needed for executing and providing their service going forward?

That's one part of flexibility, but there is another very interesting part, which addresses a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), VMs being one way of packaging applications.

Today, customers virtualize their environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an investment up front in resources that they'll need over a long period. So they create those logical unit numbers (LUNs), file services, or whatever is needed, for a period of time that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.


So those two aspects of flexibility are the two fundamental aspects of any software-defined storage.

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level? Is there a marker that we can point to that says, "This is actually changing things beyond just a technology sphere and into the business sphere?"

Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to the business needs -- and the changing business needs -- by delivering what your applications need, faster.

As Christos was saying, in the old model, you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels -- performance, availability, and other things -- that would be required of your storage by your applications, spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on the fly through a policy approach -- you don't have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.
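
As a rough sketch of that policy-driven, VM-centric model (and explicitly not VMware's actual SPBM or Virtual SAN API), the Python below attaches a named policy to a VM at creation time, lets the "platform" derive the concrete layout from it, and re-checks compliance when the policy changes. The policy names, fields, and naive placement logic are all assumptions made purely for illustration.

# Hypothetical policy catalog: requirements, not hardware configurations.
POLICIES = {
    "gold":   {"failures_to_tolerate": 2, "stripe_width": 2},
    "silver": {"failures_to_tolerate": 1, "stripe_width": 1},
}

def create_vm(name, policy_name):
    """Provisioning happens at VM-creation time; no LUN is pre-carved."""
    policy = POLICIES[policy_name]
    return {"name": name, "policy": policy_name,
            "copies": policy["failures_to_tolerate"] + 1}   # tolerate 1 failure -> 2 copies

def is_compliant(vm):
    """Compliance is checked continuously against the assigned policy."""
    wanted = POLICIES[vm["policy"]]["failures_to_tolerate"] + 1
    return vm["copies"] >= wanted

vm = create_vm("web-01", "silver")
print(vm, is_compliant(vm))          # provisioned with 2 copies, compliant

# Changing requirements means changing the policy assignment, not migrating
# data by hand; the platform then brings the VM back into compliance.
vm["policy"] = "gold"
print(is_compliant(vm))              # False until the platform adds a copy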

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.

Gardner: As we gain more agility, that prompts more use of software-defined storage, or in your case, Virtual SAN. With that acceleration of adoption, we begin to see more beneficial consequences, such as better manageability of data as a lifecycle, perhaps operations being more responsive to developers so that a DevOps benefit kicks in.

Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers, but as a more generic trend, is that infrastructure administrators -- the guys who do the heavy-lifting in the data centers day in and day out, who manage much more beyond what is traditionally servers and applications -- are getting more and more into managing networks and data storage.


Talking about changing models here, what we see is that tools have to be developed and software-defined storage is a key technology evolution behind that. These are tools for those administrators to manage all those resources that they need to make their day-to-day jobs happen.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage visible for people who are not necessarily experts in the esoterics of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.

Gardner: When you go to software-defined storage, you can get to policy level, automation, and intelligence when it comes to how you're executing on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we’ll now go from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and can span tens or sometimes hundreds of physical nodes and/or entities. Isn’t complexity harder in the latter case?

The reality is that, whether because of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there is a parallel evolution of these ideas of how you manage your infrastructure, including the management of storage.

As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, and which may evolve over a period of time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.


Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether the data is spread across multiple physical devices, spread across the network, or replicated asynchronously to a remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case they need to create only two copies of the data, or whether it is low-end hardware, which would require three or four copies of the data. All those things are determined automatically by the platform.
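
To make that trade-off concrete, here is a back-of-the-envelope calculation, purely illustrative and not the formula Virtual SAN itself uses: if each independent device is available with probability p, then n copies give availability of roughly 1 - (1 - p)^n, so less reliable hardware needs more copies to reach the same target.

import math

def copies_needed(device_availability, target_availability):
    """Smallest n with 1 - (1 - p)**n >= target, assuming independent failures."""
    p_fail = 1.0 - device_availability
    return math.ceil(math.log(1.0 - target_availability) / math.log(p_fail))

# Highly available devices vs. low-end devices, same overall availability target.
print(copies_needed(0.999, 0.99999))   # -> 2 copies
print(copies_needed(0.97, 0.99999))    # -> 4 copies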

This is the new mode. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configuration of a disk array. If the requirements cannot be met, it is because the necessary technologies are not incorporated into the storage platform.

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated around the VM and the policies that you create and you assign to the VMs as you create your VMs, as you scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.

That makes this system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system, the radically simple hypervisor-converged storage means bringing that idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?


Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). So it leverages server-side components to deliver shared storage. By virtue of using server-side resources right off the bat there are significant savings that you can achieve through lower-cost hardware components. So the same hard drive or solid-state drive (SSD) that you deploy on a shared external storage array could be on the order of 80 percent cheaper.

The other aspect that I would call out that reduces the overall CAPEX cost is more along the lines of this, as you said, consume on-demand approach or, as we put it in many other terms, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would have with a monolithic array. And as you scale, you scale not only compute but also IOPS, and that often goes hand in hand with the number of VMs that you're running in your cluster.

System growth
 
So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.
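
A simple bit of arithmetic shows the shape of that grow-as-you-go argument. The node price, array price, and growth curve below are made-up numbers chosen only to illustrate the comparison; they are not actual VMware, HP, or market figures.

# Back-of-the-envelope comparison: size a monolithic array for future demand
# up front, versus adding scale-out nodes only as the VM count grows.
ARRAY_UPFRONT = 200_000          # hypothetical array sized for year-two demand
NODE_COST = 12_000               # hypothetical server node with local SSD/disks
NODES_NEEDED_BY_QUARTER = [3, 3, 4, 5, 6, 8, 10, 12]

spend = 0
deployed = 0
for quarter, needed in enumerate(NODES_NEEDED_BY_QUARTER, start=1):
    new_nodes = max(0, needed - deployed)      # buy only what this quarter needs
    spend += new_nodes * NODE_COST
    deployed = max(deployed, needed)
    print(f"Q{quarter}: {deployed} nodes deployed, cumulative spend ${spend:,}")

print(f"Scale-out total after two years: ${spend:,} vs ${ARRAY_UPFRONT:,} up front")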

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.

Gardner: Christos, Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?

I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That's a very interesting point. We technologists sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of what one customer described as being empowered to manage their own storage, in the vertical they control within their IT organization, without having to depend on the centralized storage organization in the company.


What we really see here is a shift in paradigm about how our customers use Virtual SAN today to enable them to have a much faster turnaround for trying new applications, new workloads, and getting them from test and dev into production without having to be constrained by the processes and the timelines that are imposed by a central storage IT organization.

This is a major achievement, and the major tool for VMware administrators in the field, which we believe is going to lead the way to a much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: It sounds as if there's a simultaneous decentralized benefit here, similar to what we saw 30 years ago in manufacturing. Back in the day, you used to have an assembly line approach where one linear process would lead to another, but when you do simultaneous things, you can see a lot more productivity and innovation.

Do you think that there is a parallel between software modernization and manufacturing 30 years ago?

Managing storage

Karamanolis: Certainly we have a parallel here, taking into account the fact that the customers, the IT professionals that manage storage, understand the processes and the workflows without necessarily having to understand the internals of the technology that implement those workflows.

This is very much like being part of a production line and understanding the big picture, but without having to understand all the little details of every station of that production line. In both cases, you have a fundamental scalability benefit going down that path.

I say this being fully aware that the real world is demanding. I understand that there may be situations where the IT administrator, whether a VMware admin or a storage expert, has to jump into the situation and troubleshoot something that is going wrong.

He has to troubleshoot, for example, a performance issue, or understand what's happening under the covers when the requirements specified don't seem to match what they're getting.

And what we do is we deliver, together with Virtual SAN in an integrated fashion, sophisticated monitoring and reporting tools that help customers not only understand what's happening in their system, but also do an analysis of any situation end-to-end, all the way from the application, down to the VM, the hypervisor and the resources the hypervisor assigns to those VMs, and including the storage resources that are consumed at any point in time across the cluster.


Those are the tools that always have to come together with those simple models we're introducing, because you need to be able to handle those exceptional situations.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach you have a more granular way to control the service levels that you deliver to your customers, to your internal customers, and a more efficient way to do it by standardizing through polices rather than trying to standardize service levels over a category of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving, whether it’s in compliance to that particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that will allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.

Gardner: I'm interested in hearing more examples about how this is being used. But before we go to that, there's one questions that I get a lot as an analyst.

Perhaps it's because people come from different parts of IT, or they have specializations, but people say, "We have software-defined storage, we have software-defined networking, and a highly virtualized data center, and the goal is to become a software-defined data center, but I don't necessarily understand how these come together or in what order. How do I go about that?"


Help us understand the role and impact of software-defined storage in the context of a larger software-defined data center.

Karamanolis: This is a challenging question, and I don’t know how far I can go in answering this. What we're trying to do at VMware is allow our customers to experience the various concepts of software-defined data center in a piecemeal fashion.

They can address the most acute of their problems, whether those are the traditional computer utilization questions, or more recently, whether that is a network scalability and flexibility question or a question of an easy-to-enter, low-cost storage platform. So, yes, we provide integration and fully support integration of all our software-defined aspects of the data center. That is in the three dimensions I mentioned.

We will soon be posting some demos of this working with NSX, for example. But we do not prescribe that an IT professional has to use Virtual SAN with NSX, or vice versa, and only in that way. So Virtual SAN can be used on its own, with more traditional network configurations. NSX can replace that network infrastructure, and it will work seamlessly with Virtual SAN. 

We see different paths of adoption by different customers. Some of the bigger enterprises, including financials, being more sophisticated and perhaps more forward-looking, are more aggressive with a total software-defined data center approach. Other customers are a bit more cautious and apply software-defined principles in the main areas they are concerned with.

Value proposition

Farronato: When you look at a product like Virtual SAN, one interesting finding, after the first three months that the product has been available, is that the value proposition is really resonating across pretty much all customer segments, from the smaller SMBs, all the way up to the larger enterprise customers.

While it’s difficult to comment on the exact sequence as to how software-defined data center has been deployed, it is interesting to see that a technology like Virtual SAN is resonating pretty much across all the market segments, and so it expresses a value proposition that is broadly applicable.

Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or actual organizations that we can look to and say that they've deployed Virtual SAN, they've benefited in certain ways, and they're indicative of what others should expect? 

Farronato: Let me give you some statistics and some interesting facts. We can look at some of the early examples where, in the last three months since the product became available, we've found significant success in the marketplace, with a great start in terms of adoption from our customers.


We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product. 

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model from the scale out, as well as the fact that the hyper-converged storage architecture is particularly suitable to address the storage issues of a VDI deployment.

DevOps, or if you want, preproduction environments, loosely defined as test dev, is another area. There are disaster recovery targets in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are also starting to deploy it in production use cases.

As I said, the 300 customers that we already have span the gamut in terms of size and names. We have large enterprises, banking, down to the smaller accounts and companies, including education or smaller SMBs. 

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late-August. If you look at the session list, they're already available as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN in their production environment, on their data analytics platform. And there will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS for VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that's held people up is the impact on storage and the costs associated with the storage to support VDI. But if you're able to bring down costs by 50 percent in some cases using software-defined storage, that radically changes the VDI equation. Isn't that the case, Christos? You can now say that you can do VDI more cheaply than with almost any other approach to a virtualized desktop.

Karamanolis: Absolutely, and the cost of storage is the main impediment for organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you get out of the storage.

Alberto already touched on the cost of the capacity, referring to the difference in prices one can get from server vendors and from the market, as opposed to similar hardware being procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

That means we can achieve very high performance goals while minimizing the CPU cycles consumed to serve those high I/Os per second. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every single ESXi host to implement this distributed, software-driven storage controller.


It doesn't affect the VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus using an external disk array.

And even though Virtual SAN usage is capped at 10 percent of local CPU and memory on those hosts, the consolidation ratio -- the number of virtual desktops we run on those clusters -- is virtually unaffected, while we get the full performance that is realized with an external, all-flash disk array. So this is the value of Virtual SAN in those environments.

Essentially, you get what you need, both the capacity and the performance for your VDI workloads, for a fraction of the cost you would pay with traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think we can make some sort of determination about 2014? Maybe this is the year that we turn the corner on VDI, and that VDI becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center, VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly, one of the goals we set for this Virtual SAN release was to solve the VDI use case, eliminating probably the last barrier and enabling broader adoption of VDI across the enterprise, and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference on the storage side, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, and cover some of the key initiatives that we are rolling out with our OEM partners, such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.
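To give a sense of what "policy management and dynamic composition of storage services" looks like in practice, here is a small illustrative sketch of a per-VM storage policy. The attribute names are paraphrased from Virtual SAN's publicly documented capabilities (failures to tolerate, stripe width, flash cache and space reservations); this is not VMware's actual API or schema, and the provisioning call is an invented stand-in.

# Illustrative only: the kind of per-VM policy that policy-based storage
# management expresses. Names are paraphrased capabilities, not a real schema.
gold_desktop_policy = {
    "failures_to_tolerate": 1,     # number of host/disk failures an object survives
    "stripe_width": 2,             # disk stripes per object, for read throughput
    "flash_read_cache_pct": 5,     # share of object size reserved in SSD read cache
    "space_reservation_pct": 0,    # thin-provisioned by default
}

def provision_vm(vm_name, policy):
    """Stand-in for the platform call: in a real environment the policy is
    handed to the storage layer, which places and replicates the VM's objects
    to satisfy it, and re-evaluates compliance as conditions change."""
    print(f"Provisioning {vm_name} with policy {policy}")

provision_vm("vdi-desktop-042", gold_desktop_policy)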

There are many other things, and it's gearing up to be a very exciting VMworld conference on the storage front.


Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year where the vision that we've been talking about, us and the industry at large, is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe it is going to be strong evidence for the rest of the industry that software-defined storage is real, that it is solving real-world problems, and that it is here to stay.

Together with opening up to third parties some of the management APIs that Virtual SAN uses in VMware products, through the Virtual Volumes technology that Alberto mentioned, we'll also be initiating an industry-wide effort to provide software-defined storage solutions beyond just VMware and the early adopters, mostly startups so far, that have embraced this model. It's going to become a key industry direction.

Gardner: You've been listening to a sponsored BriefingsDirect podcast discussion on how one of the most costly and complex parts of any enterprise’s IT infrastructure, storage, is being dramatically changed by the accelerating adoption of software-defined storage.

And we've heard how IT leaders are simultaneously tackling storage pain points, such as scalability, availability, agility, and cost, while also gaining significant strategic and architectural benefits through software-defined storage. Of course, probably the poster-child application for that is VDI.

So a big thank you to our guests, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure, Storage, and Availability at VMware. Thank you so much, Alberto.

Farronato: Thank you. It was great being with you.

Gardner: And we've been joined also by Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. Thanks so much, Christos.

Karamanolis: Thank you. It was a pleasure talking with you.

Gardner: And also a big thank you to our audience for joining us once again on BriefingsDirect. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening, and don't forget to come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on the future of Virtual SAN and how it will have an impact on storage-hungry technologies, especially VDI. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.


Thursday, July 31, 2014

Cloud Service Automation Eases Application Delivery for Global Service Provider NNIT

Transcript of a BriefingsDirect podcast on how cloud service automation can improve deployment of IT applications and delivery for higher efficiency.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing sponsored discussion on IT innovation and how it’s making an impact on people’s lives.

Gardner
Once again, we’re focusing on how companies are adapting to the new style of IT to improve IT performance and deliver better user experiences and business results. This time, we’re coming to you directly from the recent HP Discover Conference in Barcelona.

Our next innovation case study interview highlights how NNIT uses HP Cloud Service Automation (CSA) to improve their deployment of IT applications and data, and to provide higher overall efficiency. To learn more, we’re joined by Jesper Bagh, IT Architect and cloud expert at NNIT, based in Copenhagen. Welcome, Jesper.

Jesper Bagh: Thank you very much, Dana.

Gardner: So tell us a little about your company and what you do. Then, we’ll get into some of the problems and solutions that you've been tasked with resolving.

Bagh: NNIT is a service provider located in Denmark. We have offices around the world, in China, the Philippines, the Czech Republic, and the United States. We have 2,200 employees globally, and we're a subsidiary of Novo Nordisk, the pharmaceutical company known for making insulin.

Bagh
Gardner: IT Architect, that’s an interesting title. Tell us what you do and what you were doing before you achieved that rank. What are your job responsibilities?

Bagh: My responsibility is to ensure that the company's business goals can be delivered through functional requirements, and to turn those functional requirements into projects that the organization can deliver.

Gardner: I know that the IT architect and cloud architect individuals are in high demand in a lot of companies. Tell us how you’ve evolved your thinking toward a cloud deployment, and explain how you are using HP CSA to accomplish that.

Full suite

Bagh: We embarked on CSA together with HP back in 2010. Back then, CSA consisted of many different software applications; it wasn't really a complete product. Now, it's a full suite of software.

It has helped us show our internal groups -- and our customers -- that we have services in the cloud. For us, it has been a tremendous journey to show that you can deliver these services fully automatically and, by running them well, gain great efficiency.

Gardner: And has the ability to be more service-oriented in your cloud activities filtered back into more of IT? Are you extending this thinking about service, catalog, and delivery into other aspects of IT, in addition to cloud?

Bagh: We’re a wall-to-wall, full-service provider. So we provide both application development management and infrastructure outsourcing. Cloud is just one aspect that we’re delivering services on. Before we did the cloud project, we started off by doing service-portfolio management and cataloging of our services, trying to standardize the services that we have on the shelf ready for our customers.

That allowed us to put offerings into the cloud, and to show the process of standardizing services, doing cloud well, and focusing on dedicated customers. We still have customers using our facility management who are not able to leverage cloud services because of compliance or regulatory demands.

We have more than 10,000 services in our data centers. We're now trying to broaden the capabilities of cloud delivery to the rest of the infrastructure so that we gain a more competitive edge. We're able to deliver better quality, and the end users -- at the end of the day -- get their services faster.

Gardner: Has this clearly benefited your speed-to-value when it comes to new applications? How do your development, test, and automation people react to this?

Bagh: The adoption of automation is an ongoing journey. I imagine other companies have also had the opportunity of adapting to a new breed of software, and a new life in automation and orchestration. What we see is that the traditional operations divisions now suddenly have developers trying to comprehend what they mean, and the challenge is to have them work together to deliver operations automatically.

Back in the good old days, developers were in one silo, and operations were in another silo. Now, we see a mix of resources -- both in operations and in development. So the organizational change management that derives from automation projects is key. When we did service cataloging and service-portfolio management, we started with organizational change to see whether this could fit into our vision.

Gardner:  Now, a lot of people these days like to measure things. It’s a very data-driven era. Have you been able to develop any metrics of how your service automation and cloud-infrastructure developments have shown results, whether it’s productivity benefits or speeds and feeds? Have you measured this as a time-to-value or a time-to-delivery benefit? What have you come up with?

Value-add

Bagh: As part of the cloud project, we did two things. We did infrastructure as a service (IaaS), but we also did a value add on IaaS. We were able to deliver fully compliant, qualified IaaS to the life-sciences industry. With traditional infrastructure, it would have taken us weeks or months to deliver those servers because of all the process work involved. When we did CSA and the GxP Cloud, we were able to deliver the same server within a matter of hours. So that's a measurable efficiency that is highly recognized.
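The "weeks to hours" change is easier to see if you sketch what a catalog-driven, pre-qualified offering encodes. The example below is purely illustrative and is not NNIT's environment or HP CSA's actual API; the offering name and its steps are invented. The idea it captures is that the compliance work is designed into the catalog item once, so every subsequent request simply executes.

from dataclasses import dataclass, field

# Illustrative sketch of a catalog-driven, compliant IaaS offering.
# Offering and step names are invented; this is not HP CSA's API.
@dataclass
class CatalogOffering:
    name: str
    qualification_steps: list = field(default_factory=list)  # pre-approved, automated

gxp_server = CatalogOffering(
    name="Qualified IaaS server (GxP)",
    qualification_steps=[
        "deploy hardened, validated template",
        "record installation qualification evidence",
        "register the server in the CMDB",
        "attach audit trail to the request",
    ],
)

def fulfill(offering: CatalogOffering, requester: str):
    print(f"Request from {requester}: {offering.name}")
    for step in offering.qualification_steps:
        print(f"  automated: {step}")
    print("  delivered in hours, because the process work was done up front")

fulfill(gxp_server, "life-sciences customer")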

Gardner: For other organizations that are also grappling with these issues and trying to cross organizational and silo boundaries to improve collaboration, do you have any words of advice? Now that you've been doing this for some time, and at that key architect level, which I think is really important, what thoughts could you share with others -- lessons learned, perhaps?

Bagh: The lesson learned is that having senior management focus on the entire process is key. Getting the organization to recognize this is a matter of change management. So communication is key. Standardization before automation is key.

You need to start by standardizing your services, doing the real architectural work, identifying which components you have and which you don't, and matching them up. It's like assembling the Lego blocks in order to build the house. That's key. The parallel I always use is that there is nothing different for me as an IT architect than there is for an architect building a house.

Gardner: Looking to the future, are there other aspects of service delivery, perhaps ways in which you could gather insights into what's happening across your infrastructure and the results that end users are seeing through the applications? Do you have any thoughts about where the next steps might be?

Bagh: The next step for us is to be more transparent to our customers. The vision now is that we can deliver services fully automatically and run them semi-automatically. They will still do funny things from time to time that you need to keep your eyes on. But in order for us to show the value, we need to report on it.

The next step for us is to be more proactive than reactive in our monitoring and reporting capabilities, because we want to be more transparent to our customers. We have a policy called Open and Honest Value-Adding. From that, we want to show our customers that if we can deliver a service fully automatically and standardized, they know what they get because they see it in a catalog. Then, we should be able to report on it live for the users.

Gardner: Very good. I'm afraid we'll have to leave it there. We've been learning about how NNIT is improving the delivery and performance of applications through the use of important cloud-service automation technologies.

Gardner: So a big thank you to our guest, Jesper Bagh, IT Architect and Cloud Expert at NNIT, based in Copenhagen. Thank you so much.

Bagh: Thank you, Dana.

Gardner: And thank you too to our audience for joining this special new style of IT discussion coming to you directly from the HP Discover 2013 Conference in Barcelona.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP Sponsored Discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how cloud service automation can improve deployment of IT applications and delivery for higher efficiency. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
