
Tuesday, August 30, 2016

Loyalty Management Innovator Aimia's Transformation Journey to Modernized and Standardized IT

Transcript of a discussion on how improving end user experiences and using big data analytics helps head off digital disruption and improve core operations.


Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it's making an impact on people's lives.

Our next digital business transformation case study examines how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure. As a result of rapid growth and myriad acquisitions, Montreal-based Aimia is in a leapfrog mode -- modernizing applications, consolidating data centers, and adopting industry standard platforms.

We'll now learn how improving end-user experiences and leveraging big data analytics helps IT organizations head off digital disruption and improve core operations and processes.

To describe how Aimia is entering a new era of strategic IT innovation, we're joined by André Hébert, Senior Vice President of Technology at Aimia in Montreal. Welcome, André.
Hébert: Thank you.

Gardner: What are some of the major drivers that have made you seek a common IT strategy? And tell us about your organization and why having a common approach is now so important.

Hébert: If you go back in time, Aimia grew through a series of acquisitions. We started as Aeroplan, Air Canada's frequent-flyer program, and decided to go into the loyalty space. That was the corporate strategy all along. We acquired two major companies, one in the UK and one US-based, which gave us a global footprint. As a result of these acquisitions, we ended up with quite a large IT footprint worldwide and wanted to look at ways of globalizing and consolidating it.

Gardner: For many people, when they think of a loyalty program, it's frequent flyer miles, perhaps points at a specific retail outlet, but this varies quite a bit market to market around the globe. How do you take something that's rather fractured as a business and make it a global enterprise?

Hébert: We've split the business into two different business units. The first one is around coalition loyalty. This is where Aimia actually runs the program. Good examples are Aeroplan in Canada or Nectar in the UK, where we own the currency, we operate the program, and basically manage all of the coalition partners. That's one side.

The other side is what we call our global loyalty solutions. This is where we run loyalty programs for other companies. Through our standard technology, we set up a technology footprint within the customer site or preferably in one of our data centers and we run the technology, but the program is often white-labeled, so Aimia's name doesn't appear anywhere. We run it for banks, retailers and many industry verticals.

Almost like money

Gardner: You mentioned the word currency, and as I think about it, loyalty points are almost like money -- it is currency -- it can be traded, and it can be put into other programs. Tell us about this idea. Are you operating almost like a bank or a virtual currency trader of some sort?

Hébert: You could say that the currency is like money. It is accumulated, and our systems are very similar to bank-account systems. The debit and credit transactions mimic the accumulation and redemption transactions that our members make.
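
To make the bank-account analogy concrete, here is a minimal sketch of a points ledger where accumulation is a credit and redemption is a debit. All names, fields, and the overdraft-style balance check are illustrative assumptions, not a description of Aimia's actual systems.

```python
# A minimal sketch of the bank-account analogy: a member's points balance is a
# ledger of credit (accrue) and debit (redeem) transactions. Hypothetical names.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Transaction:
    kind: str        # "accrue" (credit) or "redeem" (debit)
    points: int
    partner: str     # e.g., an airline or retail coalition partner
    at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class MemberAccount:
    member_id: str
    ledger: list[Transaction] = field(default_factory=list)

    def accrue(self, points: int, partner: str) -> None:
        self.ledger.append(Transaction("accrue", points, partner))

    def redeem(self, points: int, partner: str) -> None:
        if points > self.balance():
            raise ValueError("insufficient points")  # like an overdraft check
        self.ledger.append(Transaction("redeem", points, partner))

    def balance(self) -> int:
        # The balance is derived from the transaction history, not stored.
        return sum(t.points if t.kind == "accrue" else -t.points
                   for t in self.ledger)

account = MemberAccount("member-001")
account.accrue(500, "air-canada")
account.redeem(200, "nectar-retailer")
assert account.balance() == 300
```

The design point the analogy captures is that the balance is never stored directly; it is derived from the transaction history, which is why transactional integrity matters so much in the systems discussed next.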

Gardner: That's pretty important when it comes to transactions, making sure integration works among systems. Let's look at this from the perspective of your challenge. As you say, you came together through a lot of acquisitions. What's been your challenge from an IT perspective to allow your company to thrive in this digital economy, given that there is a transactional integrity issue, but also a lot of disparity in terms of the types of systems and heterogeneity in systems?

Hébert: Our biggest challenge was how large the technology footprint was. We still operate many dozens of data centers across the globe. The project with HPE is to consolidate all of our technology footprint into four Tier 3 data centers that are scattered across the globe to better serve our customers. Those will benefit from the best security standards and extremely robust data-center infrastructure. 

On the infrastructure side, it's all about simplifying, consolidating, virtualizing, and leveraging the cloud, but in a virtual private way, so that we also keep our data very secure. That's on the infra side.

On the application side, we probably have more applications than we have customers. One of the big drivers there is that we have a global product strategy. Several loyalty products have now been developed. We're slowly migrating all of our customers over to our new loyalty systems that we've created to simplify our application portfolios. We have a large number of applications today, and the plan is to try to consolidate all these applications into key products that we've been developing over the last few years.

Gardner: That’s quite a challenge. You're modernizing and consolidating applications. At the same time, you're consolidating and modernizing your infrastructure. It reminds me of what HPE did just a few years ago when it decided to split and to consolidate many data centers. Was that something that attracted you to HPE, that they have themselves gone through a similar activity?

Hébert: Yes, that is one of the reasons. We've shopped around for a partner that can help us in that space and we thought that HPE had the best credentials, the best offer for us to go forward. 

Virtual Private Cloud (VPC), a solution they have offered, is innovative while remaining virtual and private. So we feel that our customers' data will be significantly more secure than it would be in just any public cloud.

Gardner: Other key issues for you are data privacy and security. Again, if this is like currency, if transactions are involved and you're also dealing with multiple markets, different regulatory agencies, and different regulatory environments, that's another complication. How is consolidating applications and modernizing infrastructure at the same time helping you to manage these compliance and data-protection issues?

Raising the bar

Hébert: The modernization and infrastructure consolidation is, in fact, helping greatly in continuing to secure data and meet ever more demanding security standards, such as PCI DSS 3.0. Through this process, we're going to raise the bar significantly on data privacy.

Gardner: André, a lot of organizations don't necessarily know how to start. There's so much to do when it comes to apps, data, infrastructure modernization and, in your case, moving to VPC. Do you have any thoughts about how to chunk that out, how to prioritize, or are you making this sort of a big bang approach, where you are going to do it all at once and try to do it as rapidly as possible? Do you have a philosophy about how to go about something so complex?

Hébert: We've actually scheduled the whole project. It's a three-year journey into the new HPE world. We decided to attack it by region, starting with Canada and the US in North America. Then we move on to Asia-Pacific, and the last phase of the project is Europe. We decided to go geographically.
The program is run centrally from Canada, but we have boots on the ground in all of those regions. HPE has taken the lead in the actual technical work. Aimia does the support work, providing documentation and helping with all of the intricacies of our systems and the infrastructure, but it's a co-led project, with HPE doing the heavy lifting.

Gardner: Something about costs comes to mind when you go standard. Sometimes there are upfront costs, a hurdle you have to leapfrog, but your long-term operating costs can be significantly lower. What is it about the cost structure? Is it the standardized infrastructure platforms, cheaper hardware, open-source software, all of the above? How do you factor this as a return on investment (ROI) type of equation?

Hébert: It’s all of the above. Because we're right in the middle of this project, it will allow us to standardize and evergreen a lot of technology that was getting older. Many of our servers were aging, so we're giving the infrastructure a shot in the arm as far as modernization.

From a VPC point of view, we're going to leverage this internal cloud much more significantly. From a CPU point of view, and from an infrastructure point of view, we're going to have significantly fewer physical servers than what we have today. It's all operated and run by HPE. So, all of the management, all of the ITO work is done by HPE, which means that we can focus on apps, because our secret sauce is in apps, not in infrastructure. Infrastructure is a necessary evil.

Gardner: That brings up another topic, DevOps. When you're developing, modernizing, or having a continuous-development process for your applications, if you have that cloud and infrastructure in place and it’s modern, that can allow you to do more with the development phase. Is that something you've been able to measure at all in terms of the ability to generate or update apps more rapidly?

Hébert: We're just dipping our toe into advanced DevOps, but definitely there are some benefits around that. We're currently focused on trying to get more value from that.

Gardner: When you think about ROI, there are, of course, those direct costs on infrastructure, but there are ancillary benefits in terms of agility, business innovation, and being able to come to market faster with new products and services. Is that something that is a big motivator for you and do you have anything to demonstrate yet in terms of how that could factor?

Relationship 2.0

Hébert: We're very much focused right now on what I would call Relationship 1.0, but HPE was selected as a partner for its ability to innovate. HPE is also in a transition phase, as we all know. So while the heavy lifting gets done, we're also looking ahead to innovation and new projects with HPE. We actually call that Relationship 2.0.

Gardner: For others who are looking at similar issues -- consolidation, modernization, reducing costs over time, leveraging cloud models -- any words of advice now that you are into this journey as to how to best go about it or maybe things to avoid?

Hébert: When we first looked at this, we thought that we could do a lot of that consolidation work ourselves. Consolidating 42 data centers into 4 is a big job, and where HPE helps in that regard is that they bring the experience, they bring the teams, and they bring the focus to this. 

We probably could have done it ourselves, but it probably would have cost more and taken longer. One of the benefits I also see is that HPE manages thousands and thousands of servers. With their ability to automate all of the server management, they've taken it to a level that, as a small company, we couldn't afford to reach on our own.

Gardner: Before we close out, André, looking to the future -- two, three, four years out -- when you've gone through this process, when you've gotten those modern apps and they are running on virtual private clouds and you can take advantage of cloud models, where do you see this going next? 

Do you have some ideas about mobile applications, about different types of transactional capabilities, maybe getting more into the retail sector? How does this enable you to have even greater growth strategically as a company in a few years?

Hébert: If you start with the cloud, the world is about to see a very different cloud model. If you fast forward five years, there will be mega clouds, and everybody will be leveraging these clouds. Companies that actually purchase servers will be a thing of the past. 

When it comes to mobile, Aimia's strategy is clearly very focused. The world is going mobile, and most apps will require mobile support. If you look at analytics, we have a whole other business that focuses on analytics. Loyalty is all about making all this data make sense, and there's a ton of data out there. We have a business unit that specializes in big data and advanced analytics as they pertain to consumers, and for us it's clearly a very strategic area that we're investing in significantly.

Gardner: Getting all your i’s dotted and t's crossed in the infrastructure can pay huge dividends for years to come, especially, as you say, when you can focus more on the analytics, on the applications, on your business model, and less on server maintenance.

Hébert: That’s correct.

Gardner: I'm afraid we'll have to leave it there. We've been learning how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure. We've heard how improving end-user experiences and using big data analytics helps head off digital disruption and improve core operations.

So please join me in thanking our guest, André Hébert, Senior Vice President of Technology at Aimia in Montreal. Thank you, André.
Hébert: Thank you very much.

Gardner: And I'd like to thank our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how improving end user experiences and using big data analytics helps head off digital disruption and improve core operations. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Friday, June 03, 2016

Catbird CTO on Why New Security Models are Essential for Highly Virtualized Data Centers

Transcript of a BriefingsDirect discussion on how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer interview series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT transformation and innovation -- and how that's making an impact on people's lives.

Our next hybrid-computing case study discussion explores how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. Just as next-generation data centers and private clouds are gaining traction, security threats are on the rise -- and attack techniques are becoming more sophisticated.

Are yesterday’s perimeter-based security infrastructure methods up to the task? Or are new approaches needed to gain policy-based control over all virtual assets at all times?

Here to explore the future of security for virtual workloads is Holland Barry, CTO at Catbird in Scotts Valley, California. Welcome, Holland.

Holland Barry: Thank you. Good to be here.
Gardner: Tell us why it’s a different picture nowadays when we look at data centers and private clouds. Oftentimes, people think similarly about security -- just wrap a firewall around it and you're okay. Why isn’t that the case? What’s new?

Barry: As we've introduced many layers of abstraction into the data center, trying to adapt physical appliances that don't move around as fluidly as the workloads they're protecting has become an issue. And as people virtualize more and we move toward the notion of a software-defined data center (SDDC), it has proven a challenge to keep up, and we know that the layer on the perimeter alone is probably not sufficient anymore.

Gardner: It also strikes me that it's a moving target: virtual workloads come and go. You want elasticity. You want to be able to have fit-for-purpose infrastructure, but that's also a challenge when you can't keep track of things and therefore can't secure them.

Barry: That’s absolutely right. The transient nature of the workloads themselves makes any type of rigid enforcement from a single device pretty tough. So you need something that was built to be fluid alongside those dynamic workloads.

Gardner: And I suppose, too, that enterprise architects that are putting more virtualization together across the data center, the SDDC, aren’t always culturally aligned with the security folks. So you have more than just a technology issue here. Tell us what Catbird does that goes beyond just the technology, and perhaps works toward a cultural and organizational benefit?

Greater skill set

Barry: Even just from an interface standpoint or trying to create a tool that can cater to those different administrative silos, you have people who have virtualization expertise, compute expertise, and then different security practice expertise. There are many slim lanes within that security category, and the next generation set of workloads in the hybrid IT environment is going to demand more of a skill set that can span all those domains. 

Gardner: We talk a lot about DevOps and SecOps combining. There's also this need for automation and orchestration. So policy-based seems to be really the only option to keep up with the speed on security. 

Barry: That’s exactly right. There has to be an application-centric approach to how you're applying security to your workloads. Ideally that would be something that could be templatized or defined up front. So as new workloads present themselves in the network, there's already a predetermined way that they're going to be secured and that security will take place right up against the edge of that workload.
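
As a rough sketch of that templatized, application-centric idea, here is what a predetermined policy lookup might look like. The template names, fields, and discovery hook are all hypothetical, invented for illustration rather than taken from Catbird's product.

```python
# A sketch of "templatized" security policy: policy is defined up front per
# application tier, and any new workload that appears is matched to a template
# and secured automatically at its own edge. All names are assumptions.
POLICY_TEMPLATES = {
    "web-tier": {"allow_ports": [80, 443], "monitor": True, "quarantine_on_violation": True},
    "db-tier":  {"allow_ports": [5432],    "monitor": True, "quarantine_on_violation": True},
    "default":  {"allow_ports": [],        "monitor": True, "quarantine_on_violation": False},
}

def on_workload_discovered(workload: dict) -> dict:
    """Hook called when a new VM presents itself on the network."""
    template = POLICY_TEMPLATES.get(workload.get("tier", ""), POLICY_TEMPLATES["default"])
    # Security is applied at the workload's edge, with no waiting for a human
    # to reconfigure a central perimeter appliance.
    return {"workload": workload["name"], "policy": template}

print(on_workload_discovered({"name": "app-vm-42", "tier": "web-tier"}))
```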

Gardner: Holland, tell us about Catbird, what you do, how you're deployed, and how you go about solving some of these challenges.

Barry: Catbird was born and raised in virtualized environments, and we've been around for a number of years. The idea was to bring the perimeter and the control landscape closer to the workload, via hypervisor integration and also via virtual data-path integration. So we have a couple of different vantage points from within the fabric, and we apply security with a purpose-built solution that can span multiple platforms.

So that hybrid IT environment, which is becoming a reality, may have a little bit of OpenStack, it may have a little bit of VMware. Having that single point of policy definition and enforcement is going to be critical to people adopting and really taking the next leap to put a layer of defense in their data center.

Gardner: How are you deployed? Are you a software appliance yourself -- virtualized software?

Barry: Exactly right. Our solutions are comprised of two components, and it’s a very basic hub-and-spoke architecture. We have a policy enforcement point, a virtual machine (VM) appliance that installs out on each hypervisor, and we have a management node that we call the Control Center. That’s another VM, and those two components talk together in a secure manner. 
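
Here is a minimal sketch of that hub-and-spoke arrangement. The class names, the broadcast-style push, and the rule format are assumptions for illustration, not Catbird's actual protocol; it simply shows one management hub distributing policy to an enforcement-point VM per hypervisor, keyed to the vNIC.

```python
# A sketch of the hub-and-spoke model: a Control Center (hub) pushes policy to
# enforcement-point VMs (spokes), one per hypervisor. Hypothetical names.
class EnforcementPoint:
    def __init__(self, hypervisor_id: str):
        self.hypervisor_id = hypervisor_id
        self.rules: dict[str, dict] = {}   # keyed by vNIC UUID

    def apply(self, vnic_uuid: str, rule: dict) -> None:
        # Enforcement happens at the edge of the workload itself.
        self.rules[vnic_uuid] = rule

class ControlCenter:
    def __init__(self):
        self.spokes: list[EnforcementPoint] = []

    def register(self, spoke: EnforcementPoint) -> None:
        # In the real product the two components talk over a secured channel;
        # here it's just a list.
        self.spokes.append(spoke)

    def push_policy(self, vnic_uuid: str, rule: dict) -> None:
        # Simplification: broadcast to every spoke. A real controller would
        # target only the hypervisor currently hosting that vNIC.
        for spoke in self.spokes:
            spoke.apply(vnic_uuid, rule)

hub = ControlCenter()
hub.register(EnforcementPoint("esxi-host-1"))
hub.register(EnforcementPoint("kvm-host-2"))
hub.push_policy("vnic-9f3a", {"allow_ports": [443], "log": True})
```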

Gardner: What’s a typical scenario where, in this type of east-west-traffic virtualization environment, security works better, and how does it protect? Are there some examples that demonstrate where the perimeter approach breaks down but your model gets the task done?

Doing enforcement

Barry: I think that anytime that you need to have the granularity of not only visibility, but enforcement -- I'm going to get a little technical here -- down to the UUID of the vNIC, that smallest unit of measure as it relates to a workload, that’s really where we shine, because that’s where we do our enforcement. 

Gardner: Okay. How about partnerships? Obviously you're working in an environment where there are a lot of different technologies, lots of moving parts. What’s going on with you and HPE in terms of deployment, working with private cloud, operating systems, and then perhaps even moving toward modeling and some of the HPE ArcSight technology?

Barry: We have a number of different integration points inside HPE’s portfolio. We're a Helion-ready certified partner. We just announced our support for the 2.0 Helion OpenStack release.
We're doing a lot of work with the ArcSight team in terms of getting very detailed event feeds and visibility into the virtualized workloads.

And we just announced some work that we are doing with HPE’s HPN team around their software-defined networking (SDN) VAN Controller as well, extending Catbird’s east-west visibility into the physical domain, leveraging the placement of the SDN controller and its command over the switches. So it’s pretty exciting work there.

Gardner: Let’s dig into that a bit -- the SDN advances that are going on and how they're changing how people think about deployment and management of infrastructure and data centers. Doesn't this give you a significant boost in the way that you can engage with security, and intercept and stop issues before they propagate? What is it about SDN that is good for security?

Barry: As the edges of what have traditionally been rigid network boundaries become fluid as well, knowing the state of the network, and knowing the state of the workload, is going to be critical to applying those traditional security controls. So we're really trying to tie all this together -- not only with our integration with Helion, but also by utilizing the knowledge that the SDN controller has of the data path. We can surface indications of compromise and maybe get you to a problem a little bit quicker than traditional methods.

Gardner: I always like to try to show and not just tell. Do you have any examples of organizations that are doing this, what it has done for them, and why it’s a path to even greater future benefits as they further virtualize and go to even larger hybrid environments?

Barry: Absolutely. I can't name names, but one of the largest US telcos is one of our customers. They came to us to solve the problem of consistent policy definition and enforcement across those hybrid platforms -- across VMware and OpenStack workloads.

That's not only for the application of the security controls and the visibility of the traffic, but also for evidence and assurance of compliance -- being able to map back to regulatory frameworks and things like that.

Agentless fashion

There are a couple of different use cases in there, but it’s really that notion where I can do it in an agentless fashion, and I think that’s an important thing to differentiate and point out about our solution. You don’t have to install an agent within the workload. We don’t require a presence inside the OS.

We're doing it just outside of the workload, at the hypervisor level. It’s key that we have the specific tailored integrations to the different hypervisor platforms, so we can abstract away the complexity of applying the security controls where you just have a single pane of glass. You define the security policy and it doesn’t matter which platform you're on, it’s going to be able to do it in that agentless fashion.
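
A sketch of that single-pane-of-glass abstraction, assuming a hypothetical adapter interface per hypervisor platform. The adapter internals (dvSwitch filters, Neutron port rules) are only hinted at in the print statements; none of this is Catbird's actual API.

```python
# One policy definition, with per-hypervisor adapters that translate it for
# each platform -- so no agent is installed inside the guest OS. Hypothetical.
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    @abstractmethod
    def enforce(self, vnic_uuid: str, policy: dict) -> None: ...

class VMwareAdapter(HypervisorAdapter):
    def enforce(self, vnic_uuid: str, policy: dict) -> None:
        print(f"[vmware] programming virtual-switch filter for {vnic_uuid}: {policy}")

class OpenStackAdapter(HypervisorAdapter):
    def enforce(self, vnic_uuid: str, policy: dict) -> None:
        print(f"[openstack] updating network port rules for {vnic_uuid}: {policy}")

def apply_everywhere(adapters: list[HypervisorAdapter],
                     vnic_uuid: str, policy: dict) -> None:
    # The "single pane of glass": one call, regardless of platform.
    for adapter in adapters:
        adapter.enforce(vnic_uuid, policy)

apply_everywhere([VMwareAdapter(), OpenStackAdapter()],
                 "vnic-9f3a", {"allow_ports": [443]})
```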

Gardner: Of course, the march of technology continues, and we're not just dealing with virtualization. We're now talking about containers, micro-services, composable infrastructure. How will your solution, in conjunction with HPE, adapt to that, and is there more of a role as you get closer to the edge, even out into the Internet of Things (IoT), where we're talking about all sorts of more discrete devices really extending the network in all directions?

Barry: As the workload types proliferate and we get fancier about how we virtualize, whether it’s using a container or a virtualization platform, and then the vast amount of IoT devices that are going to present themselves, we're working closely with the HPE team in lockstep as mass adoption of these technologies happens.

We have plans in place to solve it platform by platform. We take an approach where we look at each specific problem and ask how we're going to attack it, while keeping the bigger vision: you stay in that same console, and the method by which you apply security stays the same.

Containers are a great example -- something that we know we need to tackle, and something that's being adopted faster than anything else I've seen. That's a pretty exciting one. But at the end of the day, it's a way of virtualizing a service or microservices. We're aware of it, and I think our method of applying security controls is going to be the one that wins.

Gardner: Pretty hard to secure a perimeter when there really isn’t a perimeter.

Barry: Perimeter is quickly fading, it seems.
Gardner: OK, we'll have to leave it there. We've been exploring how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. And we have seen how policy-based control over all virtual assets provides greater protection and management for next-generation data centers. So a big thank you to our guest, Holland Barry, CTO at Catbird. Thank you, Holland.

Barry: Pleasure to be here. Thank you.

Gardner: And a big thank you to our audience as well for joining us for this Hewlett Packard Enterprise Voice of the Customer interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a BriefingsDirect discussion on how increased virtualization across data centers translates into the need for new approaches to security, compliance, and governance. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.


Friday, March 07, 2014

Fast-Changing Demands on Data Centers Drive Need for Automated Data Center Infrastructure Management

Transcript of a BriefingsDirect discussion on how organizations need to better manage the impact that IT and big data now have on data centers and how Data Center Infrastructure Management helps.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving the management and automation of data centers. As data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical -- at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

We’re here to pose such questions to two experts from HP Technology Services, and thereby explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

Here now to explain how disparate data-center resources can be integrated into broader enterprise management capabilities and processes is Aaron Carman, HP Worldwide Critical Facilities Strategy Leader. Welcome to BriefingsDirect, Aaron. [Learn more about DCIM.]

Aaron Carman: It's a pleasure to be here. Thank you.

Gardner: We’re also here with Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. Welcome, Steve.

Steve Wibrew: Hello, and glad to be here. Thank you.

Gardner: Aaron, let me start with you. From a high level, what’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: It's a very interesting question that people are actually trying to deal with. What it comes down to is that in the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure.

Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple not only with the traditional silos that exist between facilities, IT, and the business, but also with how they're going to host and manage hybrid service delivery and what impact that's going to have on their environment.

It’s not only about the impact of rolling out new infrastructure solutions, like converged infrastructures from multiple vendors, but also about how to provide increasing flexibility and deliver services to end users as digital services.

It's become much more complex and a little bit harder to manage, because there are many separate tools used to manage these environments, and their number has continued to increase.

Gardner: Steve, do you have anything more to offer in terms of how the function of IT is changing? I suppose that with ITIL v3 and more focus on a service-delivery model, even the goal of IT has changed.

Wibrew: That's very true. We're seeing a change in the role of IT in the business. Previously, IT was a cost center, an overhead to the business that delivered the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations would simply cease to function. So IT, its availability and performance, is a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure? We're getting reams and reams of data that need to be used and managed. Is that part of what you're dealing with as well -- the idea that you can analyze in real time what all of your systems are doing and then leverage that?

Wibrew: That’s certainly a very important part of the converged-management solution. There's been a tremendous explosion in the amount of data, the amount of management information, that's available. If you narrow that down to the management information associated with operating, managing, and supporting data centers -- from the facility to the applications to the platforms, right up to the services to the business -- clearly that's a huge amount of information that's collected and maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, people still remain in isolated, siloed teams without good interaction between the different teams. It's a challenge to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word "silo" now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations, when faced with trying to manage these different areas -- facilities, IT, and service delivery -- have come up with their own set of run books, processes, tools, and methodologies for operating their data center.

When you put that onto an organization, it's just an added burden to try to get vendors to work with one another and integrate software tools and solutions. What the folks who provide these solutions have started to realize is that there needs to be interoperability between these tools. There has never really been a single tool that could provide that, until what has emerged in the past few years: DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.
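
As an illustration of what "backbone for operational data" could mean in practice, here is a minimal publish/subscribe sketch in which facilities tools publish readings and ITSM or infrastructure tools subscribe. The topic names, payloads, and handlers are invented for the example, not drawn from any specific DCIM product.

```python
# A sketch of DCIM as an operational-data backbone: a simple event bus between
# facilities tools (publishers) and ITSM/infrastructure tools (subscribers).
from collections import defaultdict
from typing import Callable

class DcimBus:
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(event)

bus = DcimBus()
# An ITSM tool watches facility alerts so it can open an incident ticket.
bus.subscribe("facility.power", lambda e: print("ITSM: open incident", e))
# A facilities monitoring tool publishes a PDU threshold breach.
bus.publish("facility.power", {"pdu": "pdu-07", "load_pct": 92, "rack": "r12"})
```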

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?

Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you're going to actually use within your environment to ensure that the type of functionality you're trying to achieve is interoperable with existing management tools.

Gardner: Steve, do you have anything more to offer in terms of where this is going, perhaps over time on that services delivery question? [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they're even going out to external cloud providers -- some interesting dynamics that really do increase the complexity of where they get services from. This needs to be baked into the converged solution, around the interoperability and interfacing between multiple systems. So IT is truly a business supporting the organization and providing end-to-end services.

Gardner: Well, I can certainly see why IDC recently named 2014 the year of DCIM. It seems that the timing now is critical. If you let your systems languish in legacy status for too long, you won't be able to keep up with the new demand. If you don't create management-of-management capabilities, you won't be able to cross these boundaries of service delivery and hybrid models, and you certainly won't be able to exploit the analysis of all that data.

So it seems to me that this is really the time to get on this, before you lose ground or can't keep up with modern requirements. What's happening right now in terms of HP and how it's trying to help organizations attain this sooner rather than later? Let me start with you, Aaron.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point it's viewed more as a facilities-type tool. The approach from different DCIM providers varies greatly in the functions and features their tools provide. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically: not only which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take for hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT; it includes facilities, hybrid service-delivery models, and management capabilities. Maybe you could help us put the proper box around DCIM. How far does it go, and why? Or should we narrow it so that it doesn't become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.

DCIM has really been the domain of the facilities team, and there's traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to a lot of IT specialists, awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions do vary, and the extent of what DCIM delivers will vary from one product to another. It's certainly very good around the facilities space in terms of power, cooling, and knowing what's out on the data center floor. It's very good at knowing what's in the rack and how much power and space have been used within the rack.

It’s very good at cable management, the networks, and for storage and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well. So it’s becoming very aware of things like server infrastructure and even down to the virtual infrastructure, as well, getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realize the importance of integrating it with full ITSM solutions and platform-management solutions. A major focus over the past few months has been to make sure that DCIM solutions integrate very well with the wider IT service-management solutions, to provide that integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Gardner: Aaron, when I hear Steve giving this more inclusive description of DCIM, it occurs to me that this isn't something you buy in a box. This is not just a technology or a product that we're talking about. We're talking about methodology. We're talking about consulting, expertise, and tribal knowledge that's shared. Maybe you could help us better understand not only HP's approach to this, but how one attains DCIM. What is the process by which one becomes an expert in this? [Learn more about DCIM.]

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be careful about calling folks DCIM specialists. We feel that we have very good knowledge of the solutions out there, and they vary greatly.

It takes a collaborative team of folks within HP, as well as the client, to truly understand what they're trying to achieve. You can even drill down to which use cases they're trying to achieve for the organization, and which tool works best in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy: starting with business goals, understanding what the true key performance indicators are, and knowing what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding, organizationally, who is responsible for the types of services ultimately provided to end users. Most of the time, we're focusing on the facilities and IT organizations.

Also, those need to be aligned to the processes and workflows for provisioning services to end users, supported directly by a systems reference architecture, which is primarily made up of operational management tools and software. All of those need to support one another and be purposefully designed, so that you can meet the goals of the business.

When you don't do that, the time it takes to deliver services to your end users lengthens, and that costs money. When you have separate tools that don't reference single points of data, you spend a lot of time rationalizing and trying to determine whether the data in front of you is accurate. All this boils down not only to cost but to resilient operations -- knowing that when you look at a particular device, or set of devices, you truly understand what it provides, end to end, to your users.

Gardner: Steve, it seems to me that this is a little bit of a chameleon. People who have a certain type of requirement can look at DCIM, some of the methodologies and framework, and get something unique or tailored.

If someone has real serious energy issues, they’re worried about not being able to supply sufficient energy. So they could approach DCIM from that energy vantage point. If someone is building a new data center, they could bring facilities planning together with other requirements and have that larger holistic view.

Am I reading this right? Is this sort of a chameleon or an adaptive type of affair, and how does that sort of manifest itself in terms of how you deliver the service?

Wibrew: The scope here -- the management of facilities and IT infrastructure, right up to the services of a business, end to end -- is very large and very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look across the organization and work with them to identify their most important priorities in terms of their converged-management solution and their journey.

It’s heavily structured around ITSM and ITIL processes, and we've identified some great candidates within ITIL for integration between facilities and IT. It's really a case of working out the prioritized journey for that particular client. Probably the most important integration is a single view of the truth for operational data -- unified asset information.

CMDBs within a configuration management system might be the first and most important integration between the two, because that's the foundation for other follow-on services. Until you know what you've got, it's very difficult to plan what you'll need in the future in terms of infrastructure.
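
A small sketch of that single-view-of-the-truth idea: reconciling a facilities inventory with IT discovery data, keyed by serial number. The field names and the "orphaned" heuristic are assumptions for illustration, not any particular CMDB schema.

```python
# Reconcile asset records from the facilities inventory and the IT discovery
# tool into one merged view. Hypothetical data and field names.
facilities_inventory = {
    "SN-1001": {"rack": "r12", "u_position": 20, "power_feed": "pdu-07"},
    "SN-1002": {"rack": "r12", "u_position": 22, "power_feed": "pdu-07"},
}
it_discovery = {
    "SN-1001": {"hostname": "app-srv-01", "os": "RHEL 6", "last_seen": "2014-03-01"},
    # SN-1002 is racked and powered but unknown to IT: a decommission candidate.
}

def reconcile(facilities: dict, it: dict) -> dict:
    merged = {}
    for serial in facilities.keys() | it.keys():
        merged[serial] = {**facilities.get(serial, {}), **it.get(serial, {})}
        merged[serial]["orphaned"] = serial not in it  # powered on, used by nobody?
    return merged

for serial, record in reconcile(facilities_inventory, it_discovery).items():
    print(serial, record)
```

This is exactly the kind of reconciliation that surfaces powered-on but unused equipment, a point that comes up again in the transformation example later in the discussion.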

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.

If you think about managing power consumption and data-center efficiency metrics like PUE (power usage effectiveness), generally speaking, in the past that would have been the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn't really care about how much power was used. But these integrated solutions can be far more granular and far more dynamic around energy consumption, with much more information being collected -- not just at the facility level but within the racks, in the power-distribution units (PDUs), in the blade chassis, right down to individual servers.

We can now know what the energy consumption is, and we can incentivize the IT teams to take responsibility for energy management and consumption. This is a great way of reducing a client's carbon footprint and energy consumption within the data center through these integrated solutions.
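
To ground the energy discussion, here is a short sketch of the standard PUE calculation together with per-rack attribution from PDU-level metering. PUE is total facility power divided by IT equipment power; all figures below are made up for the example.

```python
# PUE (power usage effectiveness) plus per-rack attribution. Illustrative only.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

rack_readings_kw = {"r12": 7.5, "r13": 6.2, "r14": 8.9}  # from PDU-level metering
it_load_kw = sum(rack_readings_kw.values())              # 22.6 kW of IT load
facility_kw = 38.4                                       # includes cooling, lighting, losses

print(f"PUE = {pue(facility_kw, it_load_kw):.2f}")       # ~1.70; 1.0 is the ideal
for rack, kw in rack_readings_kw.items():
    # Charging each rack for its share is one way to incentivize IT teams.
    print(f"{rack}: {kw:.1f} kW ({kw / it_load_kw:.0%} of IT load)")
```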

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not designed just for HP products. This is an ecumenical approach to whatever is installed in terms of products and facility-management capabilities. I wonder if you could explain a bit more about HP's philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: The professional services HP is offering in this space are really agnostic to the final solution. We understand that a customer has been running their environment for years and has made investments in a lot of different operational tools along the way.

That’s part of our analysis and methodology: to come in and understand the environment and what the client is trying to achieve. Then we put together a strategy, a roadmap of different interoperable products, that will help them achieve their goals.

Next level

We continue to transform them to the next level of capabilities they're looking to achieve, especially around how they provision services, and help them become, ultimately, a cloud-service provider to their end users, with heavy levels of automation built in, so they can deliver digital services in a much shorter period of time.

Gardner: One of the things I really like in talking about technology is to focus on the way it's being used, to show rather than just tell. I'm hoping that one of you, Aaron or Steve, has some use cases or examples where this has been put to good use -- DCIM processes, methodologies, the über-holistic approach, planning right down to the chassis.

I hope you can not only discuss a little bit about who is doing this and how, but also what they get for it. Are there any data points we can look to that tell us, when people do this right, what they got back for their efforts? Why don't we start with you, Aaron?

Carman: HP has been offering operational services for years, so this is nothing new to HP, but traditionally we've provided these services in silos. When we reorganized ourselves recently and really started to put together the IT-plus-facilities story, it quickly became apparent that, from an operations management perspective, a lot of the services we provide needed a lifecycle approach and needed to be brought together.

So we have a lot of different examples. We’ve rolled out different forms of converged-management consulting to other clients, and there are a lot of different benefits you get from the different tools that are a part of the overall solution.

You can point to DCIM and a lot of the benefits you get from understanding your assets and being able to decommission those more quickly, understanding the power relationship, and then understanding many different elements of tying the IT infrastructure chain to the facilities chain.

In the end, when you look at all these together, it’s going to be different for every client. You have to come in and understand the different components that are going to make up a return on investment (ROI) for the client based upon what they’re willing to do and what they’re trying to achieve.

In the end, we're providing folks with a means of optimizing how they provision services, which is going to lower their cost structures. Everyone is looking to lower cost, but also to increase resiliency, as well as possibly defer large capital expenditures like expanding data centers. Many of these different outcomes could apply to a customer that engages with converged management.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced some new converged-management consulting services, and that the management framework was updated with new technical requirements. You have four new services organized around a management workshop, roadmap, design, implementation, and so forth. [Learn more about DCIM.]

So this is fairly new, but Steve Wibrew, is there any instance where you've worked with an organization and some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes about an organization that's done this, and maybe some interesting ways it's benefited them -- maybe even unintended consequences?

Data-center transformation

Wibrew: I certainly can give some real examples. I've worked in the past on major data-center transformation projects, where we would be deploying large amounts of new infrastructure within the data center.

The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I found that when you pull this data together from multiple sources and get a consistent view of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there's infrastructure sitting there, powered on, and not being utilized by anybody.

That equipment is, in fact, redundant. I've had many situations where, in pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing space for major initiatives and expansion projects. So there are some examples of the benefits of consolidated inventory and information.

Gardner: We're almost out of time, but I just wanted to look toward the future, at the requirements, the dynamic nature of workloads, and the scale and density of consolidated data centers. I have to imagine that these are only going to become more urgent and more pressing.

So what about that, Aaron, as we look a few years out at big-data requirements, hybrid cloud requirements, infrastructure KPIs for service delivery, energy, and carbon pressures? What’s the outlook in terms of doing this, and should we expect that there will be an ongoing demand, but also ongoing and improving return on investments you make, vis-à-vis these consulting services and DCIM?

Carman: Based upon a lot of the challenges we outlined earlier in the program, we feel that, in order to operate efficiently, this type of future-state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

So more and more, with the challenges of my compute footprint shrinking and my requirements differing from those of the past, we're now dealing with a storage and data explosion, where my data center is filled up with storage.

As these new demands from the business come down and force organizations onto new types of technology infrastructure platforms they haven't dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That's the strength of DCIM and what it can provide in just that one instance.

But more and more, the business expects digital services almost instantly. They want to capitalize on the market at that moment. They don't want to wait weeks or months for enterprise IT to provide them with the means to take advantage of a new service offering. So it's forcing folks to operate differently, and that's where converged management is poised to help these customers.

Looking to the future

Gardner: Last word to you, Steve. When you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will be most impactful?

Wibrew: I think the trend we're going to see is far greater adoption of DCIM. It's only deployed in a small number of data centers at the moment. That's going to increase quite dramatically, and there will be a much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. It will be far more integrated than it is today.

The roles of IT are going to change, and a lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we're going to see people, almost like a conductor in an orchestra, overseeing the operations within the data center through leading highly automated and optimized processes, which are actually delivered by automated solutions.

Gardner: Very good. I should also point out that I benefited greatly in learning more about DCIM on the HP website. There were videos, white papers, and blog posts. So there's quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]

We'll have to leave it there, gentlemen. You’ve been listening to a sponsored BriefingsDirect discussion on improving the management and automation of data centers and facilities. We’ve seen how IT operators and planners can keep their data centers from spinning out of control via exploiting new data-center infrastructure management capabilities.

I want to thank our guests, Aaron Carman, the HP Worldwide Critical Facilities Strategy Leader. Thanks so much, Aaron.

Carman: It's my pleasure. Thank you.

Gardner: And also Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. Thanks so much, Steve.

Wibrew: Thank you for listening.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience and come back next time for the next BriefingsDirect podcast discussion.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect discussion on how organizations need to better manage the impact that IT and big data now have on data centers and how Data Center Infrastructure Management helps. Copyright Interarbor Solutions, LLC, 2005-2014. All rights reserved.
