Thursday, November 02, 2017

How Mounting Complexity, Multi-Cloud Sprawl, and the Need for Maturity Confront Hybrid IT's Ability to Grow and Thrive

Transcript of a discussion on how companies and IT leaders are seeking to manage an increasingly complex transition to sustainable hybrid IT.
 
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Analyst podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. 

Join us now as we hear from leading IT industry analysts and consultants on how to make the hybrid IT journey to successful digital business transformation.

Our next interview examines how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice. We’ll now explore how mounting complexity and a lack of multi-cloud services management maturity must be solved in order to have businesses grow and thrive as digital enterprises.

Crawford (https://www.linkedin.com/in/timcrawford/)
To report on how companies and IT leaders are managing an increasingly complex transition to sustainable hybrid IT, we are joined by Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles. Welcome, Tim.

Tim Crawford: Thanks, Dana. Thanks for having me on the program; I’m looking forward to our conversation.

Gardner: You and I have appeared on a number of panels and videos over the years, but it’s great to have you on BriefingsDirect. I appreciate your time.

Crawford: It’s always a pleasure to get an opportunity to chat with you, and now actually getting a chance to talk to your audience as well. I’m happy to share what I can.

Gardner: Tim, there’s a lot of evidence that businesses are adopting cloud models at a rapid pace. But there is also lingering concern about how to best determine the right mix of cloud, what kinds of cloud, and how to mitigate the risks and manage change over time.

As someone who regularly advises chief information officers (CIOs), who or which group is surfacing that is tasked with managing this cloud adoption and its complexity within these businesses? Who will be managing this dynamic complexity?

To IT and beyond


Crawford: For the short-term, I would say everyone. It’s not as simple as it has been in the past where we look to the IT organization as the end-all, be-all for all things technology. As we begin talking about different consumption models -- and cloud is a relatively new consumption model for technology -- it changes the dynamics of it. It’s the combination of changing that consumption model -- but then there’s another factor that comes into this. There is also the consumerization of technology, right? We are “democratizing” technology to the point where everyone can use it, and therefore everyone does use it, and they begin to get more comfortable with technology.

It’s not as it used to be, where we would say, “Okay, I'm not sure how to turn on a computer.” Now, businesses may be more familiar outside of the IT organization with certain technologies. Bringing that full-circle, the answer is that we have to look beyond just IT. Cloud is something that is consumed by IT organizations. It’s consumed by different lines of business, too. It’s consumed even by end-consumers of the products and services. I would say it’s all of the above.
 

Gardner: The good news is that more and more people are able to innovate on their own, to acquire cloud services, and they can factor those into how they achieve business objectives. But do you expect that we will get to the point where that becomes disjointed? Will the goodness of innovation become something that spins out of control, or becomes a negative over time?

Crawford: To some degree, we’ve already hit that inflection-point where technology is being used in inappropriate ways. A great example of this -- and it’s something that just kind of raises the hair on the back of my neck -- is when I hear that boards of directors of publicly traded companies are giving mandates to their organization to “Go cloud.”

The board should be very business-focused and instead they're dictating specific technology -- whether it’s the right technology or not. That’s really what this comes down to. 

What's the right use of cloud -- in all its forms: public, private, software as a service (SaaS)? What's the right combination to use for any given application?
Another example is folks who try to go all-in on cloud but aren't necessarily thinking about what the right use of cloud is -- in all its forms: public, private, software as a service (SaaS). What's the right combination to use for any given application? It's not a one-size-fits-all answer.

We in the enterprise IT space haven't really done enough work to truly understand how best to leverage these new sets of tools. We need to not only wrap our heads around them, but also get into the right frame of mind and thought process to take advantage of them in the best way possible.

Another example that I've worked through from an economic standpoint is doing the math, which I have done a number of times with clients -- the math to compare what any given application costs on-premises in your corporate data center versus doing it in a public cloud.

Think differently


If you do the math, taking an application from a corporate data center and moving it to public cloud will cost you four times as much money. Four times as much money to go to cloud! Yet we hear the cloud is a lot cheaper. Why is that?

When you begin to tease apart the pieces, the bottom line is that we get that four-times-as-much number because we’re using the same traditional mindset where we think about cloud as a solution, the delivery mechanism, and a tool. The reality is it’s a different delivery mechanism, and it’s a different kind of tool.

When used appropriately, in some cases, yes, it can be less expensive. The challenge is you have to get yourself out of your traditional thinking and think differently about the how and why of leveraging cloud. And when you do that, then things begin to fall into place and make a lot more sense both organizationally -- from a process standpoint, and from a delivery standpoint -- and also economically.
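To make that arithmetic concrete, here is a minimal sketch of the kind of comparison Crawford describes. All figures are hypothetical, chosen only to show how lifting an always-on footprint to on-demand cloud rates can cost several times the on-premises baseline, while a right-sized, elastic deployment can cost less:

```python
# Hypothetical numbers for illustration only -- none come from the discussion.
ON_PREM_MONTHLY = 10_000  # fully loaded monthly cost of the current footprint

# Lift-and-shift: the same oversized, always-on fleet at on-demand rates.
lift_and_shift = 100 * 0.55 * 730   # 100 instances x $0.55/hr x 730 hrs/month

# Re-architected: fewer, smaller instances that run only during busy hours.
right_sized = 25 * 0.50 * 240       # scales down outside peak hours

print(f"lift-and-shift: ${lift_and_shift:,.0f}/mo "
      f"({lift_and_shift / ON_PREM_MONTHLY:.1f}x on-prem)")   # ~4.0x
print(f"right-sized:    ${right_sized:,.0f}/mo "
      f"({right_sized / ON_PREM_MONTHLY:.2f}x on-prem)")      # ~0.30x
```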

Gardner: That “appropriate use of cloud” is the key. Of course, that could be a moving target. What’s appropriate today might not be appropriate in a month or a quarter. But before we delve into more … Tim, tell us about your organization. What’s a typical day in the life for Tim Crawford like?

It’s not tech for tech’s sake, rather it’s best to say, “How do we use technology for business advantage?” 
Crawford: I love that question. AVOA stands for that position in which we sit between business and technology. If you think about the intersection of business and technology, of using technology for business advantage, that’s the space we spend our time thinking about. We think about how organizations across a myriad of different industries can leverage technology in a meaningful way. It’s not tech for tech’s sake, and I want to be really clear about that. But rather it’s best to say, “How do we use technology for business advantage?”

We spend a lot of time with large enterprises across the globe working through some of these challenges. It could be as simple as changing traditional mindsets to transformational, or it could be talking about tactical objectives. Most times, though, it’s strategic in nature. We spend quite a bit of time thinking about how to solve these big problems and to change the way that companies function, how they operate.

A day in a life of me could range from, if I'm lucky, being able to stay in my office and be on the phone with clients, working with folks and thinking through some of these big problems. But I do spend a lot of time on the road, on an airplane, getting out in the field, meeting with clients, understanding what people really are contending with.

I spent well over 20 years of my career before I began doing this within the IT organization, inside leading IT organizations. It’s incredibly important for me to stay relevant by being out with these folks and understanding what they're challenged by -- and then, of course, helping them through their challenges.

Any given day is something new and I love that diversity. I love hearing different ideas. I love hearing new ideas. I love people who challenge the way I think.

It’s an opportunity for me personally to learn and to grow, and I wish more of us would do that. So it does vary quite a bit, but I'm grateful that the opportunities that I've had to work with have been just fabulous, and the same goes for the people.

Learn More About
Solutions From HPE

Gardner: I've always enjoyed my conversations with you, Tim, because you always do challenge me to think a little bit differently -- and I find that very valuable.

Okay, let’s get back to this idea of “appropriate use of cloud.” I wonder if we should also expand that to be “appropriate use of IT and cloud.” So including that notion of hybrid IT, which includes cloud and hybrid cloud and even multi-cloud. And let’s not forget about the legacy IT services.

How do we know if we’re appropriately using cloud in the context of hybrid IT? Are there measurements? Is there a methodology that’s been established yet? Or are we still in the opening innings of how to even measure and gain visibility into how we consume and use cloud in the context of all IT -- to therefore know if we’re doing it appropriately?

The monkey-bread model


Crawford: The first thing we have to do is take a step back to provide the context of that visibility -- or a compass, as I usually refer to these things. You need to provide a compass to help understand where we need to go.

If we look back for a minute, and look at how IT operates -- traditionally, we did everything. We had our own data center, we built all the applications, we ran our own servers, our own storage, we had the network – we did it all. We did it all, because we had to. We, in IT, didn’t really have a reasonable alternative to running our own email systems, our own file storage systems. Those days have changed.

Fast-forward to today. Now, you have to pick apart the pieces and ask, “What is strategic?” When I say, “strategic,” it doesn’t mean critically important. Electrical power is an example. Is that strategic to your business? No. Is it important? Heck, yeah, because without it, we don’t run. But it’s not something where we’re going out and building power plants next to our office buildings just so we can have power, right? We rely on others to do it because there are mature infrastructures, mature solutions for that. The same is true with IT. We have now crossed the point where there are mature solutions at an enterprise level that we can capitalize on, or that we can leverage.

Part of the methodology I use is the monkey bread example. If you're not familiar with monkey bread, it’s kind of a crazy thing where you have these balls of dough. When you bake it, the balls of dough congeal together and meld. What you're essentially doing is using that as representative of, or an analogue to, your IT portfolio of services and applications. You have to pick apart the pieces of those balls of dough and figure out, “Okay. Well, these systems that support email, those could go off to Google or Microsoft 365. And these applications, well, they could go off to this SaaS-based offering. And these other applications, well, they could go off to this platform.”

And then, what you're left with is this really squishy -- but much smaller -- footprint that you have to contend with. That problem in the center is much more specific -- and arguably that’s what differentiates your company from your competition.

Whether you run email [on-premises] or in a cloud, that’s not differentiating to a business. It’s incredibly important, but not differentiating. When you get to that gooey center, that’s the core piece, that’s where you put your resources in, that’s what you focus on.

This example helps you work through determining what’s critical, and -- more importantly -- what’s strategic and differentiating to my business, and what is not. And when you start to pick apart these pieces, it actually is incredibly liberating. At first, it’s a little scary, but once you get the hang of it, you realize how liberating it is. It brings focus to the things that are most critical for your business.
Identify opportunities where cloud makes sense -- and where it doesn't. It definitely is one of the most significant opportunities for most IT organizations today.

That’s what we have to do more of. When we do that, we identify opportunities where cloud makes sense -- and where it doesn’t. Cloud is not the end-all, be-all for everything. It definitely is one of the most significant opportunities for most IT organizations today.

So it's important to understand what is appropriate -- how you leverage the right solutions for the right application or service.
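As a rough illustration, the monkey-bread exercise amounts to a sorting pass over the application portfolio. The application names and placement targets below are hypothetical examples, not items from the discussion:

```python
# Hypothetical portfolio; "differentiating" is the strategic judgment call.
portfolio = {
    "email":            {"differentiating": False, "target": "SaaS"},
    "file storage":     {"differentiating": False, "target": "public cloud"},
    "hr system":        {"differentiating": False, "target": "SaaS"},
    "pricing engine":   {"differentiating": True,  "target": "keep in-house"},
    "order management": {"differentiating": True,  "target": "keep in-house"},
}

commodity = [app for app, v in portfolio.items() if not v["differentiating"]]
core = [app for app, v in portfolio.items() if v["differentiating"]]

print("peel off to mature external services:", commodity)
print("the gooey center -- focus resources here:", core)
```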

Gardner: IT in many organizations is still responsible for everything around technology. And that now includes higher-level strategic undertakings of how all this technology and the businesses come together. It includes how we help our businesses transform to be more agile in new and competitive environments.

So is IT itself going to rise to this challenge, of not doing everything, but instead becoming more of that strategic broker between IT functions and business outcomes? Or will those decisions get ceded over to another group? Maybe enterprise architects, business architects, business process management (BPM) analysts? Do you think it's important for IT to both stay in the game and elevate to the bigger game?

Changing IT roles and responsibilities


Crawford: It’s a great question. For every organization, the answer is going to be different. IT needs to take on a very different role and sensibility. IT needs to look different than how it looks today. Instead of being a technology-centric organization, IT really needs to be a business organization that leverages technology.

The CIO of today and moving forward is not the tech-centric CIO. There are traditional CIOs and transformational CIOs. The transformational CIO is the business leader first who happens to have responsibility for technology. IT, as a whole, needs to follow the same vein.

For example, if you were to go into a traditional IT organization today and ask them what’s the nature of their business, ask them to tell you what they do as an administrator, as a developer, to help you understand how that’s going to impact the company and the business -- unfortunately, most of them would have a really hard time doing that.

The IT organization of the future will articulate clearly the work they're doing and how that impacts their customers and their business, and how making different changes and tweaks will impact their business. They will have an intimate knowledge of how their business functions, much more than how the technology functions. That's a very different mindset, and that's the place we have to get to for IT on the whole. IT can't just be this technology organization that sits in a room, separate from the rest of the company. It has to be integral, absolutely integral to the business.

Gardner: We recognize that cloud is here to stay, but that the consumption of it needs to be appropriate -- and if we're at some sort of inflection point, we're also at risk of consuming cloud inappropriately. If IT and leadership within IT are elevating themselves, upping their game to be that strategic player, isn't IT then in the best position to be managing cloud, hybrid cloud, and hybrid IT? What tools and what mechanisms will they need in order to make that possible?


Crawford: Theoretically, the answer is that they really need to get to that level. We're not there, on the whole, yet. Many organizations are not prepared to adopt cloud. I don't want to be a naysayer of IT, but in terms of where IT needs to go on the whole, we need to move into that position where we can manage the different types of delivery mechanisms -- whether it's public cloud, SaaS, private cloud, or our own data centers where appropriate. Those are all just different levers we can pull depending on the business type.

Businesses change, customers change, demand changes and revenue comes from different places. IT needs to be able to shift gears just as fast and in anticipation of where the company goes. 
As you mentioned earlier, businesses change, customers change, demand changes, and revenue comes from different places. In IT, we need to be able to shift gears just as fast and be prepared to shift those gears in anticipation of where the company goes. That’s a very different mindset. It’s a very different way of thinking, but it also means we have to think of clever ways to bring these tools together so that we’re well-prepared to leverage things like cloud.

The challenge is many folks are still in that classic mindset, which unfortunately holds back companies from being able to take advantage of some of these new technologies and methodologies. But getting there is key.

Gardner: Some boards of directors, as you mentioned, are saying, “Go cloud,” or be cloud-first. People are taking them at their word, and so we are facing a sort of cloud sprawl. Developers are doing microservices, spinning up cloud instances and object storage instances. Sometimes they'll keep those running into production; sometimes they'll shut them down. We have line of business (LOB) managers going out and acquiring services like SaaS applications, running them for a while, perhaps making them a part of their standard operating procedures. But, in many organizations, one hand doesn't really know what the other is doing.

Are we at the inflection point now where it’s simply a matter of measurement? Would we stifle innovation if we required people to at least mention what it is that they’re doing with their credit cards or petty cash when it comes to IT and cloud services? How important is it to understand what’s going on in your organization so that you can begin a journey toward better management of this overall hybrid IT?


Why, oh why, oh why, cloud?


Crawford: It depends on how you approach it. If you're doing it from an IT command-and-control perspective, where you want to control everything in cloud -- full stop, that's failure right out of the gate. But if you're doing it from a position of, “I'm trying to use this as an opportunity to understand why these folks are leveraging cloud, why they are not coming to IT, and how I as CIO can be better positioned to support them,” then great! Go forth and conquer.

The reality is that different parts of the organization are consuming cloud-based services today. I think there’s an opportunity to bring those together where appropriate. But at the end of the day, you have to ask yourself a very important question. It’s a very simple question, but you have to ask it, and it has to do with each of the different ways that you might leverage cloud. Even when you go beyond cloud and talk about just traditional corporate data assets -- especially as you start thinking about Internet of things (IoT) and start thinking about edge computing -- you know that public cloud becomes problematic for some of those things.

The important question you have to ask yourself is, “Why?” A very simple question, but it can have a really complicated answer. Why are you using public cloud? Why are you using three different forms of public cloud? Why are you using private cloud and public cloud together?

Once you begin to ask yourself those questions, and you keep asking that question … it's like that old adage: ask yourself “why” three times and you get to the core, the true reason why. You'll bring greater clarity to the reasons, and typically the business reasons, why you're actually going down that path. When you start to understand that, it brings clarity to which decisions are smart decisions -- and which you might want to think about doing differently.


Gardner: Of course, you may begin doing something with cloud for a very good reason. It could be a business reason, a technology reason. You’ll recognize it, you gain value from it -- but then over time you have to step back with maturity and ask, “Am I consuming this in such a way that I’m getting it at the best price-point?” You mentioned a little earlier that sometimes going to public cloud could be four times as expensive.

So even though you may have an organization where you want to foster innovation, you want people to spread their wings, try out proofs of concept, be agile and democratic in terms of their ability to use myriad IT services, at what point do you say, “Okay, we’re doing the business, but we’re not running it like a good business should be run.” How are the economic factors driven into cloud decision-making after you’ve done it for a period of time?

Cloud’s good, but is it good for business?


Crawford: That’s a tough question. You have to look at the services that you’re leveraging and how that ties into business outcomes. If you tie it back to a business outcome, it will provide greater clarity on the sourcing decisions you should make.

For example, if you’re spending $5 to make $6 in a specialty industry, that’s probably not a wise move. But if you’re spending $5 to make $500, okay, that’s a pretty good move, right? There is a trade-off that you have to understand from an economic standpoint. But you have to understand what the true cost is and whether there’s sufficient value. I don’t mean technological value, I mean business value, which is measured in dollars.

If you begin to understand the business value of the actions you take -- how you leverage public cloud versus private cloud versus your corporate data center assets -- and you match that against the strategic decisions of what is differentiating versus what’s not, then you get clarity around these decisions. You can properly leverage different resources and gain them at the price points that make sense. If that gets above a certain amount, well, you know that’s not necessarily the right decision to make.

Economics plays a very significant role -- but let's not kid ourselves. IT organizations haven't exactly been the best at economics in the past. We need to be, moving forward. It's just one more thing on that overflowing plate that we call demand and requirements for IT, but we have to be prepared for it.
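A minimal sketch of that dollars-in, dollars-out test; the hurdle ratio here is an arbitrary placeholder, not a figure Crawford gives:

```python
def clears_hurdle(cost, business_value, hurdle=2.0):
    """Return the value-to-cost ratio and whether it clears a chosen hurdle."""
    ratio = business_value / cost
    return ratio, ratio >= hurdle

print(clears_hurdle(5, 6))    # (1.2, False)  -- spending $5 to make $6
print(clears_hurdle(5, 500))  # (100.0, True) -- spending $5 to make $500
```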

Gardner: There might be one other big item on that plate. We can allow people to pursue business outcomes using any technology that they can get their hands on -- perhaps at any price – and we can then mature that process over time by looking at price, by finding the best options.

But the other item that we need to consider at all times is risk. Sometimes we need to consider the risk of getting so far into a model -- a public cloud, for example -- that we can't get back out of it. Maybe we have to consider that being completely dependent on external cloud networks across a global supply chain, for example, has inherent cyber security risks. Isn't it up to IT also to help organizations factor in some of these risks -- along with compliance, regulation, and data sovereignty issues? It's a big barrel of monkeys.

Before we sign off, as we’re almost out of time, please address for me, Tim, the idea of IT being a risk factor mitigator for a business.

Safety in numbers


Crawford: You bring up a great point, Dana. Risk -- whether it's risk from a cyber security standpoint, data sovereignty issues, or regulatory compliance -- the reality is that nobody across the organization truly understands all of these pieces together.
It really is a team effort to bring it all together -- where you have the privacy folks, the information security folks, and the compliance folks -- that can become a united team. 

It really is a team effort to bring it all together -- where you have the privacy folks, the information security folks, and the compliance folks -- that can become a united team. I don’t think IT is the only component of that. I really think this is a team sport. In any organization that I’ve worked with, across the industry it’s a team sport. It’s not just one group.

It’s complicated, and frankly, it’s getting more complicated every single day. When you have these huge breaches that sit on the front page of The Wall Street Journal and other publications, it’s really hard to get clarity around risk when you’re always trying to fight against the fear factor. So that’s another balancing act that these groups are going to have to contend with moving forward. You can’t ignore it. You absolutely shouldn’t. You should get proactive about it, but it is complicated and it is a team sport.

Gardner: Some take-aways for me today are that IT needs to raise its game. Yet again, they need to get more strategic, to develop some of the tools that they'll need to address issues of sprawl, complexity, cost, and simply gaining visibility into what everyone in the organization is -- or isn't -- doing appropriately with hybrid cloud and hybrid IT.

I’m afraid we’ll have to leave it there. We’ve been exploring how the economics and risk management elements of hybrid IT factor into effective cloud adoption and choice. And we’ve learned how mounting complexity and a lack of multi-cloud services management maturity must be solved in order for businesses to continue to grow -- and for IT organizations to continue to fulfill what could very well be their new charter.

So please join me now in thanking our guest, Tim Crawford, CIO Strategic Advisor at AVOA in Los Angeles. Thank you, Tim.

Crawford: Thanks for having me on the program.

Gardner: Tim, how can our listeners and readers best follow you to gain more of your excellent insights?

Crawford: There are two great ways to do that. One is on Twitter, @tcrawford; the other is my blog at www.avoa.com.

Gardner: Thanks again, that was really great. A big thank you as well to our audience for joining us for this BriefingsDirect Voice of the Analyst discussion on how to best manage the hybrid IT journey to digital business transformation.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Follow me on Twitter at @Dana_Gardner and find more hybrid IT-focused podcasts at www.briefingsdirect.com. Lastly, please pass this content on to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how companies and IT leaders are seeking to manage an increasingly complex transition to sustainable hybrid IT. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.


Tuesday, October 24, 2017

Case Study: How HCI-Powered Private Clouds Accelerate Digital Transformation

Transcript of a discussion on how public cloud-like experiences, agility, and cost structures are being delivered via private cloud models.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next thought leadership interview examines how a world-class private cloud project evolved in the financial sector. We’ll now learn how public cloud-like experiences, agility, and cost structures are being delivered via a strictly private cloud model.

McKittrick
Jim McKittrick is here to help us explore the potential for cloud benefits when retaining control over the data center is a critical requirement. He is Senior Account Manager at Applied Computer Solutions (ACS) in Huntington Beach, California. Welcome, Jim.

Jim McKittrick: Thank you for having me. I’m glad to be here.

Gardner: Many enterprises want a private cloud for security and control reasons. They want an OpEx-like public cloud model, yet total on-premises control. Can you have it both ways?

McKittrick: We are showing that you can. People are learning that the public cloud isn't necessarily all it has been hyped up to be, which is what happens with newer technologies as they come out.

Gardner: What are the drivers for keeping it all private?

McKittrick: Security, of course. But if somebody actually analyzes it, a lot of times it will be about cost and data access, and the ease of data egress because getting your data back can sometimes be a challenge.

Also, there is a realization that even though I may have strict service-level agreements (SLAs), if something goes wrong they are not going to save my business. If that thing tanks, do I want to give that business away? I have some clients who absolutely will not.

Sleep well at night

Gardner: Control, and so being able to sleep well at night.

McKittrick: Absolutely. I have other clients that we can speak about who have HIPAA requirements, and they are privately held and privately owned. And literally the CEO says, “I am not doing it.” And he doesn’t care what it costs.

Gardner: If there were a huge delta between the price of going with a public cloud and staying private, sure. But that delta is closing. So you can have the best of both worlds -- and not pay a very high penalty nowadays.

McKittrick: If done properly, certainly from my experience. We have been able to prove that you can run an agile, cloud-like infrastructure or private cloud as cost-effectively -- or even more cost effectively -- than you can in the public clouds. There are certainly places for both in the market.

Gardner: It's going to vary, of course, from company to company -- and even department to department within a company -- but the fact is that that choice is there.

McKittrick: No doubt about it, it absolutely is.

Gardner: Tell us about ACS, your role there, and how the company is defining what you consider the best of hybrid cloud environments.

McKittrick: We are a relatively large reseller, about $600 million. We have specialized in data center practices for 27 years. So we have been in business quite some time and have had to evolve with the IT industry.
We have a head start on what's really coming down the pipe -- we are one to two years ahead of the general marketplace.

Structurally, we are fairly conventional from the standpoint that we are a typical reseller, but we pride ourselves on our technical acumen. Because we have some very, very large clients and have worked with them to get on their technology boards, we feel like we have a head start on what's really coming down the pipe --  we are maybe one to two years ahead of the general marketplace. We feel that we have a thought leadership edge there, and we use that as well as very senior engineering leadership in our organization to tell us what we are supposed to be doing.

Gardner: I know you probably can't mention the company by name, but tell us about a recent project that seems a harbinger of things to come.

Hyper-convergent control 

McKittrick: It began as a proof of concept (POC), but it's in production, and it's live globally.

I have been with ACS for 18 years, and I have had this client for 17 of those years. We have been through multiple data center iterations.

When this last one came up, three things happened. Number one, they were under tremendous cost pressure -- but public cloud was not an option for them.

The second thing was that they had grown by acquisition, and so they had dozens of IT fiefdoms. You can imagine culturally and technologically the challenges involved there. Nonetheless, we were told to consolidate and globalize all these operations.

Thirdly, I was brought in by a client who had run the US presence for this company. We had created a single IT infrastructure in the US for them. He said, “Do it again for the whole world, but save us a bunch of money.” The gauntlet was thrown down. The customer was put in the position of having to make some very aggressive choices. And so he effectively asked me to bring them “cool stuff.”

You could give control to anybody in the organization across the globe and they would be able to manage it.

They asked, “What's new out there? How can we do this?” Our senior engineering staff brought a couple of ideas to the table, and hyper-converged infrastructure (HCI) was central to that. HCI provided the ability to simplify the organization, as well as the IT management for the organization. You could give control of it to anybody in the organization across the globe and they would be able to manage it, working with partners in other parts of the world.

Gardner: Remote management being very important for this.


McKittrick: Absolutely, yes. We also gained failover capabilities, and disaster recovery within these regional data centers. We ended up going from -- depending on whom you spoke to -- somewhere between seven and 19 data centers globally, down to three. The data center footprint shrank massively. Just in the US, we went to one data center; we got rid of the other data center completely. We went from 34 racks down to 3.5.

Gardner: Hyper-convergence being a big part of that?

McKittrick: Correct, that was really the key: hyper-convergence and virtualization.

The other key enabling technology was data de-duplication: the ability to shrink the data and then move it from place to place without crushing bandwidth requirements, because you are only moving the changes, the changed blocks.
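As a minimal illustration of the changed-block idea (SimpliVity's actual inline, global de-duplication is far more sophisticated than this sketch): fingerprint fixed-size blocks, and ship only the blocks the remote site does not already hold.

```python
import hashlib

BLOCK = 4096  # bytes; an arbitrary block size chosen for illustration

def block_hashes(data):
    """Fingerprint each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def blocks_to_send(local, remote_hashes):
    """Indices of local blocks the remote site is missing."""
    return [i for i, h in enumerate(block_hashes(local))
            if h not in remote_hashes]

old = b"A" * 16384                              # four identical blocks
new = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # exactly one block changed
print(blocks_to_send(new, set(block_hashes(old))))  # [2]: only the change moves
```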

Gardner: So more of a modern data lifecycle approach?

McKittrick: Absolutely. The backup and recovery approach was built in to the solution itself. So we also deployed a separate data archive, but that's different than backup and recovery. Backup and recovery were essentially handled by VMware and the capability to have the same machine exist in multiple places at the same time.

Gardner: Now, there is more than just the physical approach to IT, as you described it; there is also the budgetary, financial approach. So how do they get the benefit of the OpEx approach that people are fond of with public cloud models and apply that in a private cloud setting?

Budget benefits 

McKittrick: They didn't really take that approach. I mean we looked at it. We looked at essentially leasing. We looked at the pay-as-you-go models and it didn't work for them. We ended up doing essentially a purchase of the equipment with a depreciation schedule and traditional support. It was analyzed, and they essentially said, “No, we are just going to buy it.”

Gardner: So total cost of ownership (TCO) is a better metric to look at. Did you have the ability to measure that? What were some of the metrics of success other than this massive consolidation of footprint and better control over management?

McKittrick: We had to justify TCO relative to what a traditional IT refresh would have cost. That's what I was working on for the client until the cost pressure came to bear. We then needed to change our thinking. That's when hyper-convergence came through.

What we would have spent on just hardware and infrastructure costs -- not including network and bandwidth -- would have been $55 million over five years, and we ended up doing it for $15 million.

The cost analysis was already done, because I was already costing a refresh, including compute and traditional SAN storage. The numbers I had over a five-year period -- just what we would have spent on hardware and infrastructure costs, not including network and bandwidth -- came to $55 million over five years, and we ended up doing it for $15 million.

Gardner: We have mentioned HCI several times, but you were specifically using SimpliVity, which is now part of Hewlett Packard Enterprise (HPE). Tell us about why SimpliVity was a proof-point for you, and why you think that’s going to strengthen HPE's portfolio.


McKittrick: This thing is now built and running, and it's been two years since inception. So that's a long time in technology, of course. The major factors involved were the cost savings.

As for HPE going forward, the way the client looked at it -- and he is a very forward-thinking technologist -- he always liked to say, “It's just VMware.” The beauty of it, from their perspective, was that they could just deploy on VMware virtualization: everyone in the organization knows how to work with VMware, so they just deploy that and move things around. Everything is managed in that fashion, as virtual machines, as opposed to traditional storage and all the other layers of things that have to be involved in traditional data centers.

The HCI-based data centers also included built-in WAN optimization, built-in backup and recovery, and were largely on solid-state disks (SSDs). All of the other pieces of the hardware stack that you would traditionally have -- from the server on down -- folded into a little box, so to speak, a physical box. With HCI, you get all of that functionality in a much simpler and much easier to manage fashion. It just makes everything easier.

Gardner: When you bring all those HCI elements together, it really creates a solution. Are there any other aspects of HPE’s portfolio, in addition now to SimpliVity, that would be of interest for future projects?

McKittrick: HPE is able to take this further. You have to remember, at the time, SimpliVity was a widget, and they would partner with the server vendors. That was really it, and with VMware.

Now with HPE, SimpliVity can really build out their roadmap. There is all kinds of innovation that's going to come.

Now with HPE, SimpliVity has behind them one of the largest technology companies in the world. They can really build out their roadmap. There is all kinds of innovation that’s going to come. When you then pair that with things like Microsoft Azure Stack and HPE Synergy and its composable architecture -- yes, all of that is going to be folded right in there.

I give HPE credit for having seen what HCI technology can bring to them and can help them springboard forward, and then also apply it back into things that they are already developing. Am I going to have more opportunity with this infrastructure now because of the SimpliVity acquisition? Yes.

Gardner: For those organizations that want to take advantage of public cloud options, also having HCI-powered hybrid clouds, composable architecture, and automated bursting and scale-out -- and soon combining that with multi-cloud options via HPE New Stack -- gives them the best of all worlds.


McKittrick: Exactly. There you are. You have your hybrid cloud right there. And certainly one could do that with traditional IT, and still have that capability that HPE has been working on. But now, [with SimpliVity HCI] you have just consolidated all of that down to a relatively simple hardware approach. You can now quickly deploy and gain all those hybrid capabilities along with it. And you have the mobility of your applications and workloads, and all of that goodness, so that you can decide where you want to put this stuff.

Gardner: Before we sign off, let's revisit this notion of those organizations that have to have a private cloud. What words of advice might you give them as they pursue such dramatic re-architecting of their entire IT systems?

A people-first process 

McKittrick: Great question. The technology was the easy part. This was my first global HCI rollout, and I have been in the business well over 20 years. The differences come when you are messing with people -- moving their cheese, messing with their rice bowl. It's profound. It always comes back to people.

The people and process were the hardest things to deal with, and quite frankly, still are. Make sure that everybody is on-board. They must understand what's happening, why it's happening, and then you try to get all those people pulling in the same direction. Otherwise, you end up in a massive morass and things don't get done, or they become almost unmanageable.

Gardner: Unfortunately, there are plenty of examples of that out there.

McKittrick: Certainly. Recently, I have been saying it more, “It always comes back to the people, that’s always the case.”

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how a world-class private cloud project evolved in the financial sector. And we have learned how a private cloud model using HCI can deliver a public cloud-like experience -- with agility and cost structures that mimic public cloud attributes. This is especially important for those organizations that need to retain control over their data centers.

So please join me in thanking our guest, Jim McKittrick, Senior Account Manager at Applied Computer Solutions in Huntington Beach, California. Thanks so much, Jim.

McKittrick: Thank you for having me. I appreciate it.

Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along as you can in your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how public cloud-like experiences, agility, and cost structures are being delivered via private cloud models. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.


Tuesday, October 10, 2017

Inside Story on HPC’s AI Role in Bridges Strategic Reasoning Research Project at CMU

Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using the latest in high performance computing.  

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.

Our next high performance computing (HPC) success interview examines how strategic reasoning is becoming more common and capable -- even using imperfect information. We'll now learn how a team of researchers at Carnegie Mellon University is producing amazing results with strategic reasoning, thanks in part to powerful new memory-intensive systems architectures.

Sandholm
To learn more about strategic reasoning advances, please join me in welcoming Tuomas Sandholm, Professor and Director of the Electronic Marketplaces Lab at Carnegie Mellon University in Pittsburgh.

Tuomas Sandholm: Thank you very much.

Gardner: Tell us about strategic reasoning and why imperfect information is often the reality that these systems face?

Sandholm: In strategic reasoning we take the word “strategic” very seriously. It means game theoretic, so in multi-agent settings where you have more than one player, you can't just optimize as if you were the only actor -- because the other players are going to act strategically. What you do affects how they should play, and what they do affects how you should play.

That's what game theory is about. In artificial intelligence (AI), there has been a long history of strategic reasoning. Most AI reasoning -- not all of it, but most of it until about 12 years ago -- was really about perfect information games like Othello, Checkers, Chess and Go.

And there has been tremendous progress. But these complete information, or perfect information, games don't really model real business situations very well. Most business situations are of imperfect information.

Know what you don’t know

So you don't know the other guy's resources, their goals and so on. You then need totally different algorithms for solving these games, or game-theoretic solutions that define what rational play is, or opponent exploitation techniques where you try to find out the opponent's mistakes and learn to exploit them.

So totally different techniques are needed, and this has way more applications in reality than perfect information games have.

Gardner: In business, you don't always know the rules. All the variables are dynamic, and we don't know the rationale or the reasoning behind competitors’ actions. People sometimes are playing offense, defense, or a little of both.

Before we dig in to how is this being applied in business circumstances, explain your proof of concept involving poker. Is it Five-Card Draw?

Heads-Up No-Limit Texas Hold'em has become the leading benchmark in the AI community.
Sandholm: No, we’re working on a much harder poker game called Heads-Up No-Limit Texas Hold'em as the benchmark. This has become the leading benchmark in the AI community for testing these application-independent algorithms for reasoning under imperfect information.

The algorithms have really nothing to do with poker, but we needed a common benchmark, much like the IC chip makers have their benchmarks. We compare progress year-to-year and compare progress across the different research groups around the world. Heads-Up No-Limit Texas Hold'em turned out to be a great benchmark because it is a huge game of imperfect information.

It has 10 to the power of 161 different situations that a player can face. That is a one followed by 161 zeros. And if you think about that, it's not only more than the number of atoms in the universe, but even if, for every atom in the universe, you had a whole other universe and counted all those atoms in those universes -- it would still be more than that.
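That claim holds up against the usual rough estimate of about 10^80 atoms in the observable universe, as this quick sanity check shows:

```python
# ~1e80 atoms per universe is the commonly cited rough estimate.
atoms = 10**80
print(atoms * atoms)            # 1e160: a universe of atoms for every atom
print(atoms * atoms < 10**161)  # True -- still fewer than the game's situations
```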

Gardner: This is as close to infinity as you can probably get, right?

Sandholm: Ha-ha, basically yes.

Gardner: Okay, so you have this massively complex potential data set. How do you winnow that down, and how rapidly does the algorithmic process and platform learn? I imagine that being reactive, creating a pattern that creates better learning is an important part of it. So tell me about the learning part.

Three part harmony

Sandholm: The learning part always interests people, but it's not really the only part here -- or not even the main part. We basically have three main modules in our architecture. One computes approximations of Nash equilibrium strategies using only the rules of the game as input. In other words, game-theoretic strategies.

That doesn’t take any data as input, just the rules of the game. The second part is during play, refining that strategy. We call that subgame solving.

Then the third part is the learning part, or the self-improvement part. And there, traditionally people have done what’s called opponent modeling and opponent exploitation, where you try to model the opponent or opponents and adjust your strategies so as to take advantage of their weaknesses.

However, when we go against these absolute best human strategies, the best human players in the world, I felt that they don't have that many holes to exploit and they are experts at counter-exploiting. When you start to exploit opponents, you typically open yourself up for exploitation, and we didn't want to take that risk. In the learning part, the third part, we took a totally different approach than traditionally is taken in AI.

We are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.
We said, “Okay, we are going to play according to our approximate game-theoretic strategies. However, if we see that the opponents have been able to find some mistakes in our strategy, then we will actually fill those mistakes and compute an even closer approximation to game-theoretic play in those spots.”

One way to think about that is that we are letting the opponents tell us where the holes are in our strategy. Then, in the background, using supercomputing, we are fixing those holes.
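The transcript does not name the specific algorithms behind Libratus, but regret minimization in self-play is one standard way to approximate Nash equilibrium strategies from the rules of a game alone. Here is a minimal regret-matching sketch on rock-paper-scissors; the real system works on a vastly larger imperfect-information game tree:

```python
import numpy as np

PAYOFF = np.array([   # row player's payoffs for rock-paper-scissors
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

def strategy(regrets):
    """Mix in proportion to positive cumulative regret; uniform if none."""
    pos = np.maximum(regrets, 0)
    if pos.sum() > 0:
        return pos / pos.sum()
    return np.full(len(regrets), 1 / len(regrets))

row_reg, col_reg = np.zeros(3), np.zeros(3)
row_sum, col_sum = np.zeros(3), np.zeros(3)

for _ in range(100_000):
    s_r, s_c = strategy(row_reg), strategy(col_reg)
    row_sum += s_r
    col_sum += s_c
    row_vals = PAYOFF @ s_c       # each row action vs. the column player's mix
    col_vals = -(s_r @ PAYOFF)    # zero-sum: the column player's payoffs negate
    row_reg += row_vals - s_r @ row_vals
    col_reg += col_vals - s_c @ col_vals

# The average strategies converge toward the Nash equilibrium [1/3, 1/3, 1/3].
print(row_sum / row_sum.sum())
print(col_sum / col_sum.sum())
```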



Gardner: Is this being used in any business settings? It certainly seems like there's potential there for a lot of use cases. Business competition and circumstances seem to have an affinity for what you're describing in the poker use case. Where are you taking this next?

Sandholm: So far this, to my knowledge, has not been used in business. One of the reasons is that we have just reached the superhuman level in January 2017. And, of course, if you think about your strategic reasoning problems, many of them are very important, and you don't want to delegate them to AI just to save time or something like that.

Now that the AI is better at strategic reasoning than humans, that completely shifts things. I believe that in the next few years it will be a necessity to have what I call strategic augmentation. So you can't have just people doing business strategy, negotiation, strategic pricing, and product portfolio optimization.

You are going to have to have better strategic reasoning to support you, and so it becomes a kind of competition. So if your competitors have it, or even if they don't, you better have it because it’s a competitive advantage.

Gardner: So a lot of what we're seeing in AI and machine learning is to find the things that the machines do better and allow the humans to do what they can do even better than machines. Now that you have this new capability with strategic reasoning, where does that demarcation come in a business setting? Where do you think that humans will be still paramount, and where will the machines be a very powerful tool for them?

Human modeling, AI solving

Sandholm: At least in the foreseeable future, I see the demarcation as being modeling versus solving. I think that humans will continue to play a very important role in modeling their strategic situations, just to know everything that is pertinent and deciding what’s not pertinent in the model, and so forth. Then the AI is best at solving the model.

That's the demarcation, at least for the foreseeable future. In the very long run, maybe the AI itself actually can start to do the modeling part as well as it builds a better understanding of the world -- but that is far in the future.

Gardner: Looking back at what is enabling this -- clearly the software, the algorithms, and finding the right benchmark, in this case the poker game, are essential. But with that large a potential data set -- a probability space like you mentioned -- the underlying computer systems need to keep up. Where are you in terms of the threshold that holds you back? Is it a price issue? Is it a performance limit, the amount of time required? What are the limits, the governors to continuing?

Sandholm: It's all of the above, and we are very fortunate that we had access to Bridges; otherwise this wouldn’t have been possible at all.  We spent more than a year and needed about 25 million core hours of computing and 2.6 petabytes of data storage.

This amount is necessary to conduct serious absolute superhuman research in this field -- but it is something very hard for a professor to obtain. We were very fortunate to have that computing at our disposal.

Gardner: Let's examine the commercialization potential of this. You're not only a professor at Carnegie Mellon, you’re a founder and CEO of a few companies. Tell us about your companies and how the research is leading to business benefits.

Superhuman business strategies

Sandholm: Let’s start with Strategic Machine, a brand-new start-up company, all of two months old. It’s already profitable, and we are applying the strategic reasoning technology, which again is application independent, along with the Libratus technology, the Lengpudashi technology, and a host of other technologies that we have exclusively licensed to Strategic Machine. We are doing research and development at Strategic Machine as well, and we are taking these to any application that wants us.


Such applications include business strategy optimization, automated negotiation, and strategic pricing. Typically when people do pricing optimization algorithmically, they assume that either their company is a monopolist or the competitors’ prices are fixed, but obviously neither is typically true.

We are looking at how do you price strategically where you are taking into account the opponent’s strategic response in advance. So you price into the future, instead of just pricing reactively. The same can be done for product portfolio optimization along with pricing.

Let's say you're a car manufacturer and you decide what product portfolio you will offer and at what prices. Well, what you should do depends on what your competitors do and vice versa, but you don’t know that in advance. So again, it’s an imperfect-information game.
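A minimal sketch of pricing as a game rather than a solo optimization: two firms with a simple linear demand model (all parameters hypothetical) iterate best responses until prices settle at the equilibrium, instead of each optimizing while holding the rival's price fixed:

```python
def best_response(rival_price, cost=2.0, a=10.0, b=2.0, c=1.0):
    """Profit-maximizing price given demand q = a - b*p + c*rival_price."""
    # Maximize (p - cost) * (a - b*p + c*rival_price); first-order condition:
    return (a + c * rival_price + b * cost) / (2 * b)

p1 = p2 = 5.0
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)

print(round(p1, 3), round(p2, 3))  # both converge to ~4.667, the equilibrium
```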

Gardner: And these are some of the most difficult problems that businesses face. They have huge billion-dollar investments that they need to line up behind for these types of decisions. Because of that pipeline, by the time they get to a dynamic environment where they can assess -- it's often too late. So having the best strategic reasoning as far in advance as possible is a huge benefit.

If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future.
Sandholm: Exactly! If you think about machine learning traditionally, it's about learning from the past. But strategic reasoning is all about figuring out what's going to happen in the future. And you can marry these up, of course, where the machine learning gives the strategic reasoning technology prior beliefs, and other information to put into the model.

There are also other applications. For example, there are several in cyber security, such as zero-day vulnerabilities. You can run your custom algorithms and standard algorithms to find them, and which algorithms you should run depends on what the opposing governments run -- so it is a game.

Similarly, once you find them, how do you play them? Do you report your vulnerabilities to Microsoft? Do you attack with them, or do you stockpile them? Again, your best strategy depends on what all the opponents do, and that's also a very strategic application.

And in upstairs block trading, in finance, it's the same thing: a few players, very big, very strategic.

Gaming your own immune system

The most radical application is something that we are working on currently in the lab where we are doing medical treatment planning using these types of sequential planning techniques. We're actually testing how well one can steer a patient's T-cell population to fight cancers, autoimmune diseases, and infections better by not just using one short treatment plan -- but through sophisticated conditional treatment plans where the adversary is actually your own immune system.

Gardner: Or cancer is your opponent, and you need to beat it?

Sandholm: Yes, that’s right. There are actually two different ways to think about that, and they lead to different algorithms. We have looked at it where the actual disease is the opponent -- but here we are actually looking at how do you steer your own T-cell population.

Gardner: Going back to the technology, we've heard quite a bit from HPE about more memory-driven and edge-driven computing, where the analysis can happen closer to where the data is gathered. Are these advances of any use to you in better strategic reasoning algorithmic processing?

Algorithms at the edge

Sandholm: Yes, absolutely! We actually started running at the PSC on an earlier supercomputer, maybe 10 years ago, which was a shared-memory architecture. And then with Bridges, which is mostly a distributed system, we used distributed algorithms. As we go into the future with shared memory, we could get a lot of speedups.

We have both types of algorithms, so we know that we can run on both architectures. But obviously, the shared-memory, if it can fit our models and the dynamic state of the algorithms, is much faster.

Gardner: So the HPE Machine must be of interest to you: HPE’s advanced concept demonstration model, with a memory-driven architecture, photonics for internal communications, and so forth. Is that a technology you're keeping a keen eye on?



Sandholm: Yes. That would definitely be a desirable thing for us, but what we really focus on is the algorithms and the AI research. We have been very fortunate in that the PSC and HPE have been able to take care of the hardware side.

We really don’t get involved in the hardware side that much, and I'm looking at it from the outside. I'm trusting that they will continue to build the best hardware and maintain it in the best way -- so that we can focus on the AI research.

Gardner: Of course, you could help supplement the cost of the hardware by playing superhuman poker in places like Las Vegas, and perhaps doing quite well. 
It's unethical to pretend to be a human when you are not. The monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Sandholm: Actually, here in the live games in Las Vegas, they don't allow that type of computational support. On the Internet, AI has become a big problem on gaming sites, and it will become an increasing problem. We don't put our AI in there; it's against the site rules. Also, I think it's unethical to pretend to be a human when you are not. The business opportunities, the monetary opportunities in the business applications, are much bigger than what you could hope to make in poker anyway.

Gardner: I'm afraid we'll have to leave it there. We have been learning how Carnegie Mellon University researchers are using strategic reasoning advances, applying them to poker as a benchmark -- but clearly with a lot more runway toward other business and strategic reasoning benefits.

So a big thank you to our guest, Tuomas Sandholm, Professor at Carnegie Mellon University as well as Director of the Electronic Marketplaces Lab there.

Sandholm: Thank you, my pleasure.

Gardner: And a big thank you to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Carnegie Mellon University researchers are advancing strategic reasoning and machine learning capabilities using high performance computing. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

