
Tuesday, October 23, 2018

The New Procurement Advantage: How Business Networks Generate Multi-Party Ecosystem Solutions

Transcript of a discussion on how ecosystems built from business networks like the SAP Ariba Network incubate innovative third-party collaboration solutions that benefit both buyers and sellers.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next intelligent enterprise discussion explores new opportunities for innovation and value creation inside of business-to-business (B2B) ecosystems.

We’ll explore how business and technology platforms have evolved and why third-party businesses and modern analytics solutions are joining forces to create new breeds of digital commerce benefits.

To explain more on how business ecosystems are becoming incubators for value-added services for both business buyers and sellers, we are joined by Sean Thompson, Senior Vice President and Global Head of Business Development and Ecosystem at SAP Ariba. Welcome to BriefingsDirect, Sean.

Sean Thompson: Good morning, Dana. Thank you very much for having me.

Gardner: Why is now the right time to highlight collaboration inside of business ecosystems?

Thompson: It’s a fascinating time to be alive when you look at the largest companies on this planet, the five most valuable companies -- Apple, Amazon, Google, Microsoft, and Facebook. They all share something in common: they have built and hosted very rich ecosystems.

Ecosystems enrich the economy

These platforms represent wonderful economics for the companies themselves. But the members of the ecosystems also enjoy a very profitable place to do business. This includes the end-users profiting from the network effect that Facebook provides in terms of keeping in touch with friends, etc., as well as the advertisers who get value from the specific targeting of Facebook users based on end-user interests and values.

So, it’s an interesting time to look at where these companies have taken us in the overall economy. It’s also an indication for other parts of the technology world that ecosystems in the cloud era are becoming more important. In the cloud era, you have multitenancy, where the hosts of these applications, like SAP Ariba, run on multitenant platforms. No longer are these applications delivered on-premises.

Now, it’s a cloud application enjoyed by more than 3.5 million organizations around the world. It’s hosted by SAP Ariba in the cloud. As a result, you have a wonderful ecosystem that evolved around a particular audience to which you can provide new value. For us, at SAP Ariba, the opportunity is to have an open mindset, much like the companies that I mentioned.

It is a very interesting time because business ecosystems now matter more than ever in the technology world, and it’s mainly due to cloud computing.

Gardner: These platforms create escalating value. Everybody involved is a winner, and the more they play, the more winnings there are for all. The participation grows the pie and builds a virtuous adoption cycle.


Is that how you view business ecosystems, as an ongoing value-added creation mechanism? How do you define a business ecosystem, and how is that different from five years ago?

Thompson: I say this to folks that I work with every day -- not only inside of SAP Ariba, but also to members of our partner community, our ecosystem -- “We are privileged in that not every company can talk about an ecosystem, mainly because you have to have relevance in order for such an ecosystem to develop.”

I wrote an article recently wherein I was reminded of growing up in Montana. I’m a big fly fisherman. I grew up with a fly rod in my hand. It didn’t dawn on me until later in my professional life that I used to talk about ecosystems as a kid. We used to talk about the various bug hatches that would happen and how that would make the trout go crazy.

I was taught by my dad about the certain ecosystems that supported different bugs and the different life that the trout feed on. In order to have an ecosystem -- whether it was fly-fishing as a kid in the natural environment or business ecosystems built today in the cloud -- it starts with relevance. Do you have relevance, much like Microsoft had relevance back in the personal computer (PC) era?

Power of relevance 

Apple created the PC era, but Microsoft decided to license the PC operating system (OS) to many and thus became relevant to all the third-party app developers. The Mac was closed. The strategy that Apple had in the beginning was to control this closed environment. That led to a wonderful user experience. But it didn’t lead to a place where third-party developers could build applications and get them sold.

Windows and a Windows-compatible PC environment created a profitable place that had relevance. More PC manufacturers used Windows as a standard, third-party app developers could build and sell the applications through a much broader distribution network, and that then was Microsoft’s relevance in the early days of the PC.

Other ecosystems have to have relevance, too. There have to be the right conditions for third parties to be attracted, and ultimately -- in the business world -- it’s all about, if you will, profit. Can I enjoy a profitable existence by joining the ecosystem?

At SAP Ariba, I always say, we are privileged because we do have relevance.

Salesforce.com also had relevance in its early days when it distributed its customer relationship management (CRM) app widely and efficiently. They pioneered the notion of only needing a username, a password, and a credit card to distribute and consume a CRM app. Once that Sales Force Automation app was widely distributed, all of a sudden you had an ecosystem that began to pay attention because of the relevancy that Salesforce had. It was able to turn the relevancy of the app into an ecosystem that was based on a platform, and they introduced Force.com and the AppExchange for third parties to extend the value of the applications and the platform.

It’s very similar to what we have here at SAP Ariba. The relevance in the ecosystem is supported by market relevance from the network. So it’s a fascinating time.

Gardner: What exactly is the relevance with the SAP Ariba platform? You’re in an auspicious place -- between buyers and sellers at the massive scale that the cloud allows. And increasingly the currency now is data, analytics, and insights.

Global ERP efficiency

Thompson: It’s very simple. I first got to know Ariba professionally back in the 1990s. I was at Deloitte, where I was one of those classic re-engineering consultants in the mid-90s. Then during the Y2K era, companies were getting rid of the old mainframes because they thought the code would fail when the calendar turned over to the year 2000. That was a wonderful perfect storm in the industry and led to the first major wave of consuming enterprise resource planning (ERP) technology and software.

Ariba was born out of that same era, with an eye toward procurement and helping the procurement organization within companies better manage spend.

ERP was about making spend more efficient, too, and making the organization more efficient overall. It was not just about reducing waste inherent within the silos of an organization. It was also about the waste in how companies spent money, managed suppliers, and managed spend against contracts that they had with those suppliers.

And so, Ariba -- not unlike Salesforce and other business applications that became relevant -- was the first to focus on the buyer, in particular the buyer within the procurement organization. The focus was on using a software application to help companies make better decisions around who they are sourcing from, their supply chain, and driving end-users to buy based on contracts that can be negotiated. It became an end-to-end way of thinking about your source-to-settle process. That was very much an application-led approach that SAP Ariba has had for the better part of 20 years.

When SAP bought Ariba in 2012, it included Ariba naturally within the portfolio of the largest ERP provider, SAP. But instead of thinking of it as a separate application, now Ariba is within SAP, enabling what we call the intelligent enterprise. The focus remains on making the enterprise more intelligent.

Pioneers in the cloud

SAP Ariba was also one of the first to pioneer moving from an on-premises world into the cloud. And by doing so, Ariba created a business network. It was very early in pioneering the concept of a network where, by delighting the buyer and the procurement organization, that organization also brought their suppliers in with them.

Ariba early on had the concept of, “Let’s create a network where it’s not just one-to-one between a buyer and a supplier. Rather let’s think about it as a network -- as a marketplace -- where suppliers can make connections with many buyers.”

And so, very early on, SAP Ariba created a business network. That network today is made up of 3.5 million buyers and sellers doing $2.2 trillion annually in commerce through the Ariba Network.

Now, as you pointed out, the currency is all about data. Because we are in the cloud, a network, and multitenant, our data model is structured in a way that is far better than in an on-premises world. We now live within a cloud environment with a consistent data structure. Everybody is operating within the same environment, with the same code base. So now the data we have within SAP Ariba -- within that digital commerce data set -- becomes incredibly valuable to third parties. They can think about how they can enhance that value.

As an example, we are working with banks today that are very interested in using data to inform new underwriting models. A supplier will soon be able to log in to the SAP Ariba Network and see that there are banks offering them loans based on data available in the network. It means new loans at better rates because of the data value that the SAP Ariba Network provides. The notion of an ecosystem is now extending to very interesting places like banking, with financial service providers being part of a business network and ecosystem.
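
To make that idea concrete, here is a minimal, purely hypothetical sketch of how a lender might turn a supplier's transaction history on a business network into a simple underwriting signal. This is not SAP Ariba's or any bank's actual model; the schema, field names, weights, and thresholds are all assumptions made for illustration.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class NetworkInvoice:
    """One settled invoice as it might appear in a network data feed (hypothetical schema)."""
    amount: float          # invoice value in USD
    days_to_payment: int   # days between invoicing and settlement
    disputed: bool         # whether the buyer disputed the invoice

def underwriting_signal(invoices: list[NetworkInvoice]) -> float:
    """Toy score in [0, 1]: higher suggests the supplier looks like a safer borrower.

    Combines observed volume, payment punctuality, and dispute rate -- the kinds
    of signals a lender might read off network transaction data. The weights and
    cutoffs below are made up for illustration only.
    """
    if not invoices:
        return 0.0
    volume = sum(inv.amount for inv in invoices)
    volume_score = min(volume / 1_000_000, 1.0)   # caps at $1M of observed commerce
    punctuality = mean(1.0 if inv.days_to_payment <= 45 else 0.0 for inv in invoices)
    dispute_rate = mean(1.0 if inv.disputed else 0.0 for inv in invoices)
    return 0.4 * volume_score + 0.4 * punctuality + 0.2 * (1.0 - dispute_rate)

# Example: a supplier with steady, promptly paid, undisputed invoices scores well.
history = [NetworkInvoice(50_000, 30, False) for _ in range(12)]
print(round(underwriting_signal(history), 2))  # 0.84 with these made-up weights
```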

We are going beyond the traditional old applications -- what we used to call independent software vendors (ISVs). We’re now bringing in service providers and data services providers. It’s very interesting to see the variety of different business models joining today’s ecosystems.

Gardner: Another catalyst to the power and value of the network and the platform is that many of these third parties are digital organizations. They’re sharing their value and adding value as pure services so that the integration pain points have been slashed. It’s much easier for a collaborative solution to come together.

Can you provide any other examples, Sean, of how third parties enter into a platform-network ecosystem and add value through digital transformation and innovation?

Relationships rule

Thompson: Yes. Looking back at my career, 25 years ago I met SAP for the first time, when I was with Deloitte. And Deloitte is still a very strong partner of SAP, a very strong player within the technology industry as a systems integrator (SI) and consulting organization.

We have enjoyed relationships with Deloitte, Accenture, IBM, Capgemini, and many other organizations. Today they play a role -- as they did in the past -- of delivering value to the end customer by providing expertise, human capital, and intellectual property that is inherent in their many methodologies -- change management methodologies, business process change methodologies. And there’s still a valuable role for these professional services organizations, consultants, and SIs today.

But their role has evolved, and it’s a fascinating evolution. It’s no longer customizing on-premises software. Back in the day, when I was at Deloitte, we made a lot of money by helping companies adopt an application like an SAP or an Oracle ERP and customizing it. But you ended up customizing for one, building a single-family home, if you will, that was isolated. You ended up forking the code, so you had a very difficult time upgrading because you had customized the code so much that you fell behind.

Now, in the cloud, the SI is no longer customizing on-premises software; it’s configuring cloud environments. That configuring of cloud environments allows not only for the customer to never be left behind -- a wonderful value for the industry in general -- but also for the SI to play a new role.

That role is now a hybrid of both consulting and of helping companies understand how to adopt and change their multicloud processes to become more efficient. The SIs are also becoming [cloud service providers] themselves because what they used to do in customizing on-premises software, they now do by building extensions to clouds and among clouds.

They can create extensions of a solution like SAP Ariba for certain industries, like oil and gas, for example. You will see SAP continue to evolve its relationships with these service providers so that those services companies begin to look more like hybrid business models -- where they enjoy some intellectual property and extensions to cloud environments, as well as monetizing their methodologies as they have in the past.

This is a fascinating evolution that’s profitable for those companies because they go from a transactional business model -- where they have to sell one client at a time and one implementation at a time -- to monetizing based on a subscription model, much like we in the ISV world have done.

There are many other examples of new and interesting ways within the SAP Ariba ecosystem and network of buyers and suppliers where third-party ecosystem participants gather additional data about suppliers -- and sometimes about buyers. For example, they help both suppliers and buyers better manage risk: financial risk, supply chain disruption, ensuring there isn’t slave labor in your supply chain, or verifying there is sufficient diversity in your supply chain.

The supplier risk category for us is very important. It requires an ecosystem of provider data that enriches the supplier profile. And that can then become an enhancement to the overall value of the business network.

We are now able to reach out and offer ways in which third parties can contribute their intellectual property -- be it a methodology, data, analytics, or financial services. And that’s why it’s a really exciting time to be in the environment we are today.

Gardner: This network effect certainly relates to solution sets like financial services and risk management. You mentioned also that it pertains to such vertical industries like oil and gas, pharmaceutical, life sciences, and finance. Does it also extend to geographies and a localization-solution benefit? Does it also pertain to going downstream for small- to medium-sized businesses (SMBs) that might not have been able to afford or accommodate this high-level collaboration?

Reach around the world

Thompson: Absolutely, and it’s a great question. I remember the first wave of ERP; it marked a major consumption of technology to improve business, and that led to the tremendous productivity gains we’ve enjoyed through the growth of the world economy. Business productivity through technology investment has driven a great deal of growth in the economy.

Now, you ask, “Does this extend?” And that’s what’s so fascinating about cloud and when you combine cloud with the concept of ecosystem -- because everybody enjoys a benefit from that.

As an example, you mentioned localization. Within SAP Ariba, we are all about intelligent business commerce and how we can make business commerce more efficient all around the world. That’s what we are about.

In some countries, business commerce involves the good old-fashioned invoicing, orders, and taxation tasks. At Ariba, we don’t want to solve all of that so-called last mile of the tax data and process needed for invoices in, say, Mexico.

We want to work with members of the ecosystem that do that. An example is Thomson Reuters, whose business is in part about managing a database of local tax data that is relevant to what’s needed in these different geographies.

Having one relationship with a large provider of that data, and being able to distribute that data to the end users -- companies in places like Mexico and Korea that need a solution -- means they are going to be compliant with the local authorities and regulations thanks to up-to-date tax data.

That’s an example of an extremely efficient way for us to distribute to the globe based on cloud and an ecosystem from within which Thomson Reuters provides that localized and accurate tax data.

Support for all sizes

You also asked about SMBs. Prior to being at SAP Ariba, I was part of an SMB support organization with the portfolio of Business ByDesign and Business One, which are smaller ERP applications designed for SMBs. And one of them, Business ByDesign, is a cloud-based offering.

In the past, the things that large companies were able to do were often too expensive for SMBs. That’s because they required on-premises data centers, with servers, software consultants, and all of the things that large enterprises could afford to drive innovation in the pre-cloud world. This was all just too expensive for SMBs.

Now the distribution model is represented by cloud and the multitenant nature of these solutions that allow for configuration -- as opposed to costly and brittle customization. They now have an easy upgrade path and all the wonderful benefits of the cloud model. And when you combine that with a business solutions ecosystem then you can fully support SMBs.

For example, within SAP Ariba, we have an SMB consulting organization focused on helping midsize companies adopt solutions in an agile way, so that it’s not a big bang. It’s not an expensive consulting service; instead, it’s prescriptive in terms of how you should begin small and grow as you adopt cloud solutions.

Such an SMB mindset has enabled us to take the same SAP Ariba advantage -- no code, just preconfigure it, and start small. As we like to say at SAP Ariba, it’s a T-shirt-size implementation: small, medium, and large.

That’s an example of how the SMB business segment really benefits from this era of cloud and ecosystem that drives efficiency for all of us.

Gardner: Given that the value of any business network and ecosystem increases with the number of participants – including buyers, sellers, and third-party service providers -- what should they be thinking to get in the best position to take advantage of these new trends, Sean? What should you be thinking in order to begin leveraging and exploiting this overall ecosystem approach and its benefits?

Thompson: I’m about to get on an airplane to go to South Korea. In some of these geographies where we do business, the majority of businesses are SMBs.

And I am still shocked that some of these companies have not prioritized technology adoption. There are a lot of industries, and a lot of companies in different segments, that are still very much analog. They are doing business the way they’ve been doing business for many years, and they have been resistant to change because their cottage industry has allowed them to maintain, if you will, Excel spreadsheet approaches to business and process.

I spent a decade of my life at Microsoft, and when we looked at the different ways Excel was used we were fascinated by the fact that Excel in many ways was used as a business system. Oftentimes, that was very precarious because you can’t manage a business on Excel. But I still see that within companies today.

The number one thing that every business owner needs to understand is that we are in an exponential time of transformation. Transformation we once expected to be linear is now in an exponential phase. Disruption of industries is happening in real time and rapidly. If you’re not prioritizing and investing in technology -- and not thinking of your business as a technology business -- then you will get left behind.

Never underestimate the impact that technology can have to drive topline growth. But technology also preserves the option value for your company in the future because disruption is happening. It’s exponential and cloud is driving that.

Get professional advice 

You also have to appreciate the value of getting good advice. There are good companies looking to help. We have many of them within our ecosystem, from the large SIs to midsize companies focused on helping SMBs.

As I mentioned before, I grew up fly fishing. When anybody comes to me and says, “Hey, I’d love to go learn how to fly fish,” I say, “Start with hiring a professional guide. Spend a day on a river with a professional guide because they will show you how to do things.” I honestly think that same advice applies here: a professional guide can help you understand how to consume cloud software services.

And that professional guide fee is not going to be as much as it was in the past. So I would say get professional help to start.

Gardner: I’d like to close out with a look to the future. It seems that for third-party organizations that want to find a home in an ecosystem, there’s never been a better time to innovate and find new business models and new ways of collaborating.

You mentioned risk management and financial improvements and efficiency. What are some of the other areas for new business models within ecosystems? Where are we going to see some new and innovative business models cropping up, especially within the SAP Ariba network ecosystem?

Thompson: You mentioned it earlier in the conversation. The future is about data. The future is about insights that we gather from the data.

I started a company in the natural language processing world. I spent five years of my life understanding how to drive a new type of user experience by using voice. It’s about natural language and understanding how to drive domain-specific knowledge of what people want through a natural user interface.

I’ve played on the edge of where we are in terms of artificial intelligence (AI) within that natural language processing. But we’re still fiddling in many respects. We still fiddle in the business software arena, talking about chatbots, talking about natural user interfaces.

We’re still early in a very interesting future. We’re still very early in understanding how to gather insights from data. At SAP Ariba we have a treasure trove of data from $2.1 trillion in commerce among 3.5 million members in the Ariba Network.

The future is data driven 

There are so many data insights available on contracts and supplier profiles alone. So the future is about being able to harvest insights from that data. It’s now very exciting to be able to leverage the right infrastructure like the S/4 HANA data platform.

But we have a lot of work to do still to clean data and ensure the structure, privacy, and security of the data. The future certainly is bright. It will be magical in how we will be able to be proactive in making recommendations based on understanding all the data.

Buyers will be proactively alerted that something is going on in the supply chain. We will be able to predict and be prescriptive in the way the business operates. So it is a fascinating future that we have ahead of us. It’s very exciting to be a part of it.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on new opportunities for innovation and value creation among business ecosystem participants. And we’ve learned how business ecosystems are incubating new levels of buyer and seller value-added services.

So a big thank you to our guest, Sean Thompson, Senior Vice President and Global Head of Business Development and Ecosystem at SAP Ariba. Thank you, sir.

Thompson: Thanks, Dana.

Gardner: And thank you as well to our audience for joining this BriefingsDirect digital business innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of SAP Ariba-sponsored BriefingsDirect discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how ecosystems built from business networks like the SAP Ariba Network incubate innovative third-party collaboration solutions that benefit both buyers and sellers. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.


Friday, June 15, 2018

Legacy IT Evolves: How Cloud Choices Like Microsoft Azure Can Conquer the VMware Tax

Transcript of a panel discussion exploring how organizations can gain a future-proof path to hybrid computing that simplifies architecture and makes total economic sense.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Navisite.

Dana Gardner: Hello, and welcome to a panel discussion on how enterprises can gain a future-proof path to hybrid cloud computing. We'll now explore cloud adoption strategies that seek to simplify IT operations, provide cloud deployment choice -- and that make the most total economic sense.

I'm Dana Gardner, principal analyst at Interarbor Solutions, and I'll be your moderator for this discussion on how cloud choices can help conquer the “VMware tax” by moving beyond a virtualization legacy.

Many data center operators face a crossroads now as they consider the strategic implications of new demands on their IT infrastructure and the new choices that they have when it comes to a cloud continuum of deployment options. These hybrid choices span not only cloud hosts and providers, but also platform technology choices such as containers, intelligent network fabrics, serverless computing, and, yes, even good old bare metal.

The complexity of choice goes further because long-term decisions about technology must also include implications for long-term recurring costs -- as well as business continuity. As IT architects and operators seek to best map a future from a VMware hypervisor and traditional data center architecture, they also need to consider openness and lock-in. They must evaluate the companies behind the platforms, their paths and motivations and how well they will be partners -- and not just vendors. And we will examine the best metrics for making decisions and weighing the trade-offs that impact performance, total cost, and risk.

Our panelists will review how cloud providers such as Microsoft Azure are sweetening the deal to transition to predictable hybrid cloud models. The discussion is designed to help IT leaders find the right trade-offs and the best rationale for making the strategic decisions for their organization's digital transformation.

With that, please join me in welcoming our guests. We are joined by David Grimes, Vice President of Engineering at Navisite. Welcome, David.

David Grimes: Good morning, Dana. I’m excited to be here.

Gardner: We're also here with David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting. Welcome, Dave.

David Linthicum: It's great to be here, Dana. Thank you very much.

Gardner: And we're also here with Tim Crawford, CIO Strategic Advisor at AVOA. Welcome, Tim.

Tim Crawford: Hey, Dana, thanks for having me on the program.

Gardner: Clearly, over the past decade or two, countless virtual machines have been spun up to redefine data center operations and economics. And as server and storage virtualization were growing dominant, VMware was crowned -- and continues to remain -- a virtualization market leader. The virtualization path broadened over time from hypervisor adoption to platform management, network virtualization, and private cloud models. There have been a great many good reasons for people to exploit virtualization and adopt more of a software-defined data center (SDDC) architecture. And that brings us to where we are today.

Dominance in virtualization, however, has not translated into an automatic path from virtualization to a public-private cloud continuum. Now, we are at a crossroads, specifically for the economics of hybrid cloud models. Pay-as-you-go consumption models have forced a reckoning on examining your virtual machine past, present, and future.

My first question to the panel is ... What are you now seeing as the top drivers for people to reevaluate their enterprise IT architecture path?

The cloud-migration challenge

Grimes: It's a really good question. As you articulated it, VMware radically transformed the way we think about deploying and managing IT infrastructure, but cloud has again redefined all of that. And the things you point out are exactly what many businesses face today, which is supporting a set of existing applications that run the business. In most cases they run on very traditional infrastructure models, but they're looking at what cloud now offers them in terms of being able to reinvent that application portfolio.

But that's going to be a multiyear journey in most cases. One of the things that I think about as the next wave of transformation takes place is how we enable development in these new models, such as containers and serverless, and using all of the platform services of the hyperscale cloud. How do we bring those to the enterprise in a way that will keep them adjacent to the workloads? Separating off the application and the data is very challenging.

Gardner: Dave, organizations would probably have it easier if they're just going to go from running their on-premises apps to a single public cloud provider. But more and more, we're quite aware that that's not an easy or even a possible shift. So, when organizations are thinking about the hybrid cloud model, and moving from traditional virtualization, what are some of the drivers to consider for making the right hybrid cloud model decision, where they can do both on-premises private cloud as well as public cloud?

Know what you have, know what you need

Linthicum: It really comes down to the profiles of the workloads, the databases, and the data that you're trying to move. And one of the things that I tell clients is that cloud is not necessarily something that's automatic. Typically, they are going to be doing something that may be even more complex than they have currently. But let's look at the profiles of the existing workloads and the data -- including security, governance needs, what you're running, what platforms you need to move to -- and that really kind of dictates which resources we want to put them on.

As an architect, when I look at the resources out there, I see traditional systems, I see private clouds, virtualization -- such as VMware -- and then the public cloud providers. And many times, the choice is going to be all four. And having pragmatic hybrid clouds -- which pair traditional systems with private and public clouds -- means multiple clouds at the same time. And so, this really becomes an analysis in terms of how you're going to look at the existing as-is state. And the to-be state is really just a function of the business requirements that you see. So, it's a little easier than I think most people think, but the outcome is typically going to be more expensive and more complex than they originally anticipated.
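
As a minimal illustration of that workload-profiling step -- a sketch under stated assumptions, not Deloitte's or anyone's actual methodology -- here is a hedged example that scores a workload's profile against a few candidate deployment targets. Every attribute, weight, and target name below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Attributes an architect might capture per workload (hypothetical rubric)."""
    data_sensitivity: int   # 1 (public data) .. 5 (highly regulated)
    refactor_effort: int    # 1 (lift-and-shift ready) .. 5 (deep rewrite needed)
    elasticity_need: int    # 1 (steady load) .. 5 (very bursty)
    latency_coupling: int   # 1 (standalone) .. 5 (tightly coupled to on-prem systems)

def placement_scores(w: WorkloadProfile) -> dict[str, int]:
    """Made-up scoring: each target rewards or penalizes certain attributes."""
    return {
        "public_cloud":   w.elasticity_need + (5 - w.refactor_effort)
                          + (5 - w.latency_coupling) - w.data_sensitivity,
        "private_cloud":  w.data_sensitivity + w.latency_coupling - w.elasticity_need,
        "traditional_dc": w.latency_coupling + w.refactor_effort - w.elasticity_need,
    }

def recommend(w: WorkloadProfile) -> str:
    """Return the highest-scoring target; real assessments weigh far more factors."""
    scores = placement_scores(w)
    return max(scores, key=scores.get)

# A bursty, loosely coupled app with modest data sensitivity leans toward public cloud.
print(recommend(WorkloadProfile(data_sensitivity=2, refactor_effort=2,
                                elasticity_need=5, latency_coupling=1)))
```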

Gardner: Tim Crawford, do people under-appreciate the complexity of moving from a highly virtualized on-premises, traditional data center to hybrid cloud?

Crawford: Yes, absolutely. Dave's right. There are a lot of assumptions that we take as IT professionals and we bring them to cloud, and then find that those assumptions kind of fall flat on their face. Many of the myths and misnomers of cloud start to rear their ugly heads. And that's not to say that cloud is bad; cloud is great. But we have to be able to use it in a meaningful way, and that's a very different way than how we've operated our corporate data centers for the last 20, 30, or 40 years. It's almost better if we forget what we've learned over the last 20-plus years and just start anew, so we don't bring forward some of those assumptions.

And I want to touch on something else that I think is really important here, which has nothing to do with technology but has to do with organization and culture, and some of the other drivers that go into why enterprises are leveraging cloud today. And that is that the world is changing around us. Our customers are changing, the speed in which we have to respond to demand and need is changing, and our traditional corporate data center stacks just aren't designed to be able to make those kinds of shifts.

And so that's why it’s going to be a mix of cloud and corporate data centers. We're going to be spread across these different modes like peanut butter in a way. But having the flexibility, as Dave said, to leverage the right solution for the right application is really, really important. Cloud presents a new model because our needs have not been able to be fulfilled in the past.

Gardner: David Grimes, application developers helped drive initial cloud adoption. These were new apps and workloads of, by, and for the cloud. But when we go to enterprises that have a large on-premises virtualization legacy -- and are paying high costs as a result -- how frequently are we seeing people move existing workloads into a cloud, private or public? Is that gaining traction now?

Lift and shift the workload

Grimes: It absolutely is. That's really been a core part of our business for a while now, certainly the ability to lift and shift out of the enterprise data center. As Dave said, the workload is the critical factor. You always need to understand the workload to know which platform to put it on. That's a given. With a lot of those existing legacy application stacks running in traditional infrastructure models, very often they get lifted and shifted into a like model -- but in a hosting provider's data center. That's because many CIOs have a mandate to close down enterprise data centers and move to the cloud. But that does, of course, mean a lot of different things.

You mentioned the push by developers to get into the cloud, and really that was what I was alluding to in my earlier comments. Such a reinventing of the enterprise application portfolio has often been led by the development that takes place within the organization. Then, of course, there are all of the new capabilities offered by the hyperscale clouds -- all of them, but notably some of the higher-level services offered by Azure, for example. You're going to end up in a scenario where you've got workloads that best fit in the cloud because they're based on the services that are now natively embodied and delivered as-a-service by those cloud platforms.

But you're going to still have that legacy stack that still needs to leave the enterprise data center. So, the hybrid models are prevailing, and I believe will continue to prevail. And that's reflected in Microsoft's move with Azure Stack, of making much of the Azure platform available to hosting providers to deliver private Azure in a way that can engage and interact with the hyperscale Azure cloud. And with that, you can position the right workloads in the right environment.

Gardner: Now that we're into the era of lift and shift, let's look at some of the top reasons why. We will ask our audience what their top reasons are for moving off of legacy environments like VMware. But first let’s learn more about our panelists. David Grimes, tell us about your role at Navisite and more about Navisite itself.

Panelist profiles

Grimes: I've been with Navisite for 23 years, really most of my career. As VP of Engineering, I run our product engineering function. I do a lot of the evangelism for the organization. Navisite's a part of Spectrum Enterprise, which is the enterprise division of Charter. We deliver voice, video, and data services to the enterprise client base of Navisite, and also deliver cloud services to that same base. It's been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models rapidly accelerating to where we are today.

Gardner: Dave Linthicum, tell us a bit about yourself, particularly what you're doing now at Deloitte Consulting.

Linthicum: I've been with Deloitte Consulting for six months. I'm the Chief Cloud Strategy Officer, the thought leadership guy, trying to figure out where the cloud computing ball is going to be kicked and what the clients are doing, what's going to be important in the years to come. Prior to that I was with Cloud Technology Partners. We sold that to Hewlett Packard Enterprise (HPE) last year. I’ve written 13 books. And I do the cloud blog on InfoWorld, and also do a lot of radio and TV. And the podcast, Dana.


Gardner: Yes, of course. You've been doing that podcast for quite a while. Tim Crawford, tell us about yourself and AVOA.

Crawford: After spending 20-odd years within the rank and file of the IT organization, also as a CIO, I bring a unique perspective to the conversation, especially about transformational organizations. I work with Fortune 250 companies, many of the Fortune 50 companies, in terms of their transformation, mostly business transformation. I help them explore how technology fits into that, but I also help them along their journey in understanding the difference between the traditional and transformational. Like Dave, I do a lot of speaking, a fair amount of writing and, of course, with that comes with travel and meeting a lot of great folks through my journeys.

Survey says: It’s economics

Gardner: Let's now look at our first audience survey results. I'd like to add that this is not scientific. This is really an anecdotal look at where our particular audience is in terms of their journey. What are their top reasons for moving off of legacy environments like VMware?

The top reason, at 75 percent, is a desire to move to a pay-as-you-go versus a cyclical CapEx model. So, the economics here are driving the move from traditional to cloud. They're also looking to get off of dated software and hardware infrastructure. A lot of people are running old hardware; it's not that efficient, it can be costly to maintain, and in some cases it's difficult or impossible to replace. There is a tie at 50 percent each between concern about the total cost of ownership, probably trying to get that down, and a desire to consolidate and integrate more apps and data, so seeking a transformation of their apps and data.

Coming up on the lower end of their motivations are complexity and support difficulties, and the developer preference for cloud models. So, the economics are driving this shift. That should come as no surprise, Tim, that a lot of people are under pressure to do more with less and to modernize at the same time. The proverbial changing of the wings of the airplane while keeping it flying. Is there any more you would offer in terms of the economic drivers for why people should consider going from a traditional data center to a hybrid IT environment?

Crawford: It's not surprising, and the reason I say that is this economic upheaval actually started about 10 years ago when we really felt that economic downturn. It caused a number of organizations to say, "Look, we don't have the money to be able to upgrade or replace equipment on our regular cycles."

And so instead of having a four-year cycle for servers, or a five-year cycle for storage, or in some cases as much as a 10-plus-year cycle for network -- they started kicking that can down the road. When the economic situation improved, rather than put money back into infrastructure, people started to ask, "Are there other approaches that we can take?" Now, at the same time, cloud was really beginning to mature and become a viable solution, especially for mid-size to large enterprises. And so, the combination of those two opened the door to a different possibility that didn't have to do with replacing the hardware in corporate data centers.

And then you have the third piece to that trifecta, which is the overall business demand. We saw a very significant change in customer buying behavior at the same time, which is that people were looking for things now. We saw the uptick of Amazon use and the move away from traditional retail, and that trend really kicked into gear around the same time. All of these together led to this shift in demand for a different kind of model, looking at OpEx versus CapEx.


Gardner: Dave, you and I have talked about this a lot over the past 10 years, economics being a driver. But you don't necessarily always save money by going to cloud. To me, what I see in these results is not just seeking lower total cost -- but simplification, consolidation, and rationalization of what enterprises do spend on IT. Does that make sense, and is that reflected in your practice?

Savings, strategy and speed

Linthicum: Yes, it is, and I think that the primary reason for moving to the cloud has morphed in the last five years from a CapEx and operational savings model into the need for strategic value. That means gaining agility, the ability to scale your systems up as you need to, to adjust to the needs of the business in the quickest way -- and be able to keep up with the speed of change.

A lot of the Global 2000 companies out there are having trouble maintaining change within the organization, to keep up with change in their markets. I think that's really going to be the death of a thousand cuts if they don't fix it. They're seeing cloud as an enabling technology to do that.

In other words, with cloud they can have the resources they need, they can get to the storage levels they need, they can manage the data that they need -- and do so at a price point that typically is going to be lower than the on-premises systems. That's why they're moving in that direction. But like we said earlier, in doing so they're moving into more complex models. They're typically going to be spending a bit more money, but the value of IT -- in its ability to delight the business in terms of new capabilities -- is going to be there. I think that's the core metric we need to consider.

Gardner: David, at Navisite, when it comes to cost balanced by the business value from IT, how does that play out in a managed hosting environment? Do you see organizations typically wanting to stick to what they do best, which is create apps, run business processes, and do data science, rather than run IT systems in and out of every refresh cycle? How is this shaking out in the managed services business?

Grimes: That's exactly what I'm seeing. Companies are really moving toward focusing on their differentiation. Running infrastructure has become almost like having power delivered to your data center. You need it, it's part of the business, but it's rarely differentiating. So that's what we're seeing.

One of the things in the survey results that does surprise me is the relatively low scoring for the operations complexity and support difficulties. With the pace of technology innovation happening -- within VMware in the enterprise context, but certainly within the context of the cloud platforms, Azure in particular -- the skillsets to use those platforms, manage them effectively, and take the biggest advantage of them are in exceedingly high demand. Many organizations are struggling to acquire and retain that talent. That's certainly been my experience in dealing with my clients and prospects.


Gardner: Now that we know why people want to move, let's look at what it is that's preventing them from moving. What are the chief obstacles that are preventing those in our audience from moving off of a legacy environment like VMware?

There's more than just a technological decision here. Dell Technologies is the major controller of VMware, even with VMware being a publicly traded company. But Dell Technologies, in order to go private, had to incur enormous debt, still in the vicinity of $48 billion. There have been reports recently of a reverse merger, where VMware as a public company would take over Dell as a private company. The markets didn't necessarily go for that, and it creates a bit of confusion and concern in the market. So Dave, is this something IT operators and architects should concern themselves with when they're thinking about which direction to go?

Linthicum: Ultimately, we need to look at the health of the company we're buying hardware and software from in terms of their ability to be around over the next few years. The reality is that VMware, Dell, and [earlier Dell merger target] EMC are mega forces in terms of a legacy footprint in a majority of data centers. I really don't see any need to be concerned about the viability of that technology. And when I look at viability of companies, I look at the viability of the technology, which can be bought and sold, and the intellectual property can be traded off to other companies. I don't think the technology is going to go away, it's just too much of a cash cow. And the reality is, whoever owns VMware is going to be able to make a lot of money for a long period of time.


Gardner: Tim, should organizations be concerned in that they want to have independence as VMware customers and not get locked in to a hardware vendor or a storage vendor at the same time? Is there concern about VMware becoming too tightly controlled by Dell at some point?

Partnership prowess

Crawford: You always have to think about who it is that you're partnering with. These days when you make a purchase as an IT organization, you're really buying into a partnership, so you're buying into the vision and direction of that given company.

And I agree with Dave about Dell, EMC, and VMware in that they're going to be around for a long period of time. I don't think that's really the factor to be as concerned with. I think you have to look beyond that.

You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally in terms of where you focus your management and your staff. That means moving up the chain, if you will, and away from the underlying infrastructure and into applications and things closely tied to business advantage.

As you start to do that, you start to look at other opportunities beyond just virtualization. You start breaking down the silos, you start breaking down the components into smaller and smaller components -- and you look at the different modes of system delivery. That's really where cloud starts to play a role.

Gardner: Let's look now to our audience for what they see as important. What are the chief obstacles preventing you from moving off of a legacy virtualization environment? Again, the economics are quite prevalent in their responses.

By a majority, they are not sure that there are sufficient return on investment (ROI) benefits. They might be wondering why they should move at all. Their fear of lock-in to a primary cloud model is also a concern. So, the economics and lock-in risk concerns are high, not just about being stuck on a virtualization legacy -- but also about moving forward. Maybe they're like the deer in the headlights.

The third concern, a close tie, is the set of issues around compliance, security, and regulatory restrictions on moving to the cloud. Complexity, and uncertainty that the migration process will be successful, are also of concern. They're worried about that lift and shift process.

They are less concerned about lack of support for moving from the C-Suite or business leadership, of not getting buy-in from the top. So … If it's working, don't fix it, I suppose, or at least don't break it. And the last issue of concern, very low, is that it’s still too soon to know which cloud choices are best.

So, it's not that they don't understand what's going on with cloud; they're concerned about risk, and the complexity of staying is a concern -- but the complexity of moving is nearly as big of a concern. David, does anything in these results jump out at you?

Feel the fear and migrate anyway

Grimes: As for not being sure of the ROI benefits, that's been a common thread for quite some time in terms of looking at these cloud migrations. But in our experience, what I've seen are clients choosing to move to a VMware cloud hosted by Navisite. They ultimately end up unlocking the business agility of their cloud, even if they weren't 100 percent sure going into it that they would be able to.

But time and time again, moving away from the enterprise data center, repurposing the spend on IT resources to become more valuable to the business -- as opposed to the traditional keeping the lights on function -- has played out on a fairly regular basis.

I agree with the audience and the response here around the fear of lock-in. And it's not just lock-in from a basic deployment infrastructure perspective, it's fear of lock-in if you choose to take advantage of a cloud’s higher-level services, such as data analytics or all the different business things that are now as-a-service. If you buy into them, you certainly increase your ability to deliver. Your own pace of innovation can go through the roof -- but you're often then somewhat locked in.

You're buying into a particular service model, a set of APIs, et cetera. It's a form of lock-in. It is avoidable if you want to build in layers of abstraction, but it's not necessarily the end of the world either. As with everything, there are trade-offs. You're getting a lot of business value in your own ability to innovate and deliver quickly, yes, but it comes at the cost of some lock-in to a particular platform.

Gardner: Dave, what I'm seeing here is people explaining why hybrid is important to them, that they want to hedge their bets. All or nothing is too risky. Does that make sense to you, that what these results are telling us is that hybrid is the best model because you can spread that risk around?

IT in the balance between past and future

Linthicum: Yes, I think it does say that. I live this on a daily basis in terms of ROI benefits and concern about not having enough, and also the lock-in model. And the reality is that when you get to an as-is architecture state, it's going to be a variety -- as we mentioned earlier -- of resources that we're going to leverage.

So, this is not all about taking traditional systems -- and the application workloads around traditional systems -- and then moving them into the cloud and shutting down the traditional systems. That won't work. This is about a balance or modernization of technology. And if you look at that, all bets are on the table -- including traditional, including private cloud, and public cloud, and hybrid-based computing. Typically, the best path to success is looking at all of that. But like I said, the solution is really going to be dependent on the requirements of the business and what we're looking at.

Going forward, these kinds of decisions are falling into a pattern, and I think that we're seeing that this is not necessarily going to be pure-cloud play. This is not necessarily going to be pure traditional play, or pure private cloud play. This is going to be a complex architecture that deals with a private and public cloud paired with traditional systems.

And so, people who do want to hedge their bets will do that around making the right decisions that they leverage the right resources for the appropriate task at hand. I think that's going to be the winning end-point. It's not necessarily moving to the platforms that we think are cool, or that we think can make us more money -- it's about localization of the workloads on the right platforms, to gain the right fit.

Gardner: From the last two survey result sets, it appears incumbent on legacy providers like VMware to try to get people to stay on their designated platform path. But at the same time, because of this inertia to shift, because of these many concerns, the hyperscalers like Google Cloud, Microsoft Azure, and Amazon Web Services also need to sweeten their deals. What are these other cloud providers doing, David, when it comes to trying to assuage the enterprise concerns of moving wholesale to the cloud?

Grimes: There are certainly those hyperscale players, but there are also a number of regional public cloud players in the form of the VMware partner ecosystem. And I think when we talk about public versus private, we also need to make a distinction between public hyperscale and public cloud that still could be VMware-based.


I think one interesting thing that ties back to my earlier comments is when you look at Microsoft Azure and their Azure Stack hybrid cloud strategy. If you flip that 180 degrees and consider the VMware on AWS strategy, I think we'll continue to see that type of thing play out going forward. Both of those approaches reflect the need to deliver the legacy enterprise workload in a way that is adjacent both in terms of technology equivalence and latency. One thing that's often overlooked is the need to examine hybrid cloud deployment models in light of the acceptable latency between applications that are inherently integrated. That can often be a deal-breaker for a successful implementation.

What we'll see is this continued evolution of ensuring that we can solve what I see as a decade-forward problem. And that is, as organizations continue to reinvent their applications portfolio they must also evolve the way that they actually build and deliver applications while continuing to be able to operate their business based on the legacy stack that's driving day-to-day operations.

Moving solutions

Gardner: Our final survey question asks: What are your current plans for moving apps and data from a legacy environment like VMware, from a traditional data center? Two strong answers out of the offerings come out on top: public clouds such as Microsoft Azure and Google Cloud, and a hybrid or multi-cloud approach. So again, they're looking at the public clouds as a way to get off of their traditional environments -- but they're looking not for just one, or a lock-in; they're looking at a hybrid or multi-cloud approach.

Coming up zero, surprisingly, is VMware on AWS, which you just mentioned, David. Private cloud hosted and private cloud on-premises both come up at about 25 percent, along with no plans to move. So, staying on-premises in a private cloud has traction for some, but for those that want to move to the dominant hyperscalers, a multi-cloud approach is clearly the favorite. 

Linthicum: I thought a few would pick VMware on AWS, but it looks like the audience doesn't see that as the solution. Everything else is not surprising; it's aligned with what we see in the marketplace right now. Movement to public clouds such as Azure and Google Cloud, and movement to complex clouds like hybrid and multi-cloud, are the two trends worth watching in the space right now, and this result reflects that.

Gardner: Let's move our discussion on. It's time to define the right trade-offs and rationale when we think about these taxing choices. We know that people want to improve, they don't want to be locked in, they want good economics, and they're probably looking for a long-term solution.

Now that we've mentioned it several times, what is it about Azure and Azure Stack that provides the appeal? Microsoft's cloud model seems to be differentiated in the market by offering both a public cloud component and an integrated -- or adjacent -- private cloud component. There's a path for people to come onto those from a variety of deployment histories, including, of course, a Microsoft environment -- but also a VMware environment. What should organizations be thinking about, what are the proper trade-offs, and what are the major concerns when it comes to picking the right hybrid and multi-cloud approach?

Strategic steps on the journey

Grimes: At the end of the day, it's ultimately a journey and that journey requires a lot of strategy upfront. It requires a lot of planning, and it requires selecting the right partner to help you through that journey.

Because whether you're planning to go all-in on Azure, or all-in on Google Cloud, or you want to stay on VMware but get out of the enterprise data center, as Dave has mentioned, the reality is that everything is much more complex than it seems. To maximize the value of the models and capabilities available today, you're almost necessarily going to end up in a hybrid deployment model -- and that means a mix of technologies in play and a mix of skillsets required to support them.

And so I think one of the key things folks should do is consider carefully how they partner. Regardless of where they are in that journey -- whether on step one or step three -- continuing it successfully is going to depend on selecting the right partner to help them.


Gardner: Dave, when you're looking at risk versus reward and cost versus benefit, and when you want to hedge your bets, what is it about Microsoft Azure and Azure Stack in particular that helps solve that? It seems to me that they've gone to great pains to anticipate the current state of the market and to differentiate themselves. Is there something about the Microsoft approach that is, in fact, differentiated among the hyperscalers?

A seamless secret

Linthicum: The paired private and public cloud, with similar infrastructures and similar -- even dynamic -- migration paths, meaning you could move workloads between them, at least as it's been described, is going to be unique in the market. It's kind of the dirty little secret.

It's generally very difficult to port from a private cloud to a public cloud, because most private clouds are not AWS and not Google -- those providers don't make private clouds. Therefore, you have to port your code between the two, just as you've had to port systems in the past. And the usual issues of refactoring and retesting, and all the rest, really come home to roost.

But Microsoft could have a product that provides a more seamless capability for doing that. The great thing is that I can really localize on whatever particular platform I'm looking at. And if I, for example, "mis-localize" or misfit a workload, then it's relatively easy to move it from private to public or public to private. This may come at a time when the market needs something like that, and I think that's what is unique about it in the space.

Gardner: Tim, what do you see as some of the trade-offs, and what is it about a public, private hybrid cloud that's architected to be just that -- that seemingly Microsoft has developed? Is that differentiating, or should people be thinking about this in a different way?

Crawford: I actually think it's significantly differentiating, especially when you consider the complexity that exists within the mass of the enterprise. You have different needs, and not all of those needs can be serviced by public cloud, and not all of them can be serviced by private cloud.

There's a model that I use with clients to go through this, and it's something that I used when I led IT organizations. When you start to pick apart these pieces, you start to realize that some of your components are well-suited for software as a service (SaaS)-based alternatives, some of the components and applications and workloads are well-suited for public cloud, some are well-suited for private cloud.

A good example is when you have sovereignty issues, or compliance and regulatory issues. And then you'll have some applications that just aren't ready for cloud. You've mentioned lift and shift a number of times, and those that have been down the lift-and-shift path have often gotten burned by it in a number of ways.

And so, you have to be mindful of which applications go in which mode. The fact that Azure Stack and Azure are similar plays pretty well for an enterprise that's thinking about skillsets, development cycles, and architectures -- and about not having to create, as Dave was mentioning, one architecture for private cloud and a completely different one for public cloud, and then, if you want to move an application or workload, having to redo it all over again. So, I think that Microsoft combination is pretty unique, and it will be really interesting for the average enterprise.
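To make that kind of triage concrete, here is a small, hypothetical sketch of the sort of model Tim describes -- score each workload on a few attributes and suggest a landing zone. The attribute names and rules are illustrative only, not his actual client methodology.

```python
# Sketch: classify workloads into SaaS, public cloud, private cloud, or "not ready".
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    commodity_function: bool   # e.g., email or CRM, where a SaaS product already exists
    data_sovereignty: bool     # must stay in a specific jurisdiction or facility
    regulated: bool            # compliance regime limits shared infrastructure
    cloud_ready: bool          # refactored, or at least cleanly virtualized

def suggest_landing_zone(w: Workload) -> str:
    if w.commodity_function:
        return "SaaS"
    if not w.cloud_ready:
        return "retain on-premises (modernize first)"
    if w.data_sovereignty or w.regulated:
        return "private cloud / hosted private cloud"
    return "public cloud"

if __name__ == "__main__":
    for w in [
        Workload("corporate email", True, False, False, True),
        Workload("claims processing", False, True, True, True),
        Workload("legacy ERP customization", False, False, False, False),
        Workload("web storefront", False, False, False, True),
    ]:
        print(f"{w.name:28s} -> {suggest_landing_zone(w)}")
```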

Gardner: From the managed service provider (MSP) perspective, at Navisite you have a large and established hosted VMware business, and you're helping people transition and migrate. But you're also looking at the potential market opportunity for an Azure Stack and hosted Azure Stack business. What is it about Microsoft's approach that might make it differentiated for a managed hosting provider?

A full-spectrum solution

Grimes: It comes down to what both Dave and Tim mentioned. Having a light stack that can be deployed in a private capacity -- which also, by the way, affords bare metal adjacency -- is appealing. We haven't talked a lot about bare metal, but it is something we see in practice quite often. There are bare metal workloads that need to be very adjacent, i.e., LAN-adjacent, to the virtualization-friendly workloads.

Being able to have the combination of all three of those things is what makes Azure Stack attractive to a hosting provider such as Navisite. With it, we can solve the full spectrum of the client's needs -- covering bare metal, private cloud, and hyperscale public -- and really in a seamless way, which is the key point.

Gardner: It's not often you can be that many things to that many people, given the heterogeneity of past deployments and the difficult choices of the present.

We have been talking about these many cloud choices in the abstract. Let's now go to a concrete example: an organization called Ceridian. Tell us how they addressed their requirements.

Grimes: Ceridian is a global human capital management company, global being a key point. They are growing like gangbusters and have been with Navisite for quite some time. It's been a very long journey.

But one thing about Ceridian is they have had a cloud-first strategy. They embraced the cloud very early. A lot of those barriers to entry that we saw, and have seen over the years, they looked at as opportunity, which I find very interesting.

Requirements around security and compliance are critical to them, but they also recognized that a service provider focused on a very specific set of IT services -- delivering managed infrastructure with security and compliance -- is likely to do that at least as effectively, if not more effectively, than doing it in-house, and at a competitive and compelling price point as well.

So some of their challenges were around all the reasons we've talked about here today as drivers for adopting cloud. It's about enabling business agility. With the growth they've experienced, they've needed to react quickly and deploy quickly, and to leverage everything that virtualization, and now cloud, enable for enterprises. But again, as I mentioned before, they worked closely with a partner to maximize the value of the technologies and to ensure that we're meeting their security and compliance needs and delivering everything from a managed infrastructure perspective.

Overcoming geographical barriers

One of the core challenges that came with that growth was a need to expand into geographies where Navisite doesn't currently operate hosting facilities. In particular, they needed to expand into Australia. And so, what we were able to do through our partnership with Microsoft was deliver the managed infrastructure to them in a similar way.

This is actually an interesting use case in that they're running a VMware-based cloud in our data center, but we were able to expand them into a managed, Azure-delivered cloud locally out of Australia. Of course, one thing we didn't touch on much today -- but that is a driver in many of these decisions for global organizations -- is that data sovereignty and locality regulations are becoming increasingly important. Certainly, Microsoft is expanding the Azure platform, and their presence in Australia has enabled us to deliver that for Ceridian.
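As a purely hypothetical illustration of that residency-driven placement, a small routing table like the one below keeps the decision explicit: pick the hosting region and platform from the tenant's jurisdiction, with a fallback default. The country codes, region names, and platform labels are invented for the example, not Ceridian's or Navisite's actual deployment.

```python
# Sketch: choose a hosting region and platform based on data residency rules.
from typing import Tuple

RESIDENCY_MAP = {
    "AU": ("australia-east", "managed Azure"),
    "DE": ("europe-west", "managed Azure"),
    "US": ("us-east", "hosted VMware private cloud"),
}
DEFAULT = ("us-east", "hosted VMware private cloud")

def placement_for(tenant_country: str) -> Tuple[str, str]:
    """Return (region, platform) satisfying the tenant's residency rules."""
    return RESIDENCY_MAP.get(tenant_country.upper(), DEFAULT)

if __name__ == "__main__":
    for country in ("AU", "CA"):
        region, platform = placement_for(country)
        print(f"{country}: store data in {region} on {platform}")
```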

As I think about the key takeaways and learnings from this particular example, first, Ceridian had a very clear, well-thought-out, cloud-centric and cloud-first strategy. You mentioned it earlier, Dana: that really enables them to keep their focus on the applications, because that's their bread and butter; that's how they differentiate.

By partnering, they're able to not worry about keeping the lights on and instead focus on the application. Second, they're a global organization, so they have global delivery needs based on data sovereignty regulations. And third, and probably most important, they selected a partner able to bring to bear the expertise and skillsets that are difficult for enterprises to recruit and retain. As a result, they were able to take advantage of the different infrastructure models that we're delivering for them to support their business.

Gardner: We're now going to move to the question-and-answer portion of our discussion. Kristen Allen of Navisite is moderating.

Bare metal and beyond

Kristen Allen: We have some very interesting questions. The first one ties into a conversation you were just having, "What are the ROI benefits to moving to bare metal servers for certain workloads?"

Grimes: Not all software licensing is yet virtualization-friendly, or at least virtualization-platform-agnostic, so there are really two things that play into the selection of bare metal, at least in my experience. There is a model of bare-metal computing -- small, cartridge-based computers -- that is very specific to certain workloads. But when we talk in more general terms about a typical enterprise workload, it really revolves around either software licensing incompatibility with some cloud deployment models, or a belief that performance requires bare metal, though in practice I think that's more optics than reality. Those are the two things that typically drive bare-metal adoption in my experience.

Linthicum: Ultimately, people want direct access to the underlying platforms, and if there's some performance reason, some security reason, or some need for direct access to the input-output systems, we do see these kinds of one-offs for bare metal. I call them special-needs applications. I don't see it as something that's going to be widely adopted, but from time to time it's needed, and the capabilities are there, depending on where you want to run it.

Allen: Our next question is, "Should there be different thinking for data workloads versus apps ones, and how should they be best integrated in a hybrid environment?"

Linthicum: Ultimately, the compute aspect of an application and the data aspect of that application really should be decoupled. Then, if you want to, you can assemble them on different platforms. I would typically expect to place them either all on public or all on private, but you can certainly put one on private and the other on public, or vice versa, and link them that way.

As we migrate forward, the workloads are getting even more complex. There are some application workloads I've seen, and developed, where the database is partitioned across the private cloud and the public cloud for disaster recovery (DR) or performance purposes, and things like that. So, it's really up to you as the architect to decide where to place the data in relation to the workload. Typically, it's a good idea to place them as close to each other as possible so they have the highest bandwidth to communicate. However, that's not always necessary, depending on what the application is doing.
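One simple way to picture that decoupling -- offered here only as a hypothetical sketch, not the panelists' implementation -- is an application that resolves its data endpoint from configuration at startup, so the same compute image can run on one platform while the database stays on another. The environment variable names and the default endpoint below are invented placeholders.

```python
# Sketch: resolve the data endpoint from configuration so compute and data can
# live on different platforms (e.g., compute in public cloud, data in private).
import os

def data_endpoint() -> str:
    """Return the database endpoint, independent of where compute runs."""
    host = os.environ.get("APP_DB_HOST", "db.private.example.internal")
    port = os.environ.get("APP_DB_PORT", "5432")
    return f"{host}:{port}"

if __name__ == "__main__":
    # The same image deploys to private or public cloud; only the injected
    # configuration changes, keeping compute and data loosely coupled.
    print(f"connecting to {data_endpoint()}")
```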

Gardner: David, maybe organizations need to place their data in a certain jurisdiction but might want to run their apps out of a data center somewhere else for performance and economics?

Grimes: The data sovereignty requirement is something we touched on; it's becoming increasingly important and is increasingly a driver in deciding where to place the data.

Just following on Dave's comments, I agree 100 percent. If you have the opportunity to architect a new application, there are some really interesting choices to be made around data placement and network placement, and decoupling them is absolutely the right strategy.

I think the challenge many organizations face is having a mandate to close down the enterprise data center and move to the "cloud." Of course, we know that "cloud" means a lot of different things, but do that in a legacy application environment and it will present some unique challenges in terms of actually being able to sufficiently decouple data and applications.

I'm curious, Dave, whether you've had any successes in meeting that challenge?

Linthicum: Yes. It depends on the application workload, how flexible the applications are, how the information is communicated between the systems, and also on the security requirements. So, it's one of those obnoxious consulting responses -- "it depends" -- as to whether or not we can make that work. But it is a legitimate architectural pattern that I've seen before, and we've used it.

Allen: Okay. How do you meet and adapt to Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements and still maintain stable connectivity for a small business?

Grimes: HIPAA, like many governance programs, is a very large and co-owned responsibility. From our perspective at Navisite, part of Spectrum Enterprise, we have the unique capability of delivering both the network services and the cloud services in an integrated way, which addresses the particular question around stable connectivity. But ultimately, HIPAA is a blended responsibility model: the infrastructure provider, the network provider, and whoever manages up to whatever layer of the application stack will have certain obligations, and the client retains some obligations as well.
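To illustrate what that blended model can look like -- strictly as a hypothetical example, not a compliance reference or Navisite's actual control mapping -- a responsibility matrix might assign each safeguard area to the provider, the client, or both:

```python
# Sketch: an illustrative (not authoritative) shared-responsibility mapping.
RESPONSIBILITY = {
    "physical data center security": "provider",
    "network connectivity and segmentation": "provider",
    "operating system patching (managed layer)": "provider",
    "application-level access controls": "client",
    "workforce training and policies": "client",
    "audit logging and breach response": "shared",
}

if __name__ == "__main__":
    for area, owner in RESPONSIBILITY.items():
        print(f"{area:45s} -> {owner}")
```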

Gardner: I'm afraid we'll have to leave it there. You have been an essential part of this panel discussion on how organizations can gain a future-proof path to hybrid computing that simplifies IT operations, provides cloud deployment choices, and makes total economic sense. Please join me in thanking our guests, David Grimes, Vice President of Engineering at Navisite; David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting; and Tim Crawford, CIO Strategic Advisor at AVOA.

And a big thank you as well to our audience. Please feel free to pass this link as well to others who you think would benefit from this discussion. I'm Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for joining and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Navisite.

Transcript of a panel discussion exploring how organizations can gain a future-proof path to hybrid computing that simplifies architecture and makes total economic sense. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.
