
Monday, November 30, 2020

How Transforming Source-to-Pay Procurement Delivers Agility and Improves Outcomes at Zuellig Pharma


Transcript of a discussion on how to bring agility, resilience, and managed risk to the end-to-end procurement process for significantly better overall business outcomes.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.


Our next intelligent procurement discussion explores the rationales and results when companies transform how they acquire goods and services. By injecting intelligence, automation, and standardization into the source-to-pay process, organizations are supporting even larger digital business transformation efforts.

Stay with us now as we hear from two pioneers in how to bring agility, resilience, and managed risk to the end-to-end procurement process for significantly better overall business outcomes. To show how organizations can embark on such a procurement and sourcing transformation journey, please join me now in welcoming our guests.

We’re here with Victoria Folbigg, Vice President of Procurement at Zuellig Pharma Holdings. Welcome, Victoria.

Victoria Folbigg: Thank you for having me, Dana.


Gardner: We’re also joined by Baber Farooq, Senior Vice President of Product Strategy for Procurement Solutions at SAP. Welcome, Baber.

Baber Farooq: Thank you for having me, Dana.

Gardner: Baber, what are the top procurement trends and adoption patterns you’re seeing globally? Why is now such an important time to transform how you’re doing your sourcing and procurement?

Efficiency over productivity

Farooq: When we talk about trends in procurement, the macroeconomic factors governing the world -- particularly in this COVID-induced economy -- need to be kept in mind. Not only are we in a very dynamic situation, but the current scenario is impacting the function of the profession. This changing, evolving time presents an opportunity for procurement professionals to impact their businesses company-wide.

Firstly, if you look at the world, some of these trends existed prior to COVID hitting -- but I think they have accelerated. For the past 10 years, we’ve had declining productivity growth across the world. You can slice this by industry or by geography, but in general -- despite the technological advances from cloud computing, mobile technologies, and so on -- organizations are not becoming more productive from a labor perspective.

This trend has existed for 10 to 15 years, but we really started seeing flattening over the past two to three years, particularly in the G7 countries. Now, it’s interesting, because that past 10 years or so also correlates with some of the greatest economic expansion the world has experienced. When things are going well, you can say, “Yeah, productivity may not be so important.” But now that we’re in this unfortunate recession of remarkable scale, efficiency is going to become more and more important.

The second trend is we know that the digital economy has been expanding in this new millennium. It’s been expanding rapidly, and by all indications that trend will further accelerate in this new COVID-normal that everyone is trying to come to grips with. We have seen this in terms of our daily lives being disrupted and how digital tools have helped us to remain functional. Sometimes circumstances in the world that change everything become fuel for transformation. And to a large extent, I think the expansion of the digital economy will end up continuing and accelerating and procurement will play a significant role in that.

The third trend I see is the concept The Economist has dubbed slowbalisation: despite the past 30 years of increasing globalization, even prior to COVID we saw a slowdown in globalization due to trade wars and nationalistic tendencies.

Post-COVID, I think organizations will ask the question, “Hey, this complicated global supply chain that has existed puts me at risk like I never thought of before if there’s something disruptive in the market.”

So, expect more focus on nearshore manufacturing across many industries. It’s going to become more prevalent from a goods perspective when we talk about trade. On the flip side, from a services perspective, digitization will actually allow for more cross-border services to be provided. That includes things we never thought we could do cross-border before.


It will be a very interesting shift to see how the world changes with these trends, and how that impacts procurement. It doesn’t take a lot of reflection to see where synergies exist. If organizations are going to operate manufacturing differently than they have before, if supply chains will be structured differently, and if we engage in services procurement differently -- in all of those conversations, procurement is going to play a pivotal and central role.

And how are you going to come out of this productivity slump? If the promise of artificial intelligence (AI) is going to help us come out of it, then procurement is going to play a central role in how we use suppliers as key co-innovation partners. It means a very different lens on how you manage the relationship with your supplier base than we’ve had traditionally.

So those are some of the key factors in how procurement is going to evolve over the next five to 10 years. The macroeconomic factors are the driving forces. The more procurement professionals focus on providing solutions to their organizations around these areas, the more impactful they can be. These are very different from the traditional metrics we’ve had around cost savings. Those are still important, of course, don’t get me wrong. But if I think about how the procurement profession is changing, it’s going to be around these areas.

Medicine thinks globally, supplies locally

Gardner: Victoria, at your organization, Zuellig Pharma, are you also seeing these trends? Tell us about your organization and what you’ve been doing with procurement transformation?

Folbigg: Zuellig Pharma is one of the largest healthcare services groups in Asia. And as a pharma services company we distribute medicine in Asia. We are present in at least 13 countries, or what we call markets, in Asia. We also have clinical trials distribution throughout the world.

We realized pretty early on with all of the distribution capabilities and contact with healthcare professionals across Asia that we have a lot of data about drug purchasing preferences, which we are actively monetizing. We also have a significant role to play in ensuring the right medicines go to the market, which means preventing counterfeits and parallel trades.

Zuellig Pharma not only enables improved drug distribution, we also do vaccine distribution. In some of the bigger countries, for example, we take flu vaccines and distribute them to the various state hospitals and schools. It’s now very exciting for us to possibly be at the forefront of COVID-19 vaccine distribution. We are very busy figuring out how to make that possible across Asia.

Building on Baber’s points on globalization, which I found very relevant, there is a clear trend in the supply of goods to move away from globalization. We have seen that even with the supply of personal protective equipment (PPE) from China, due to people being concerned about buying from China as well as the many customs issues for going in and out. People are naturally now looking for supply sources closer to their countries. We are seeing that as well.

Baber also spoke about the globalization of services. This is fabulous and very exciting, and we are seeing that. For example, when I now negotiate contracts with consulting companies, I begin by telling them there is no need for travel. And so why don’t you put your best team on my project? I don’t even need your team to be in Asia.

And that makes them draw a breath and step back and say, “Oh my God. You know, we had these natural differences between regions and different companies in the services industry.” That is breaking down because customers are expecting the best people on the job, anywhere. I completely see that in my daily work now.

Gardner: Baber also mentioned the impact of AI, with more data-driven decision-making. While we’re grappling with rapid changes in sourcing and the requirements of rapid vaccination distribution and logistics, for example, how are the latest technologies helping you transform your procurement?

AI for 100 percent reliability

Folbigg: It’s an interesting and complex subject. When I talk to my peers on the manufacturing side -- and again, we’re not a manufacturing company -- they oversee a lot of direct spend. I see them embracing data-driven, AI-driven procedures and analysis.

With services industries, and with indirect procurement, it is much more difficult, I believe. And hence we are not so much at the forefront of AI thinking. Also, because we’re in Asia, I’m wondering whether there are enough databases and rules to be able to draw the right conclusions.


For example, if I want to find a different source among suppliers very far away, I would rely normally on a database that would go through millions of sources for a supplier. If the supplier, though, were a local company, I might not find any relevant databases. So the challenges we have in Asia are about getting data that can be analyzed and then draw insights from it.

Other more established technologies like robotic process automation (RPA) and chatbots are filling holes because people need support. As Baber said, the labor force is getting more expensive even in Asia. So having a robot do a menial task can be much better and more efficient than hiring somebody to do it.

Gardner: Baber, how common are these challenges around data and analytics that Zuellig Pharma is grappling with?

Farooq: What Victoria said is so accurate with respect to the challenges that customers are facing in using these technologies. The challenge we have as a technology provider is to make sure that we provide access to these technologies in the most beneficial fashion. AI is a very broad topic that means a lot of different things these days.

The ultimate goal of AI is to provide insights and eliminate tasks while effectively focusing on actual business outcomes, and not having so much repetition. When Victoria says we can use a lot of this in the direct materials space, it’s because a lot of those are predictable, repetitive tasks.

In the services space, and for indirect materials purchasing, it’s more difficult because it’s not as predictable and not as rule-oriented as other areas. That gets to the true heart of the problem with AI in any space, right? The last mile of AI is very hard. You can make it 90 percent effective, but who is going to trust their business with a robotic or computational process that’s 90 percent effective? Making it 100 percent effective is the real challenge.

This is why we don’t have self-driving cars right now, right? They work great in laboratories. They work great on test tracks. They are driven around deserts. So much advancement and capabilities have happened, but that last mile is still yet to be achieved. And the amount of data needed to make that last mile work is an order of magnitude greater than it is for the first 90 percent of achieving the outcome.

The onus and the burden, frankly, is on companies like SAP to make sure that we can solve this problem for customers like Zuellig so that they can truly trust their business for insights and for the outcome-driven work that they would want the machines to do before they go ahead and say, “Okay, we’re happy with AI.” There were predictions six to seven years ago that dermatologists would not be diagnosing skin cancer anymore because an app would be doing it by taking a photo. That’s not true. It hasn’t happened yet, right? But the potential is still there.


For us, the focus is on the outcomes that professionals are looking for. Let’s see if we can use the data from across the world to drive these outcomes in a sustainable and predictable fashion. This work is research-oriented. It requires focus from companies such as SAP to say that this is where we’re going to take the initiative and actually drive toward this outcome.

The reason we feel that SAP is one of the companies that can do this is that we actually have so much data. I mean, if you look at the SAP Business Network and the fact that, just on spend and sourcing events, we’re carrying $20 to $25 trillion worth of procurement over the past 10 years, we believe we have the data that can start making an impact.

We have to prove it, undoubtedly, especially when it comes to niche economies and emerging markets, like Victoria said. But we have a very strong starting point. And, of course, at the same time, we have to be considerate about privacy concerns and General Data Protection Regulation (GDPR) in all of these things. If you’re going to be mining data and then cross-applying the impacts across customer communities, you have to do it in a responsible manner.


So those are the things that we are grappling with. I clearly see there’s a trend here and you will see AI impacting procurement processes before you see AI driving cars on roads. There’s still a lot of work to be done, and there’s still a lot of data that needs to be mined in order to make sure that we’re building something that’s intelligence- and not just rule-based. You can use RPA, for sure, but that’s still rule-based. That’s not true intelligence, and no business is going to actually go ahead and say, “Hey, we’re happy with the insights that the machine is telling us or we’re happy with the machine doing the work of a human if it’s 90 to 95 percent accurate.” It really, really needs to be 99.9 percent accurate for that to happen.

Gardner: And whether we are doing this with AI or traditional data-driven analytics, what we need to deliver more of now are better agility, resilience, and managed risks.

Victoria, tell us about your journey at Zuellig Pharma and why you’re working toward those fundamental goals. How have you gone about revamping your source-to-pay procurement to attain that agility and resilience, and to manage risks?

Strategic procurement journey

Folbigg: Our real strategic sourcing journey started in 2016. And I like to call the company a 100-year-old startup because Zuellig Pharma is truly 100 years old. The company was, it’s fair to say, very decentralized in the early stages. It then moved to become more central-to-edge, because in Asia’s emerging economies general management needs to act much faster when there is a risk or an opportunity. Those principles still apply.

But the chief executive saw the need for more strategic procurement, with transparency, visibility, and control of spend accountability. He sponsored the design and setup of a lean procurement function within the Asia region. The first thing we decided to do was put a system in place to better anchor a new, all-encompassing, yet small procurement team. I have been getting this visibility, control, and data through our all-encompassing procurement system, SAP Ariba.

SAP Ariba has also been different because of its ecosystem and because it’s backed by SAP, with a support network and already-proven technology across Asia. Because of Asian tax rules and the variety of Asian languages, we found when we looked at the market back in 2015-2016 that you needed a system that would grow with you. We needed something anchored very strongly within Asia. From that, we gained control and visibility in stage one of our journey.

The next stage focused on process improvement. Our old key performance indicator (KPI) was how long it takes to pay an invoice. You need to make that easier and more user-friendly, but also have controls in place to ensure there is no fund leakage. So, control and visibility are numbers one and two, and process improvement is number three. Next, we will be seeking agility, and then insights.

But COVID-19 has shown the need for traditional procurement, too. For example, when it came time that we needed a PPE supplier -- everyone needed one. And it wasn’t a system that helped us, unfortunately. It was more about people knowing people and finding out where there was capacity. That was not done via data-driven insights.

We had to go off system as well because sometimes we didn’t have time to get the supplies through the system. We also didn’t have time to pay the suppliers through the system because it was a supplier’s market: “You can have the shipment of your general masks. You take it or you leave it.”


So very often we had to make this decision within an hour. And in some cases, I would come back to the supplier and say, “I’m ready to buy,” and they’re saying, “Sorry, somebody else offered me twice the price.” This was the reality of procurement last spring. It certainly brought us to the forefront because we needed to report to the CEO what we were doing to protect our business. We’re delivering the medicines to the hospitals. We probably needed this PPE for the drivers even more than the hospitals, and we needed to negotiate to buy that.

This is where the traditional kind of robust procurement systems were breaking down for us, exacerbated by the fact that we do not yet have the right amount of data on Asia, translated into English, to make these decisions as we would like to. That newer, data-driven method may be strong and prevalent in the US and in Europe, of course.

So that tested us quite a lot and it’s shown that we still needed to be rather creative in how we found the best sources. There are building blocks to what the systems allow you to do. And now we’re saying, “Okay, well, how can you give us insights? How can you give us this agility?” I think the systems need to evolve to be topical and to be able to address all of these use cases that came to the fore due to the COVID-19 pandemic.

Gardner: Listening to you reminds me of what Baber said about self-driving cars. You had to revert back to manual during the pandemic.

Folbigg: Bicycles even.

Pandemic forces procurement pivot

Farooq: It’s such a great point. One thing I’ve learned is that the technology and business processes we have constructed over the past 15 to 20 years kind of broke down. When you look at a pandemic of this magnitude -- it’s the greatest disruption in the world since World War II. The IMF just estimated how big. When the global financial crisis happened in 2008, the overall global GDP impact was a reduction of 0.1 to 0.2 percent, because the emerging economies were not as affected. This year we’re seeing a 5 percent GDP impact globally. It’s very, very significant.

The scale of the disruption is huge, and you are having these low-probability, high-impact events. Because they don’t happen for a long time, people presume they won’t happen, and they don’t plan for them.

What I’ve learned is, with technology and business processes, you need to keep in mind that one aspect that might have only a 2 to 3 percent chance of happening. You can’t Pareto-analyze it out of the way and not consider it. It’s one thing to make sure, of course, that you’re not spending time focused on a problem that has a low chance of happening. But at the same time, you have to keep in mind that if one of these events does happen, the result could be a complete breakdown. You can’t ignore it, right? You need to make sure you have that factored into your technology.

So, emergency payment processes and emergency purchase order (PO) processes -- these capabilities need to be built in. You can’t just presume that a perfect setup will be available for all circumstances and design only for that, particularly when you talk about industries like life sciences.

Gardner: That’s the very character of agility and resiliency -- being able to have exception management for exceptions that you can’t anticipate. And certainly, we have seen that in the last seven months.

Now that we see how important procurement is for a larger organization during a very tumultuous time -- recognizing that we need to have the agility of working with the manual as well as the automatic -- what does the future portend? What will our systems need to now become in order to provide the new definition of agility and resiliency?

Agile systems preempt problems

Folbigg: We need agile systems, and we need to be able to solve specific use cases in order for these systems to become important, viable, and present within our procurement landscape and our many ways of doing business.

It’s not good enough for us when everything reverts back to the system. When there is an issue like a pandemic -- or something that is not necessarily rule-based -- we then need to go off-system, and that marginalizes the importance of the system. I honestly don’t know how you enable a search for suppliers that is largely relationship-based. But there are elements that come from the availability of data, data that is presented in a form that’s easily consumed, especially if the data has to be translated and normalized. That is definitely something the system suppliers can play a role in.

When I look at the system now as the head of procurement, I am not looking at features and functions. I am looking at the problems I need to solve through a system to drive the resiliency the company needs. And if I look at the challenge we have of enabling the potential vaccine distribution across the world, what we are trying to do is avoid being stuck in the situation we had at the beginning of the year.

We are proactively looking at certain key suppliers to partner with to develop the system and to design the supply chain, and this is not transactional. This is a highly strategic activity based on human creativity, human network relationships, and trust between the leadership of different companies. It is a completely different design approach.


Now we are all thinking about preempting. How is the technology going to help me with what I am looking forward to? I need to be able to have the basic explanation at my fingertips fast in order for me and my team to concentrate on the real strategic creative kinds of analysis.

Also, we need systems that can give us a lot of modeling and analysis. If you think about my problem now, I can buy freezers and cold storage for vaccines. But what am I going to do with them in five years’ time? You have supplies for the vaccine distribution. And then what?

I think the vaccine will become part-and-parcel of our cold chain and supply chain going forward because COVID-19 is not going to go away. The vaccines potentially are only going to last for a year or two, and you will have to be re-vaccinated. But, despite all of these high-cost, complex, energy-thirsty capital purchases, how do you plan for that? Right now everything is done on the spur of the moment. A system that can holistically bring this all together for me would be a huge benefit.

Gardner: That point about being holistic, Baber, must be very important to you at SAP because you’ve been building out so many different systems, business capabilities, and data capabilities. It sounds like SAP might actually be in a very good position to come to the rescue of somebody like Victoria, given that she has these pressing needs and wants to instantiate relationships into digital interactions. How can SAP help?

Supply chain for vaccine delivery

Farooq: It’s a privileged position because it’s a complicated problem. But it’s a problem on which I believe SAP is one of the few companies that can support Zuellig. From our perspective, we want to get companies like Zuellig into a position where they can focus on those strategic and creative elements that only humans can do. This is probably one of the most complicated supply chain problems in recent history; the COVID vaccine distribution problem can only be solved through extensive creativity.

When SAP talks about the intelligent enterprise, that means something very simple: giving an organization all of the insights and analytics capabilities at its fingertips so that it has the ability to quickly make decisions and pivot when it needs to pivot. That truly became evident during this pandemic. And from our perspective, we have that ability.

If you look at all of the different processes that exist across manufacturing, distribution, sourcing, purchasing, procurement, payment -- all of these processes reside and are impacted by some element of SAP’s footprint. And our perspective is to make sure that all these elements can talk to each other. And by talking to each other, they can actively provide all of the data that’s required by organizations like Zuellig so that they can quickly make the decisions they need and focus on the strategic elements they need to focus on.

We don’t want people at Zuellig to be worried about how the POs are going to get raised and what steps are required for sourcing to take place. That is very much the direction we want to take our products -- and are taking our products -- so that we can offer these solutions to companies like Zuellig.

The example that Victoria gave is just so close to my heart. It goes back to the decline in productivity growth the world has experienced over the past 10 years: if we can make procurement more productive as a function, then procurement organizations can make the entire organization more productive. They can actually focus on supplier relationships and co-innovation partnerships with critical suppliers. That has an impact on the entire business.

And no one is better suited to doing that than procurement. We just have to get them out of the day-to-day processes of running reports, figuring out what the data says, and focusing on the transactional events, purchase orders, and payments that take place. We need to get them out of those processes so they can leverage their skills in finding the right suppliers and developing the right relationships that make innovation impactful -- and have an impact on the top line of organizations, along with the bottom line.

And it is very clearly the direction that we are trying to take as rapidly as possible because we know that the next 12 months are critical in this space.

Gardner: Victoria, what advice could you give to others who are trying to transform their procurement organizations to take advantage of the agility and resilience that are now required? What advice can you offer for folks who might not be quite as far along as you are in your transformation journey?

Educate around procurement

Folbigg: It’s complex because it depends very much on the specific company and how anchored procurement is. But it’s about making sure you find sponsors of the function who really understand the benefits of procurement. Give your team and yourself a job to show the benefit that strategic procurement can bring.

In this part of the world, we are just now seeing procurement on the university curriculum. Where I worked before, in Europe and the US, it was an established skillset that we would learn in university, with courses in MBA and similar programs. It’s just starting to anchor in universities in Asia. Go to your leadership, put procurement on the table, and give a very factual and viable rationale of why the systems investment is very, very important.

As you anchor your procurement with the system, it will put a lot of pressure on you to deliver the benefits that the system’s business case promises. It also gives you an opportunity to reach for wider buy-in of the system from your purchasers. Training people on what procurement can provide then becomes part of their evaluation. So, certainly, these go hand-in-hand.

Gardner: Baber, anything more to offer?

Farooq: Victoria said something just a few moments ago. She said, “I really don’t care about the feature functionality. I only care about the outcomes.” That should be your North Star. It’s natural when you get into the deployment that you care about all the different little things, but one of the things that organizations often struggle with once the deployment begins is they stay in those sub-processes and functional elements.


And a lot of the things that were the guiding reasons behind their transformation to begin with, those got lost, right? I say keep that front and center. That is the basis by which not only you will get internal buy-in, CEO buy-in, and CFO buy-in -- but it’s also something that you should constantly be reminding people of as well.

Of course, you have to deliver those outcomes, and that’s where companies like SAP need to be held accountable and be a partner to make sure those outcomes are delivered. But those business outcomes are everything we want to focus on from a technology perspective -- and everything the procurement organization should focus on from a business perspective.

And COVID-19 will force a recalibration on what those business outcomes should be. The traditional measures of the efficacy of procurement will change -- and should change -- because procurement can make a bigger, deeper impact for organizations.

Supply chain resilience is going to become a much more important factor, and so are the co-innovative partnerships you deliver for the business. Procurement should embrace these and show the impact. They are not measurements that were traditionally monitored, but they’re going to increase in importance as we encounter the challenges of the next couple of years. This is something procurement organizations should embrace because it will elevate their standing in organizations.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on the rationales and results when companies look to intelligent automation and standardization for how they acquire goods and services.

And we’ve learned how organizations are finding -- even during the pandemic -- new lessons and efficiencies in how their source-to-pay processes and purchasing work best.

So please join me in thanking our guests, Victoria Folbigg, Vice President of Procurement at Zuellig Pharma Holdings. Thank you so much, Victoria.

Folbigg: Thank you for having me.

Gardner: And also a big thank you to Baber Farooq, Senior Vice President of Product Strategy for Procurement Solutions at SAP. Thank you, sir.

Farooq: Thank you, Dana, for having me.


Gardner: And a big thank you as well to our audience for joining this BriefingsDirect modern digital business innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of SAP-sponsored BriefingsDirect discussions.

Thanks again for listening. Please do come back next time, and feel free to share this information across your IT and business communities.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how to bring agility, resilience, and managed risk to the end-to-end procurement process for significantly better overall business outcomes. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Friday, November 20, 2020

How the Journey to Modern Data Management is Paved with an Inclusive Edge-to-Cloud Data Fabric


Transcript of a discussion on the best ways widely inclusive data can be managed for today’s data-rich but too often insights-poor organizations.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

 

Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into end-to-end data management strategies.


As businesses seek to gain insights for more elements of their physical edge -- from factory sensors, myriad machinery, and across field operations -- data remains fragmented. But a Data Fabric approach allows information and analytics to reside locally at the edge yet contribute to the global improvement in optimizing large-scale operations.

Stay with us now as we explore how edge-to-core-to-cloud dispersed data can be harmonized with a common fabric to make it accessible for use by more apps and across more analytics.

To learn more about the ways all data can be managed for today’s data-rich but too often insights-poor organizations, we’re joined by Chad Smykay, Field Chief Technology Officer for Data Fabric at Hewlett Packard Enterprise (HPE). Welcome, Chad.

 


Chad Smykay: Thank you.

 

Gardner: Chad, why are companies still flooded with data? It seems like they have the data, but they’re still thirsty for actionable insights. If you have the data, why shouldn’t you also have the insights readily available?

 

Smykay: There are a couple of reasons for that. We still see challenges for our customers today. One is just having a common data governance methodology. That’s not just governing the security, the audits, and the techniques around those -- it’s determining just what your data is.

 

I’ve gone into so many projects where they don’t even know where their data lives; there isn’t even a simple matrix of what the data is, where it lives, and how it’s important to the business. This is really the first step that most companies just don’t take.

 

Gardner: What’s happening with managing data access when they do decide they want to find it? What’s been happening with managing the explosive growth of unstructured data from all corners of the enterprise?

 

Tame your data

 

Smykay: Five years ago, it was still the Wild West of data access. But we’re finally seeing some great standards being deployed and application programming interfaces (APIs) for that data access. Companies are now realizing there’s power in having one API to rule them all. In this case, we see mostly Amazon S3.

 

There are some other great APIs for data access out there, but just having more standardized API access into multiple datatypes has been great for our customers. It allows for APIs to gain access across many different use cases. For example, business intelligence (BI) tools can come in via an API. Or an application developer can access the same API. So that approach really cuts down on my access methodologies, my security domains, and just how I manage that data for API access.
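To make that single-API idea concrete, here is a minimal sketch using the open source boto3 client against an S3-compatible endpoint. The endpoint URL, bucket name, and credentials are hypothetical placeholders; any S3-compatible store could stand behind them.

```python
# Minimal sketch: one S3-style call path that a BI tool, an application
# developer, or a batch job could all reuse. The endpoint, bucket, and
# credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9000",  # any S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# The same client code works unchanged whether the store sits in the cloud,
# in the core data center, or at the edge.
for obj in s3.list_objects_v2(Bucket="sensor-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```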

 

Gardner: And when we look to get buy-in from the very top levels of businesses, why are leaders now rethinking data management and exploitation of analytics? What are the business drivers that are helping technologists get the resources they need to improve data access and management?

 

Smykay: The business drivers gain when data access methods are as reusable as possible across the different use cases. It used to be that you’d have different point solutions, or different open source tools, needed to solve a business use case. That was fine for the short term, maybe for some quarterly project or for the year you did it in.


 

But then, down the road, say three years out, they would say, “My gosh, we have 10 different tools across the many different use cases we’re using.” It makes it really hard to standardize for the next set of use cases.

 

So that’s been a big business driver, gaining a common, secure access layer that can access different types of data. That’s been the biggest driver for our HPE Data Fabric. That and having common API access definitely reduces the management layer cost, as well as the security cost.

 

Gardner: It seems to me that such data access commonality, when you attain it, becomes a gift that keeps giving. The many different types of data often need to go from the edge to dispersed data centers and sometimes dispersed in the cloud. Doesn’t data access commonality also help solve issues about managing access across disparate architectures and deployment models?

 

Smykay: You just hit the nail on the head. Having commonality at that API layer really gives you the ability to deploy anywhere. When I have the same API set, it makes it very easy to go from one cloud provider, or one solution, to another. But that can also create issues in terms of where my data lives. You still have data gravity issues, for example. And if you don’t have portability of the APIs and the data, you start to see lock-in with either the point solution you went with or the cloud provider that’s providing that data access for you.

 

Gardner: Following through on the gift that keeps giving idea, what is it about the Data Fabric approach that also makes analytics easier? Does it help attain a common method for applying analytics?

 

Data Fabric deployment options

 

Smykay: There are a couple of things there. One, it allows you to keep the data where it may need to stay. That could be for regulatory reasons or just depend on where you build and deploy the analytics models. A Data Fabric helps you to start separating out your computing and storage capabilities, but also keeps them coupled for wherever the deployment location is.

 


For example, a lot of our customers today have the flexibility to deploy IT resources out at the edge. That could be a small cluster or system that pre-processes data. They may then slowly trickle the data back to one location, a core data center or a cloud location. Having these systems at the edge gives them the benefit of both pushing information out and continuing to process at the edge. They can choose to deploy as they want, and make the data analytics solutions deployed at the core even better for reporting or modeling.

 

Gardner: It gets to the idea of act locally and learn globally. How is that important, and why are organizations interested in doing that?

 

Smykay: It’s just-in-time, right? We want everything to be faster, and that’s what this Data Fabric approach gets for you.

 

In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing all the data back to a central, core location -- and then doing something with that data. But we don’t have the time to do that anymore.

 

Unless you can change the laws of physics -- last time I checked, they haven’t done that yet -- we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management.
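As an illustration of that act-locally, learn-globally pattern -- a sketch, not any specific HPE API -- an edge node might reduce raw samples to a compact summary and forward only the summary to the core. The reading values and core endpoint below are invented for the example.

```python
# Illustrative only: pre-process readings at the edge and forward a compact
# summary to the core, rather than streaming every raw sample back.
import json
import statistics
import urllib.request

def summarize(readings):
    """Reduce raw samples to the few numbers the core actually needs."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

raw = [21.3, 21.5, 22.1, 35.8, 21.9]  # raw samples stay at the edge
summary = summarize(raw)

req = urllib.request.Request(
    "https://core.example.internal/ingest",  # hypothetical core endpoint
    data=json.dumps(summary).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # ship only the summary upstream
print(summary)
```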

 

Gardner: Let’s look at some of the hurdles organizations have to overcome to make use of such a Data Fabric. What is it about the way that data and information exist today that makes it hard to get the most out of it? Why is it hard to put advanced data access and management in place quickly and easily?

 

Track the data journey

 

Smykay: It’s tough for most organizations because they can’t take the wings off the airplane while flying. We get that. You have to begin by creating some new standards within your organization, whether that’s standardizing on an API set for a single datatype or for multiple datatypes.

 

Then you need to standardize the deployment mechanisms within your organization for that data. With the HPE Data Fabric, we give the ability to just say, “Hey, it doesn’t matter where you deploy. We just need some x86 servers and we can help you standardize either on one API or multiple APIs.”

 

We now support more than 10 APIs, as well as the many different data types that these organizations may have.


 

Typically, we see a lot of data silos still out there today with customers -- and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).

 

Next you have to consider all the networking in the middle. And let’s not even bring up security and authorization to all of them. So we find that the silos still exist, but they’ve just gotten worse and they’ve just sprawled out more. I call it the silo sprawl.

 

Gardner: Wow. So, if we have that silo sprawl now, and that complexity is becoming a hurdle, the estimates are that we’re going to just keep getting more and more data from more and more devices. So, if you don’t get a handle on this now, you’re never going to be able to scale, right?

 

Smykay: Yes, absolutely. If you’re going to have diversity of your data, the right way to manage it is to make it use-case-driven. Don’t boil the ocean. That’s where we’ve seen all of our successes. Focus on a couple of different use cases to start, especially if you’re getting into newer predictive model management and using machine learning (ML) techniques.

But, you also have to look a little further out to say, “Okay, what’s next?” Right? “What’s coming?” When you go down that data engineering and data science journey, you must understand that, “Oh, I’m going to complete use case A, that’s going to lead to use case B, which means I’m going to have to go grab from other data sources to either enrich the model or create a whole other project or application for the business.”

You should create a data journey and understand where you’re going so you don’t just end up with silo sprawl.

Gardner: Another challenge for organizations is their legacy installations. When we talk about zettabytes of data coming, what is it about the legacy solutions -- and even the cloud storage legacy -- that organizations need to rethink to be able to scale?

Zettabytes of data coming

Smykay: It’s a very important point. Can we just have a moment of silence? Because now we’re talking about zettabytes of data. Okay, I’m in.

Some 20 years ago, we were talking about petabytes of data. We thought that was a lot of data, but if you look out to the future, studies show connected Internet of Things (IoT) devices generating zettabytes of data.


If you don’t get a handle on where your data points are going to be generated, how they’re going to be stored, and how they’re going to be accessed now, this problem is just going to get worse and worse for organizations.

Look, Data Fabric is a great solution. We have it, and it can solve a ton of these problems. But as a consultant, if you don’t get ahead of these issues right now, you’re going to be under the umbrella of probably 20 different cloud solutions for the next 10 years. So, really, we need to look at the datatypes that you’re going to have to support, the access methodologies, and where those need to be located and supported for your organization.

Gardner: Chad, it wasn’t that long ago that we were talking about how to manage big data, and Hadoop was a big part of that. NoSQL and other open source databases in particular became popular. What is it about the legacy of the big data approach that also needs to be rethought?

Smykay: One common issue we often see is the tendency to go either/or. By that I mean saying, “Okay, we can do real-time analytics, but that’s a separate data deployment. Or we can do batch, rearview reporting analytics, and that’s a separate data deployment.” But one thing that our HPE Data Fabric has always been able to support is both -- at the same time -- and that’s still true.

So if you’re going down a big data or data lake journey -- I think the term now is a data lakehouse, that’s a new one -- basically you need to be able to do real-time analytics as well as traditional BI reporting, or rearview-mirror reporting. And that’s what we’ve been doing for over 10 years. That either/or split is probably one of the biggest limitations we have seen.

It’s also a heavy lift to get that data from one location to another, just because of the metadata layer of Hadoop. And then you had dependencies on some of these NoSQL databases out there on Hadoop, which caused performance issues. You can only get so much performance out of those databases, which is why we provide NoSQL databases out of the box in our Data Fabric -- and we’ve never run into any of those issues.

Gardner: Of course, we can’t talk about end-to-end data without thinking about end-to-end security. So, how do we think about the HPE Data Fabric approach helping when it comes to security from the edge to the core?

Secure data from edge to core

 

Smykay: This is near-and-dear to my heart because everyone always talks about these great solutions out there to do edge computing. But I always ask, “Well, how do you secure it? How do you authorize it? How does my application authorization happen all the way back from the edge application to the data store in the core or in the cloud somewhere?”

That’s what I call authorization sprawl, where those issues just add up. If we don’t have one way to secure and manage all of our different data types, then what happens is, “Okay, well, I have this object-based system out there, and it has its own authorization techniques. It has its own authentication techniques. By the way, it has its own way of enforcing security in terms of who has access to what.” And I haven’t even talked about monitoring yet, right? How do we monitor this solution?

So, now imagine doing that for each type of data you have in your organization -- whether it’s a SQL database, because an application requires it, or a file-based workload, or a block-based workload. You can see where this starts to steamroll and build up into a huge problem within an organization, and we see that all the time.


 

And, by the way, when it comes to your application developers, that becomes the biggest annoyance for them. Why? Because when they want to go and create an application, they have to go and say, “Okay, wait. How do I access this data? Oh, it’s different. Okay. I’ll use a different key.” And then, “Oh, that’s a different authorization system. It’s a completely different way to authenticate with my app.”

I honestly think that’s why we’re seeing a ton of issues today in the security space. It’s why we’re seeing people get hacked. It happens all the way down to the application layer, as you often have this security sprawl that makes it very hard to manage all of these different systems.

Gardner: We’ve come up in this word sprawl several times now. We’re sprawling with this, we’re sprawling with that; there’s complexity and then there’s going to be even more scale demanded.


The bad news is there is quite a bit to consider when you want end-to-end data management that takes the edge into consideration and has all these other anti-sprawl requirements. The good news is a platform and standards approach with a Data Fabric forms the best, single way to satisfy these many requirements.

So let’s talk about the solutions. How does HPE Ezmeral generally -- and the Ezmeral Data Fabric specifically -- provide a common means to solve many of these thorny problems?

Smykay: We were just talking about security. We provide the same security domain across all deployments. That means having one web-based user interface (UI), or one REST API call, to manage all of those different datatypes.

We can be deployed across any x86 system. And having that multi-API access -- we have more than 10 -- allows for multi-data access. It includes everything from storing data in files to storing data in blocks, which we will soon be able to support in our solution, to storing data in event streams such as Kafka, and in a NoSQL database as well.
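As one hypothetical illustration of the streams access path, a standard Kafka producer client can publish events to a Kafka-compatible endpoint. The broker address and topic below are invented for the example; this uses the open source kafka-python client, not any product-specific SDK.

```python
# Sketch of the "streams" access path: any Kafka-compatible producer can
# publish events. Broker address and topic are hypothetical placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="fabric-gw.example.internal:9092",  # Kafka-compatible endpoint
    value_serializer=lambda v: json.dumps(v).encode(),
)

# The same producer code works regardless of which Kafka-compatible store
# ultimately receives the stream.
producer.send("antenna-metrics", {"tower": "TX-014", "signal_db": -71.2})
producer.flush()
```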

Gardner: It’s important for people to understand that HPE Ezmeral is a larger family and that the Data Fabric is a subset. But the whole seems to be greater than the sum of the parts. Why is that the case? How has what HPE is doing in architecting Ezmeral been a lot more than data management?

Smykay: Whenever you have this “whole is greater than the sum of the parts,” you start reducing so many things across the chain. When we talk about deploying a solution, that includes, “How do I manage it? How do I update it? How do I monitor it?” And then back to securing it.

Honestly, there is a great report from IDC that says it best. We show a 567-percent, five-year return on investment (ROI). That’s not from us, that’s IDC talking to our customers. I don’t know of a better business value from a solution than that. The report speaks for itself, but it comes down to these paper cuts of managing a solution. When you start to have multiple paper cuts, across multiple arms, it starts to add up in an organization.

Gardner: Chad, what is it about the HPE Ezmeral portfolio and the way the Data Fabric fits in that provides a catalyst to more improvement?

 

All data put to future use

 

Smykay: One, the HPE Data Fabric can be deployed anywhere. It can be deployed independently. We have hundreds and hundreds of customers. We have to continue supporting them on their journey of compute and storage, but today we are already shipping a solution where we can containerize the Data Fabric as a part of our HPE Ezmeral Container Platform and also provide persistent storage for your containers.

 

The HPE Ezmeral Container Platform comes with the Data Fabric as part of its persistent storage. That gives you full end-to-end management of the containers, not only of the application APIs. That means the management and the data portability.

 

So, now imagine being able to ship the data by containers from your location, as it makes sense for your use case. That’s the powerful message. We have already been down the road of separating out compute and storage; that road is not going away, we have many customers for it, and it makes sense for many use cases. And we’re in general availability today. There are some other solutions out there that are still on a roadmap as far as we know, but at HPE we’re there today. Customers have this deployed. They’re going down their compute and storage separation journey with us.

 

Gardner: One of the things that gets me excited about the potential for Ezmeral is when you do this right, it puts you in a position to be able to do advanced analytics in ways that hadn’t been done before. Where do you see the HPE Ezmeral Data Fabric helping when it comes to broader use of analytics across global operations?

 

Smykay: One of our CMOs, Jack Morris, used to say it best: “If it’s going to be about the data, it better be all about the data.”

 


When you improve automating data management across multiple deployments -- managing it, monitoring it, keeping it secure -- you can then focus on those actual use cases. You can focus on the data itself, right? That’s living in the HPE Data Fabric. That is the higher-level takeaway. Our users are not spending all their time and money worrying about the data lifecycle. Instead, they can now go use that data for their organizations and for future use cases.

 

HPE Ezmeral sets your organization up to use your data instead of worrying about it. We are set up to use the Data Fabric for newer use cases, separating out compute and storage, and running it in containers -- and we’ve been doing that for years. The high-level takeaway is that you can focus on using your data, not on worrying about your data.

 

Gardner: How about some of the technical ways that you’re doing this? Things like global namespaces, analytics-ready fabrics, and native multi-temperature management. Why are they important specifically for getting to where we can capitalize on those new use cases?

 

Smykay: Global namespace is probably the top feature we hear back about from our customers. It allows them to gain one view of the data with the same common security model. Imagine you’re a lawyer sitting at your computer and you double-click on a Data Fabric drive: you can literally see all of your deployments globally. That helps with discovery. That helps with onboarding your data engineers and data scientists. Over the years, one of the biggest challenges has been that organizations spend a lot of time building up their data science and data engineering groups, and much of it goes to just discovering the data.
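A sketch of what that buys you in practice, assuming the fabric is mounted as a single directory tree: discovery becomes an ordinary directory walk. The mount point and layout below are hypothetical.

```python
# Illustration of the global-namespace idea: one mounted tree spans every
# deployment, so discovery is a plain directory listing.
from pathlib import Path

fabric_root = Path("/mnt/datafabric")  # hypothetical global mount point

# Each site or deployment appears as a subtree under the same root.
for site in sorted(p for p in fabric_root.iterdir() if p.is_dir()):
    datasets = [d.name for d in site.iterdir() if d.is_dir()]
    print(f"{site.name}: {datasets}")
```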

 

Global namespace means I’m reducing my discovery time to figure out where the data is. As for the analytics-ready value, we’ve been supporting that in the open source community for more than 10 years. There’s a ton of Apache open source projects out there, like Presto, Hive, and Drill. Of course, we’re also Spark-ready, and have been supporting Spark for many years. That’s pretty much the de facto standard we’re seeing when it comes to doing any kind of real-time processing or analytics on data.

 

As for multi-temperature, that feature allows you to decrease the cost of your deployment while still managing all your data in one location. There are a lot of different ways we do that. We use erasure coding, and we can tier off to Amazon S3-compliant devices to reduce the overall cost of deployment.
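The policy idea behind multi-temperature management can be sketched in a few lines. This is an illustration of age-based tiering only, not the product’s actual API; the tier names and thresholds are invented.

```python
# Sketch of multi-temperature tiering: hot data stays on fast media,
# colder data ages onto cheaper, erasure-coded or S3-compatible tiers.
import time

TIERS = [
    (7 * 86400, "hot: fast media, replicated"),
    (90 * 86400, "warm: erasure-coded capacity tier"),
    (float("inf"), "cold: S3-compatible object tier"),
]

def pick_tier(last_access_epoch, now=None):
    """Choose a storage tier from the data's age since last access."""
    age = (now or time.time()) - last_access_epoch
    for max_age, tier in TIERS:
        if age <= max_age:
            return tier

print(pick_tier(time.time() - 3 * 86400))    # recent data stays hot
print(pick_tier(time.time() - 400 * 86400))  # old data ages to cold
```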

 

These features contribute to making it still easier. You gain a common Data Fabric, common security layer, and common API layer.

 

Gardner: Chad, we talked about much more data at the edge, how that’s created a number of requirements, and the benefits of a comprehensive approach to data management. We talked about the HPE Data Fabric solution, what it brings, and how it works. But we’ve been talking in the abstract.

 

What about on the ground? Do you have any examples of organizations that have taken the plunge and made the Data Fabric core to their operations? As adopters, what do they get? What are the business outcomes?

 

Central view benefits businesses

 

Smykay: We’ve been talking a lot about edge-to-core-to-cloud, and the one example that’s just top-of-mind is a big, tier-1 telecoms provider. This provider makes the equipment for your AT&Ts and your Vodafones. That equipment sits out on the cell towers. And they have many Data Fabric use cases, more than 30 with us.

 

But the one I love most is real-time antenna tuning. They're able to improve customer satisfaction in real time and reduce the need to physically return to hotspots on an antenna. They do it via real-time data collection on the antennas, aggregated across all of the different layers they have in their deployments.


 

They gain a central view of all of the data using a modern API for their DevOps needs. They still centrally process data, but they also process it at the edge today. We replicate all of that data for them, we manage it, and we take a lot of the traditional data management tasks off the table, so they can focus on the use case: the best way to tune antennas.

 

Gardner: They have the local benefit of tuning the antenna. But what's the global payback? Are there quantitative or qualitative business returns for them in doing that?

 

Smykay: Yes, but they're pretty secretive. We've heard that they've gotten a payback in the millions of dollars, but an immediate, direct payback for them is in reducing application development spend across every layer. That reduction is because they can use the same type of API to publish the data as a stream, and then use the same API semantics to secure and manage it all. They can then take that same application, which is deployed in a container today, and easily deploy it to any remote location around the world.
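
As a sketch of what "publish once with the same API semantics" can look like in practice, here is a small Python producer using a Kafka-compatible client; the stream path, topic name, and broker address are hypothetical placeholders, not details from this customer's deployment:

import json
import time
import random
from confluent_kafka import Producer

# Hypothetical stream:topic path. Data Fabric streams use path-style
# names like this; the exact path is invented for illustration.
TOPIC = "/telco/telemetry:antenna-metrics"

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder; a fabric client may not need this
    "client.id": "antenna-edge-01",
})

def publish_reading(antenna_id):
    reading = {
        "antenna": antenna_id,
        "ts": time.time(),
        "signal_dbm": round(random.uniform(-110.0, -60.0), 1),
    }
    producer.produce(TOPIC, value=json.dumps(reading).encode("utf-8"))

for i in range(10):
    publish_reading(f"antenna-{i:03d}")
producer.flush()  # block until all telemetry is delivered

Because the producer logic has no deployment-specific wiring, the same container can run at any edge site and publish into the same fabric, which is the portability payoff Smykay describes.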

 

Gardner: There’s that key aspect of the application portability that we’ve danced around a bit. Any other examples that demonstrate the adoption of the HPE Data Fabric and the business pay-offs?

 

Smykay: Another one off the top of my head is a midstream oil and gas customer in the Houston area. This one’s not so much about edge-to-core-to-cloud. This is more about consolidation of use cases.

 

We discussed earlier that we can support both rearview reporting analytics and real-time reporting use cases. In this case, they actually have multiple use cases -- up to about five or six right now. Among them, they are able to do predictive failure reports for heat exchangers. These heat exchangers are deployed regionally, and they are really temperamental: you have to monitor them all the time.

 

But now they have a proactive model where they can run a predictive failure monitor on those heat exchangers just by checking the temperatures from the floor cameras. They bring in all the real-time camera data and they can predict, "Oh, we think we're having an issue with this heat exchanger at this time on this day." So that decreases management cost for them.
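
To give a feel for the kind of predictive monitor being described, here is a toy Python sketch that flags a heat exchanger when its rolling average temperature drifts past a limit; the window size, threshold, and readings are invented for illustration, not the customer's actual model:

from collections import deque
from statistics import mean

# Toy illustration: alert when the rolling average temperature of an
# exchanger exceeds a limit. All numbers here are made up.
WINDOW = 12      # readings in the rolling window
LIMIT_C = 85.0   # alert threshold in degrees Celsius

class ExchangerMonitor:
    def __init__(self, name):
        self.name = name
        self.window = deque(maxlen=WINDOW)

    def add_reading(self, temp_c):
        self.window.append(temp_c)
        if len(self.window) == WINDOW and mean(self.window) > LIMIT_C:
            print(f"{self.name}: predicted failure risk, "
                  f"avg {mean(self.window):.1f} C over last {WINDOW} readings")

mon = ExchangerMonitor("HX-101")
for t in [78, 80, 81, 83, 84, 86, 87, 88, 88, 89, 90, 91]:
    mon.add_reading(t)  # fires once the rolling average crosses the limit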

 

They also gain a dynamic parts management capability for all of the inventory in their warehouses. They can deliver parts faster and reduce their capital expenditure (CapEx) costs, too. And they have gained material measurement balances: when you push oil across a pipeline, they can detect where that balance is off along the pipeline and where they're losing money, because if they are not pushing oil across the pipe at x amount of psi, they're losing money.

 

So they're able to dynamically detect that and fix it along the pipe. They also have a pipeline leak detection capability that they have been working on, which is modeled to detect corrosion and decay.
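
As a back-of-the-envelope illustration of that material-balance check, the sketch below compares the volume metered into each pipeline segment against the volume metered out and flags any segment whose loss exceeds a tolerance; the segment names, volumes, and tolerance are all invented numbers:

# Back-of-the-envelope material-balance check: flag pipeline segments
# whose in/out volume imbalance exceeds an allowable measurement error.
TOLERANCE = 0.005  # 0.5% allowable loss

segments = [
    ("station-A->B", 10_000.0, 9_990.0),  # barrels in, barrels out
    ("station-B->C",  9_990.0, 9_890.0),  # suspicious loss here
    ("station-C->D",  9_890.0, 9_885.0),
]

for name, vol_in, vol_out in segments:
    loss_ratio = (vol_in - vol_out) / vol_in
    status = "FLAG" if loss_ratio > TOLERANCE else "ok"
    print(f"{name}: loss {loss_ratio:.3%} [{status}]")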

 

The point is there are multiple use cases. And because they're able to start putting those data types together and keep building on them, every use case gets stronger and stronger.

 

Gardner: It becomes a virtuous adoption cycle: the more you can use the data generally, the more value you get; and the more you invest in a standard fabric approach, the more use cases pop up. It can become very powerful.

 

This last example also shows the intersection of operational technology (OT) and IT. Together they can start to discover high-level, end-to-end business operational efficiencies. Is that what you’re seeing?

 

Data science teams work together

 

Smykay: Yes, absolutely. A Data Fabric is kind of the common ground -- the Kumbaya -- among these different groups. If they're able to standardize on the IT and developer side, it makes it easier for them to talk the same language. I've seen this with the oil and gas customer: those data science and data engineering teams now work hand in hand, which is where you want to get in your organization. You want those IT teams working with the teams managing your solutions today. That's what I'm seeing. As you get a better, more common data model or fabric, you move faster and you gain management savings by having your people work better together.

 

Gardner: And, of course, when you're able to do data-driven operations, procurement, logistics, and transportation, you get to what we refer to generally as digital business transformation.

 

Chad, how does a Data Fabric approach then contribute to the larger goal of business transformation?

 

Smykay: It allows organizations to work together through a common data framework. That’s been one of the biggest issues I’ve seen, when I come in and say, “Okay, we’re going to start on this use case. Where is the data?”

 

Depending on the size of the organization, you're talking to three to five different groups, and sometimes 10 different people, just to put a use case together. But as you create a common data access method, it becomes easier and easier not only for your use cases, but for your business units to work together on whatever goal you're trying to achieve with your data.

 

Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring how a Data Fabric approach allows information and analytics to reside locally at the edge, yet contribute to a global improvement in optimizing large-scale operations.

 

And we’ve learned how HPE Ezmeral Data Fabric makes modern data management more attainable so businesses can dramatically improve their operational efficiency and innovate from edge to core to clouds.

 


So please join me in thanking our guest, Chad Smykay, Field Chief Technology Officer for Data Fabric at HPE. Thanks so much, Chad.

 

Smykay: Thank you, I appreciate it.

 

Gardner: And a big thank you as well to our audience for joining this sponsored BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the best ways widely inclusive data can be managed for today’s data-rich but too often insights-poor organizations. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.

You may also be interested in: