
Tuesday, June 22, 2021

How Financial Firms Blaze a Trail to New, More Predictive Operational Resilience Capabilities


A transcript of a discussion on new ways that businesses in the financial sector are avoiding and mitigating the damage from today’s myriad business threats

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: ServiceNow and EY.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

The last few years have certainly highlighted the need for businesses of all kinds to build up their operational resilience. With a rising tide of pandemic waves, high-level cybersecurity incidents, frequent technology failures, and a host of natural disasters -- there’s been plenty to protect against.

As businesses become more digital and dependent upon end-to-end ecosystems of connected services, the responsibility for protecting critical business processes has clearly shifted. It’s no longer just a task for IT and security managers but has become top-of-mind for line-of-business owners, too.

Stay with us now as we explore new ways that those responsible for business processes specifically in the financial sector are successfully leading the path to avoiding and mitigating the impact and damage from these myriad threats.

To learn more about the latest in rapidly beefing-up operational resilience by bellwether finance companies, please join me in welcoming Steve Yon, Executive Director of the EY ServiceNow Practice. Welcome, Steve.

Steve Yon: Thanks, I’m happy to be here.

Gardner: We’re also here with Sean Culbert, Financial Services Principal at EY. Good to have you with us, Sean.

Sean Culbert: Good afternoon, Dana.

Gardner: Sean, how have the risks modern digital businesses face changed over the past decade? Why are financial firms at the vanguard of identifying and heading off these pervasive risks?

Culbert: Financial firms span a broad range of types. The risks for a consumer bank, for example, are going to be different from the risks for an investment bank or a broker-dealer. But they all have some common threads. Those include the expectation to be always-on, at the edge, and able to get to your data in a reliable and secure way.

There’s also the need for integration across the ecosystem. Unlike product sets before, such as in retail brokerage or insurance, customers expect to be brought together in one cohesive services view. That includes more integration points and more application types.

This all needs to be on the edge and always-on, even as it includes, increasingly, reliance on third-party providers. They need to walk in step with the financial institutions in a way that they can ensure reliability. In certain cases, there’s a learning curve involved, and we’re still coming up that curve.

Expectations keep shifting toward the edge. It’s different by category, but the themes of integrated product lines -- and being able to move across those product lines and integrate with third parties -- have certainly created complexity.

Gardner: Steve, when you’re a bank or a financial institution that finds itself in the headlines for bad things, that is immediately damaging for your reputation and your brands. How are banks and other financial organizations trying to be rapid in their response in order to keep out of the headlines?

Interconnected, system-wide security

Yon: It’s not just about having the wrong headline on the front cover of American Banker. As Sean said, the taxonomy of all these services is becoming interrelated. The suppliers tend to leverage the same services.

Products and services tend to cross different firms. The complexity of the financial institution space right now is high. If something starts to falter -- because everything is interconnected -- it could have a systemic effect, which is what we saw several years ago that brought about Dodd-Frank regulations.

So having a good understanding of how to measure and get telemetry on that complex makeup is important, especially in financial institutions. It’s about trust. You need to have confidence in where your money is and how things are going. There’s a certain expectation that must happen. You must deal with that despite mounting complexity. The notion of resiliency is critical to a brand promise -- or customers are going to leave.

One, you should contain your own issues. But the Fed is going to worry about it if it becomes broad because of the nature of how these firms are tied together. It’s increasingly important -- not only from a brand perspective of maintaining trust and confidence with your clients -- but also from a systemic nature; of what it could do to the economy if you don’t have good reads on what’s going on with support of your critical business services.

Gardner: Sean, the words operational resilience come with a regulatory overtone. But how do you define it?

The operational resilience pyramid

Culbert: We begin with the notion of a service. Resilience is measured, monitored, and managed around the availability, scalability, reliability, and security of that service. Understanding what the service is from an end-to-end perspective -- how it enters and exits the institution -- is the center of our universe.

Around that we have inbound threats to operational resilience. From the threat side, you want the capability to withstand a robust set of inbound threats. And for us, one of the important things that has changed in the last 10 years is the sophistication and complexity of the threats. And the prevalence of them, quite frankly.

If you look at the four major threat categories we work with -- weather, cyber, geopolitical, and pandemics -- pick any one of those and there has been a significant change in those categories. We have COVID, we have proliferation of very sophisticated cyber attacks that weren’t around 10 years ago, often due to leaks from government institutions. Geopolitically, we’re all aware of tensions, and weather events have become more prevalent. It’s a wide scope of inbound threats.

And on the outbound side, businesses need the capability to not only report on those things, but to make decisions about how to prevent them. There’s a hierarchy in operational resilience. Can you remediate it? Can you fix it? Then, once it’s been detected, can you minimize the damage? And at the top of the pyramid, can you prevent it before it hits?

So, there’s been a broad scope of threats against a broader scope of service assets that need to be managed with remediation. That was the heritage, but now it’s more about detection and prevention.

Gardner: And to be proactive and preventative, operational resilience must be inclusive across the organization. It’s not just one group of people in a back office somewhere. The responsibility has shifted to more people -- and with a different level of ownership.

What’s changed over the past decade in terms of who’s responsible and how you foster a culture of operational resiliency?

Bearing responsibility for services

Culbert: The anchor point is the service. And services are processes: It’s technology, facilities, third parties, and people. The hard-working people in each one of those silos all have their own view of the world -- but the services are owned by the business. What we’ve seen in recognition of that is that the responsibility for sustaining those services falls with the first line of business [the line of business interacting with consumers and vendors at the transaction level].

Yon: There are a couple of ways to look at it. One, as Sean was talking about, the lines of defense and the evolution of risk management have divvied up responsibilities, with line-of-sight ownership over certain sets of accountabilities. But you also have triangulation from others needing to inspect and audit those things as well.


The time is right for the new type of solution that we’re talking about now. One, because the nature of the world has gotten more complex. Two, the technology has caught up with those requirements.

The move within the tech stack has been to become more utility-based, service-oriented, and objectified. The capability to get signals on how everything is operating, and its status within that universe of tech, has become a lot easier. And with the technology now being able to integrate across platforms and operate at the service level -- versus at the component level -- it provides a view that would have been very hard to synthesize just a few years ago.

What we’re seeing is a big shot in the arm to the power of what a typical risk resilience compliance team can be exposed to. They can manage their responsibilities at a much greater level.

Before they would have had to develop business continuity strategies and plans to know what to do in the event of a fault or a disruption. And when those things come out, the three-ring binders, the war room gets assembled and people start to figure out what to do. They start running the playbook.

The problem with that is that while they’re running the playbook, the fault has occurred, the destruction has happened, and the clock is ticking for all those impacts. The second-order consequences of the problem are starting to amass with respect to value destruction, brand reputational destruction, as well as whatever customer impacts there might be.

But now, because of technology and a move toward Internet of Things (IoT) thinking across assets, people, facilities, and third-party services, those components can self-declare their state. That data can be synthesized to say, “Okay, I can start to pick up a signal that’s telling me a fault is inbound,” or that something looks like it’s falling out of the control thresholds that have been set.

That tech now gives me the capability to get out in front of something. That would be almost unheard-of years ago. The nexus of tech, need, and complexity are all hitting right now. That means we’re moving and pivoting to a new type of solution rising out of the field.

Gardner: You know, many times we’ve seen such trends happen first in finance and then percolate out to the rest of the economy. What’s happened recently with banking supervision, regulations, and principles of operational resilience?

Financial sector leads the way

Yon: There are similar forms of pressure coming from all regulatory-intense industries. Finance is a key one, but there’s also power, utilities, oil, and gas. The trend is happening primarily first in regulatory-intensive industries.

Culbert: A couple of years ago, the Bank of England and the Prudential Regulation Authority (PRA) put out a consultation paper that was probably the most prescriptive to come out of the UK. We have the equivalent here in the US around expectations for operational resiliency, and that has just made its way into policy or law. For the most part, on a principles basis, we all share a common philosophy in terms of what’s prudent.

A lot of the major institutions, the ones we deal with, have looked at those major tenets in these policies and have said they will be practiced. And there are four fundamental areas that the institutions must focus on.

One is, can it declare and describe its critical business services? Does it have threshold parameter logic assigned to those services, so that it knows how far it can go before it sustains damage across several different categories? Are the assets that support those services known and mapped? Are they in a place where we can point to them and to the health of them? If there’s an incident, can they collaborate around sustaining those assets?

As I said earlier, those assets generally fall into a small set of categories: people, facilities, third parties, and technology. And, finally, do you have the tools in place to keep those services within those tolerance parameters, and alerting systems to let you know which of the assets may be failing you if the services are at risk?

That’s a lay-person, high-level description of the Bank of England policy on operational risks for today’s Financial Management Information Systems (FMIS). Thematically most of the institutions are focusing on those four areas, along with having credible and actionable testing schemes to simulate disruptions on the inbound side.

In the US, Dodd-Frank mandated that institutions declare which of those services could disrupt critical operations and, if those operations were disrupted, whether they could in turn disrupt the general economy. The operational resilience rules and regulations fall back on that. So, now that you know what they are, can you risk-rate them based on the priorities of the bank and its counterparties? Can you manage them correctly? That’s the letter-of-the-law-type regulation here. In Japan, it’s more principles-based regulation, like the Bank of England’s. It all falls into those common categories.

Gardner: Now that we understand the stakes and imperatives, we also know that the speed of business has only increased. So has the speed of expectations for end consumers. The need to cut time to discovery of the problems and to find root causes also must be as fast as possible.

How should banks and other financial institutions get out in front of this? How do we help organizations move faster to their adoption, transform digitally, and be more resilient to head off problems fast?

Preventative focus increases

Yon: Once there’s clarity around the shift in the goals, knowing it’s not good enough to just be able to know what to do in the event of a fault or a potential disruption, the expectation becomes the proof to regulatory bodies and to your clients that they should trust you. You must prove that you can withstand and absorb that potential disruption without impact to anybody else downstream. Once people get their head around the nature of the expectation-shifting to being a lot more preventative versus reactive, the speeds and feeds by which they’re managing those things become a lot easier to deal with.

Back when I was running the technology at a super-regional bank, you’d get the phone call at 3 a.m. that a critical business service was down. You’d have the tech phone call where people were trying to figure out what happened, because the help desk had started to notice that a number of clients and customers were complaining. The clock had been ticking before 3 a.m. when I got the call. And so, by that time, those clients were upset.

Yet we were spending our time trying to figure out what happened and where. What’s the overall impact? Are there other second-order impacts because of the nature of the issue? Are other services disrupted as well? Again, it gets back to the complexity factor. There are interrelationships between the various components that make up any service. Those services are shared because that’s how it is. People lean on those things -- and that’s the risk you take.

Before, the lack of speed literally killed because you had to figure a lot of those things out while the clock was ticking and the impact was going on. But now, you’re allowing yourself time to figure things out. That’s what we call a decision-support system. You want to alert ahead of time to ensure that you understand the true blast area of what the potential destruction is going to be.

Secondly, can I spin up the right level of communications so that everybody who could be affected knows about it? And thirdly, can I now get the right people on the call -- versus hunting and pecking to determine who has a problem on the fly at 3 a.m.?

The nature of having speed is that it buys time for firms to deal with an issue intelligently -- versus taking a shotgun approach without truly understanding the nature of the impact until the next day.

Gardner: Sean, it sounds like operational resiliency is something that never stops. It’s an ongoing process. That’s what buys you the time, because you’re always trying to anticipate. Is that the right way to look at it?

Culbert: It absolutely is the way to look at it. A time objective may be specific to the type of service, and obviously it’s going to be different from a consumer bank to a broker-dealer. You will have a time objective attached to a service, but is that a critical service that, if disrupted, could further disrupt critical operations that could then disrupt the real economy? That’s come into focus in the last 10 years. It has forced people to think through: If you were a broker-dealer and you couldn’t meet your hedge fund positions, or if you were a consumer bank and you couldn’t get folks their paychecks, does that put people in financial peril?

These involve very different processes and have very different outcomes. But each has a fill-in-the-blank time tolerance. So now it’s more a matter of being accountable for those times. There are two things: There’s the customer expectation that you won’t breach those tolerances and that you’ll meet the time objective for the customers’ needs.

And the second is that technology has made the domino, or contagion, effect of one service tipping over another more manageable. So now it’s not just, “Is your service ready to go within its objective of half an hour?” It’s about the knock-on effect on other services as well.

So, it’s become a lot more correlated, and it’s become regional. Something that might be a critical service in one business, might not be in another -- or in one region, might not be in another. So, it’s become more of a multidimensional management problem in terms of categorically specific time objectives against specific geographies, and against the specific regulations that overhang the whole thing.

Gardner: Steve, you mentioned earlier taking the call at 3 a.m. It seems to me that we have a different way of looking at this now -- not just taking the call but making the call. What’s the difference between taking the call and making the call? How does that help us prepare for better operational resiliency?

Make the call, don’t take the call

Yon: It’s a fun way of looking at a day in the life of your chief resiliency officer or chief risk officer (CRO), and how it could go when something bad happens. So, you could take the call from the CEO or someone from the board as they wonder why something is failing. What are you going to do about it?

You’re caught on your heels trying to figure out what was going on, versus making the call to the CEO or the board member to let them know, “Hey, these were the potential disruptions that the firm was facing today. And this is how we weathered through it without incident and without damaging service operations or suffering service operations that would have been unacceptable.”

We like to think of it as preventing not only the impact to the clients but also the possibility of a systemic problem. It could potentially increase the lifespan of a CRO by showing they can be responsible for the firm’s up-time, versus just answering questions post-disruption. It provides a little bit of levity, but it’s also true that there are consequences not just for the clients, but also for those people responsible for that function within the firm.

Gardner: Many leading-edge organizations have been doing digital transformation for some time. We’re certainly in the thick of digital transformation now after the COVID requirements of doing everything digitally rather than in person.

But when it comes to finance and the services that we’re describing -- the interconnections in the interdependencies -- there are cyber resiliency requirements that cut across organizational boundaries. Having a moat around your organization, for example, is no longer enough.

What is it about the way that ServiceNow and EY are coming together that makes operational resiliency possible as an ongoing process?

Digital transformation opens access

Yon: There are two components. You need to ask yourself, “What needs to be true for the outcome that we’re talking about to be valid?” From a supply side, what needs to be true is: “Do I have good signal and telemetry across all the components, assets, and resources that could pose a threat -- or cause a threat -- of a down service?”

With the move to digital transformation, more assets and resources that compose any organization are now able to be accessed. That means the state of any particular asset, in terms of its preferential operating model, are going to be known. I need to have that data and that’s what digital transformation provides.

Secondly, I need a platform that has wide integration capabilities and that has workflow at its core. Can I perform business logic and conditional synthesis to interpret the signals that are coming from all these different systems?

That’s what’s great about ServiceNow -- there hasn’t been anything that it hasn’t been able to integrate with. Then it comes down to, “Okay, do I understand the nature of what it is I’m truly looking for as a business service and how it’s constructed?” Once I do that, I’m able to capture that control, if you will, determine its threshold, see that there’s a trigger, and then drive the workflows to get something done.

For a hypothetical example, we’ve had an event so that we’re losing the trading floor in city A, therefore I know that I need to bring city B and its employees online and to make them active so I can get that up and running. ServiceNow can drive that all automatically, within the Now Platform itself, or drive a human to provide the approvals or notifications to drive the workflows as part of your business continuity plan (BCP) going forward. You will know what to do by being able to detect and interpret the signals, and then based on that, act on it.
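As an illustration only, the failover logic in that hypothetical could be sketched in plain Python. In practice this would be Now Platform workflow logic, not standalone code; every name here (the sites, the event fields, the notify hook) is invented for the sketch:

```python
# Illustrative-only sketch of the BCP failover described above: when the
# primary trading site goes down, deactivate it, activate the backup site,
# and notify the approvers. All names and fields here are hypothetical.
def run_bcp(event, sites, notify):
    if event["type"] == "site_down" and sites[event["site"]]["role"] == "primary":
        backup = next(name for name, cfg in sites.items() if cfg["role"] == "backup")
        sites[event["site"]]["active"] = False   # take the failed site offline
        sites[backup]["active"] = True           # bring the alternate site up
        notify(f"Failing over trading floor from {event['site']} to {backup}")
    return sites

sites = {
    "city_a": {"role": "primary", "active": True},
    "city_b": {"role": "backup", "active": False},
}
run_bcp({"type": "site_down", "site": "city_a"}, sites, print)
```

The point of the sketch is the shape of the workflow -- detect, decide, act, notify -- not the mechanics; in ServiceNow the "notify" step could instead route an approval to a human before the failover proceeds.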

That’s what ServiceNow brings to make the solution complete. I need to know what that service construction is and what it means within the firm itself. And that’s where EY comes to the table, and I’ll ask Sean to talk about that.

Culbert: ServiceNow brings to the table what we need to scale and integrate in a logical and straightforward way. Without having workflows that are cross-silo and cross-product at scale -- and with solid integration of capabilities -- this just won’t happen.

When we start talking about the signals from everywhere against all the services -- it’s a sprawl. From an implementation perspective, it feels like it’s not implementable.

The regulatory burden requires focus on what’s most important, and why it’s most important to the market, the balance sheet, and the customers. And that’s not for the 300 services, but for the one or two dozen services that are important. Knowing that gives us a big step forward by being able to scope out the ServiceNow implementation.

And from there, we can determine what dimensions associated with that service we should be capturing on a real-time basis. To progress from remediation to detection on to prevention, we must be judicious of what signals we’re tracking. We must be correct.

We have the requirement and obligation to declare and describe what is critical using a scalable and integrable technology, which is ServiceNow. That’s the big step forward.

Yon: The Now Platform also helps us to be fast. If you look under the hood of most firms, you’ll find ServiceNow is already there. You’ll see that there’s already been work done in the risk management area. They already know the concepts and what it means to deal with policies and controls, as well as the triggers and simulations. They have IT and other assets under management, and they know what a configuration management database (CMDB) is.

These are all accelerants that not only provide scale to get something done but provide speed because so many of these assets and service components are already identified. Then it’s just a matter of associating them correctly and calibrating it to what’s really important so you don’t end up with a science fair integration project.

Gardner: What I’m still struggling to thread together is how the EY ServiceNow alliance operational resiliency solution becomes proactive as an early warning system. Explain to me how you’re able to implement this solution in such a way that you’re going to get those signals before the crisis reaches a crescendo.

Tracking and recognizing faults

Yon: Let’s first talk about EY, which brings an industry understanding of what good looks like for a critical business service. We’re able to hone in on, for example, payments or trading. EY maps the deconstruction of that service, which we also bring as an accelerant.

We know what it looks like -- all the different resources, assets, and procedures that make that critical service active. Then, within ServiceNow, it manages and exposes those assets. We can associate those things in the tool relatively quickly. We can identify the signal that we’re looking to calibrate on.

Then, based on what ServiceNow knows how to do, I can put a control parameter on this service or component within the threshold. It then gives me an indication whether something might be approaching a fault condition. We basically look at all the different governance, risk management, and compliance (GRC) leading indicators and put telemetry around those things when, for example, it looks like my trading volume is starting to drop off.

Long before it drops to zero, is there something going on elsewhere? It delivers up all the signals about the possible dimensions that can indicate something is not operating per its normal expected behavior. That data is then captured, synthesized, and displayed either within ServiceNow or it is automated to start running its own tests to determine what’s valid.

But at the very least, the people responsible are alerted that something looks amiss. It’s not operating within the control thresholds already set up within ServiceNow against those assets. This gives people time to then say, “Okay, am I looking at a potential problem here? Or am I just looking at a blip and it’s nothing to worry about?”
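The control-threshold signal Yon describes -- a metric like trading volume drifting below tolerance long before it hits zero -- can be sketched in a few lines. This is an illustrative stand-in, not ServiceNow’s actual mechanism; the metric name, floor, and window values are assumptions:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical control threshold for a monitored service signal.
# Field names and values are illustrative, not a real ServiceNow schema.
@dataclass
class ControlThreshold:
    metric: str
    floor: float   # alert if the signal's recent average falls below this
    window: int    # number of recent samples to average

    def breached(self, samples: list[float]) -> bool:
        return mean(samples[-self.window:]) < self.floor

# Trading volume drifting toward a fault long before it reaches zero
volume = [1000, 980, 990, 610, 580, 555]
control = ControlThreshold(metric="trade_volume", floor=700.0, window=3)

if control.breached(volume):
    print(f"ALERT: {control.metric} below control threshold")
```

The alert fires while volume is still well above zero, which is the whole argument: the people responsible get time to decide whether it is a real inbound fault or just a blip.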

Gardner: It sounds like there’s an ongoing learning process and a data-gathering process. Are we building a constant mode of learning and automation of workflows? Do we get a whole greater than the sum of the parts after a while?

Culbert: The answer is yes and yes. There’s learning and there’s automation. We bring to the table some highly effective regulatory risk models. There’s a five-pillar model that we’ve used where market and regulatory intelligence feeds risk management, surveillance, analysis, and ultimately policy enforcement.

And the five pillars work together within ServiceNow just as they work together within the business processes of the organization. That’s where market and regulatory intelligence feeds risk management, surveillance, analysis, and enforcement. That workflow is the differentiator, allowing rapid understanding of whether it’s an immediate risk or a concentrating risk.

And obviously, no one is going to be 100 percent perfect, but having context and perspective on the origin of the risk helps determine whether it’s a new risk -- something that’s going to create a lot of volatility -- or whether it’s something the institution has faced before.

We rationalize that risk -- and, more importantly, rationalize the lack of a risk -- to know at the onset if it’s a false positive. It’s an essential market and regulatory intelligence mechanism. Are they feeding us only the stuff that’s really important?

Our risk models tell us that. A risk model usually takes on a couple of different flavors. One flavor is similar to a FICO score: Have you seen the risk before? Can it be characterized by how it presented and how it was managed in the past?

And then some models are more akin to a VaR (value-at-risk) calculator. What kind of volatility is this risk going to bring to the bank? Is it somebody that’s recreationally trying to get into the bank, or is it a state actor?
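A toy version of those two model flavors -- a familiarity score and a volatility estimate -- purely to make the idea concrete. The scores, weights, and field names are all invented and are not EY’s actual models:

```python
# Hedged sketch of the two risk-model flavors described: a FICO-like
# familiarity score (have we managed this risk before?) and a VaR-like
# volatility estimate (how much disruption could it bring?).
def familiarity_score(risk, history):
    """Higher when the risk resembles ones managed before."""
    seen = sum(1 for past in history if past["category"] == risk["category"])
    return min(100, 40 + 20 * seen)   # unfamiliar risks start low

def volatility_estimate(risk):
    """Crude severity multiplier: state actors weigh far more than hobbyists."""
    actor_weight = {"recreational": 1, "criminal": 3, "state": 10}
    return risk["base_impact"] * actor_weight[risk["actor"]]

history = [{"category": "phishing"}, {"category": "phishing"}]
risk = {"category": "phishing", "actor": "recreational", "base_impact": 2}
print(familiarity_score(risk, history), volatility_estimate(risk))  # 80 2
```

A familiar, low-volatility event like this one can be triaged quietly; an unfamiliar, state-actor event would score low on familiarity and high on volatility, which is what escalates it up the communication chain described next.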

Once the false positive gets escalated and disposed of -- if it’s, in fact, a false positive -- are we able to plug it into something robust enough to surveil for where that risk is headed? That’s the only way to get out in front of it.

The next phase of the analysis says, “Okay, who should we talk to about this? How do we communicate that this is bigger than a red box, much bigger than a red box, a real crisis-type risk? What form does that communication take? Is it a full-blown crisis management communication? Is it a standing management communication or protocol?”

And then ultimately, this goes to ServiceNow, so we take that affected function and very quickly understand the health or the resiliency of other impacted functions. We use our own proprietary model. It’s a military model used for nuclear power plants, and it helps to shift from primary states to alternative states, as well as to contingency and emergency states.

At the end, the person who oversees policy enforcement must have the tools to understand whether they should be fixing the primary-state issue or moving on from it. They must know when to step aside or shift into an emergency state.

From our perspective, it is constant learning. But there are fundamental pillars that these events flow through that deliver the problem to the right person and give that person options for minimizing the risk.

Gardner: Steve, do we have any examples or use cases that illustrate how alerting the right people with the right skills at the right time is an essential part of resuming critical business services or heading off the damage?

Rule out retirement risks

Yon: Without naming names, we can look at a client within Europe, the Middle East and Africa (EMEA). One of the things the pandemic brought to light is the need to know our posture for continuing to operate the way we want. Getting back to integration and integrability, where are we going to get a lot of that personnel information from? Workday, their human resources (HR) system of record, of course.

Now, they had a critical business service owner who was going to be retiring. That sounds great. That’s wonderful to hear. But one of the valid things for this critical business service to be considered operating in its normal state is to check for an owner. Who will cut through the issues and process and lead going forward?

If there isn’t an owner identified for the service, the service would be considered at risk. It may not be capable of maintaining its continuity. So, here’s a simple use case where someone could be looking at a trigger from Workday that asks if this leadership person is still in the role and active.

Is there a control around identifying if they are going to become inactive within x number of months’ time? If so, get on that because the regulators will look at these processes potentially being out of control.

There’s a simple use case that has nothing to do with technology but shows the integrability of ServiceNow into another system of record. It turns ServiceNow into a decision-support platform that drives the right actions and orchestrates timely actions -- not only to detect a disruption but anything else considered valid as a future risk. Such alerts give the time to get it taken care of before a fault happens.
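That ownership control can be sketched in a few lines. This is a hypothetical illustration -- the field names, the six-month window, and the record shape below are assumptions for the example, not actual Workday or ServiceNow schema:

```python
# Hypothetical sketch of the ownership control described above: a
# critical business service is flagged at risk if its owner (per an
# HR record) is inactive, or will become inactive within a
# configurable window. Field names are illustrative only.
from datetime import date, timedelta

def service_at_risk(owner_record, window_months=6, today=None):
    """Return True if the service lacks a viable, continuing owner."""
    today = today or date.today()
    if owner_record is None or not owner_record.get("active", False):
        return True  # no identified, active owner at all
    end_date = owner_record.get("planned_end_date")  # e.g., retirement
    if end_date is None:
        return False  # owner active with no planned departure
    horizon = today + timedelta(days=window_months * 30)
    return end_date <= horizon

# Example: an owner retiring in about three months trips a
# six-month look-ahead control.
owner = {"active": True, "planned_end_date": date(2021, 9, 30)}
print(service_at_risk(owner, window_months=6, today=date(2021, 6, 22)))  # True
```

A check like this would run on a schedule or on an HR-record change event, raising the alert well before the fault -- which is exactly the point of the use case.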

Gardner: The EY ServiceNow alliance operational resilience solution is under the covers but it’s powering leaders’ ability to be out in front of problems. How does the solution enable various levels of leadership personas, even though they might not even know it’s this solution they’re reacting to?

Leadership roles evolve

Culbert: That’s a great question. For the last six to seven years, we’ve all heard about the shift of primary risk ownership from the second line to the first line in the private sector. I’ve heard on many occasions a first-line business manager say, “You know, if it is my job, first I need to know what the scope of my responsibilities is, and I need the tools to do my job.” And that persona of the frontline manager needs good data -- not a false positive, not something eating at his or her ability to make money -- but options for where to go to minimize the issue.

The personas are clearly evolving. It was difficult for risk managers to move solidly into the first line without these types of tools. And there were interim management levels, too -- someone who sat between the first and the second line, call it line 1.5. And it’s clearly pushing into the first line. How do they know their own scope as it relates to the risk to the services?

Now there’s a tool these personas can use to be not only responsible for risk but responsive as well. And that’s a big thing in terms of the solution design. With ServiceNow over the last several years, if the base data is correctly managed, then reconfiguring the data and recalibrating the threshold logic to accommodate a certain persona is not a coding exercise. It’s a no-code step forward to say, “Okay, this is now the new role and scope, and that role and scope will be enabled in this way.” And that directs the latest signals and options to the right person.

But it’s all about the definition of a service. Do we all agree end-to-end what it is, and the definition of the persona? Do we all understand who’s accountable and who’s responsible? Those two things are coming together with a new set of tools that are right and correct.

Yon: Just to go back to the call at 3 a.m., that was a tech call. But typically, what happens is there’s also going to be the business call. So, one of the issues we’re also solving with ServiceNow is in one system we manage the nature of information irrespective of what your persona is. You have a view of risk that can be tailored to what it is that you care about. And all the data is congruent back and forth.

It becomes a lot more efficient and accurate for firms to manage a shared understanding of what’s going on when it’s not just the tech community talking. The business community wants to know what’s happening -- and what’s next? And then someone can translate in between. This is a real-time way for all those personas to become aligned around the nature of the issue, each from their own perspective.

Gardner: I really look forward to the next in our series of discussions around operational resilience because we’re going to learn more about the May announcement of this solution.

But as we close out today’s discussion, let’s look to the future. We mentioned earlier that almost any highly regulated industry will be facing similar requirements. Where does this go next?

It seems to me that the more things like machine learning (ML) and artificial intelligence (AI) analyze the many sources of data, they will make it even more powerful. What should we look for in terms of even more powerful implementations?

AI to add power to the equation

Culbert: When you set up the framework correctly, you can apply AI to thin out false positives and to tag events as credible or not credible risk events. AI can also be used to direct these signals to the right decision makers. But instead of taking the human analyst out of the equation, AI is going to help us. You can’t do it without that framework.
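The triage step Culbert describes -- score each event’s credibility, drop the noise, and route only credible events to a decision maker -- can be sketched simply. The fixed weights and field names below are stand-in assumptions; a real deployment would use a trained model rather than hand-set scores:

```python
# Hypothetical sketch of AI-assisted signal triage: score each
# incoming event's credibility and pass only credible ones through
# to a human decision maker. The scoring is a deliberate stand-in
# for a trained model.

WEIGHTS = {
    "confirmed_outage": 0.6,    # corroborated by monitoring
    "multiple_sources": 0.3,    # reported through more than one channel
    "in_business_hours": 0.1,   # occurred when impact is highest
}

def credibility(event):
    """Sum the weights of the risk indicators present on an event."""
    return sum(w for key, w in WEIGHTS.items() if event.get(key))

def triage(events, threshold=0.5):
    """Keep only events credible enough to route onward."""
    return [e for e in events if credibility(e) >= threshold]

events = [
    {"id": 1, "confirmed_outage": True, "multiple_sources": True},  # 0.9
    {"id": 2, "in_business_hours": True},                           # 0.1
]
print([e["id"] for e in triage(events)])  # [1]
```

The framework matters more than the scoring: without agreed definitions of services and thresholds, there is nothing reliable for a model to score against.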

Yon: When you enable these different sets of data coming in for AI, you start to ask, “Okay, what do I want the picture to look like, and what is my ability to simulate these things?” The possibilities all go up, especially using ServiceNow.

But back to the comment on complexity, and the fact that suppliers don’t supply just one client -- they connect to many. As this takes hold in the regulated industries, and it becomes more of an expectation for a supplier to operate this way and provide the signals, integration points, telemetry, and transparency that people expect, anybody else trying to move into this space gets the lift and the benefit from suppliers who realize that the bar for playing in this game just went up. Those benefits become available to a much broader landscape of industries and suppliers.

Gardner: When we put two and two together, we come up with a greater sum. We’re going to be able to deal rapidly with the known knowns, as well as be better prepared for the unknown unknowns. So that’s an important characteristic for a much brighter future -- even if we hit another unfortunate series of risk-filled years such as we’ve just suffered.

I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on the need for businesses to build up their operational resilience.

And we’ve learned how those responsible for business processes in the financial sector specifically are successfully leading the charge to avoid and mitigate the impact and damage from myriad business threats. These new imperatives to achieve operational resilience are sure to spread soon across the global economy.

So please join me in thanking our guests, Steve Yon, Executive Director of the EY ServiceNow Practice. Thank you so much, Steve.

Yon: Thank you.

Gardner: And we’ve also been with Sean Culbert, Financial Services Principal at EY. Thank you so much.

Culbert: Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect operational resilience innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of ServiceNow- and EY-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your business community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: ServiceNow and EY.

A transcript of a discussion on new ways that businesses in the financial sector are avoiding and mitigating the damage from today’s myriad business threats. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.

You may also be interested in:

Thursday, January 30, 2020

Intelligent Spend Management Supports Better Decision-Making Across Modern Business Functions


Transcript of a discussion on how a data-rich view of spend patterns across corporate services, hiring, and goods reduces risk, spurs new business models, and helps develop better strategic decisions.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next thought leadership discussion on attaining intelligent spend management explores the findings of a recent IDC survey on paths to holistic business processes improvement.

We will now learn how a long history of legacy systems and outdated methods holds companies back from their potential around new total spend management optimization. The payoffs on gaining such a full and data-rich view of spend patterns across services, hiring, and goods includes reduced risk, new business models, and better strategic decisions.

To help us chart the future of intelligent spend management, and to better understand how the market views these issues, we are joined by Drew Hofler, Vice President of Portfolio Marketing at SAP Ariba and SAP Fieldglass. Welcome back, Drew.

Drew Hofler: Thanks, Dana. It’s great to be with you again.

Gardner: What trends or competitive pressures are prompting companies to seek better ways to get a total spend landscape view? Why are they incentivized to seek broader insights?

Hofler: After years of grabbing best-of-breed or niche solutions for various parts of the source-to-pay process, companies are reaching the limits of this siloed approach. Companies are now being asked to look at their vendor spend as a whole. Whereas before they would look just at travel and expense vendors, or services procurement, or indirect or direct spend vendors, chief procurement and financial officers now want to understand what’s going on with spend holistically.

And, in fact, from the IDC report you mentioned, we found that 53 percent of respondents use different applications for each type of vendor spend that they have. Sometimes they even use multiple applications within a process for specific types of vendor spend. In fact, we find that a lot of folks have cobbled together a number of different things -- from in-house billing to niche vendors – to keep track of all of that.

Managing all of that when there is an upgrade to one particular system -- and having to test across the whole thing -- is very difficult. They also have trouble being able to reconcile data back and forth.


One of our competitors, for example -- to show how this Frankenmonster approach has taken root -- tried to build a platform for every source and category of spend across the entire source-to-pay process by acquiring 14 different companies in six years. That creates a patchwork of applications with a skin of user interfaces across the top for people to enter data, but the data is disconnected. The processes are disconnected. You have to manage all of the different code bases. It’s untenable.

Gardner: There is a big technology component to such a patchwork, but there’s a people level to this as well. More and more we hear about the employee experience and trying to give people intelligent tools to make higher-level decisions and not get bogged down in swivel-ware and cutting and pasting between apps. What do the survey results tell us about the people, process, and technology elements of total spend management?

Unified data reconciliation

Hofler: It really is a combination of people, process, and technology that drives intelligent spend. It’s the idea of bringing together every source, every category, every buying channel for all of your different types of vendor spend so that you can reconcile on the technology side; you can reconcile the data.

For example, one of the things that we are building is master vendor unification across the different types of spend. A vendor that you see -- IBM, for example -- in one system is going to be the same as in another system. The data about that vendor is going to be enriched by the data from all of the other systems into a unified platform. But to do that you have to build upon a platform that uses the same micro-services and the same data that reconciles across all of the records so that you’re looking at a consistent view of the data. And then that has to be built with the user in mind.
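The master-vendor unification described here can be pictured as record merging across systems. Below is a minimal, hypothetical sketch -- matching on a normalized name is a deliberate simplification of real entity-resolution logic, and every field name and value is illustrative, not actual SAP schema:

```python
# Hypothetical sketch of master-vendor unification: records for the
# same supplier arriving from different systems are merged into one
# enriched master record. Name normalization stands in for real
# entity-resolution logic; all fields are illustrative.

def normalize(name):
    """Reduce a vendor name to lowercase alphanumerics for matching."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def unify(records):
    """Merge per-system vendor records into masters keyed by name."""
    masters = {}
    for rec in records:
        key = normalize(rec["name"])
        master = masters.setdefault(key, {"name": rec["name"], "sources": []})
        master["sources"].append(rec["system"])
        # Enrich the master with any fields the other systems lacked.
        for field, value in rec.items():
            master.setdefault(field, value)
    return masters

records = [
    {"system": "indirect", "name": "IBM",    "vendor_id": "123456789"},
    {"system": "travel",   "name": "I.B.M.", "payment_terms": "NET30"},
]
master = unify(records)["ibm"]
print(master["sources"])  # ['indirect', 'travel']
print(master["vendor_id"], master["payment_terms"])
```

The effect is what Hofler describes: the vendor you see in one system is recognizably the same vendor in another, and each system’s data enriches a single master view.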

So when we talk about every source, category, and channel of spend being unified under a holistic intelligent spend management strategy, we are not talking about a monolithic user experience. In fact, it’s very important that the experience of the user be tailored to their particular role and to what they do. For example, if I want to do my expenses and travel, I don’t want to go into a deep, sourcing-type of system that’s very complex and based on my laptop. I want to go into a mobile app. I want to take care of that really quickly.

On the other hand, if I’m sourcing some strategic suppliers I certainly can’t do that on just a mobile app. I need data, details, and analysis. And that’s why we have built the platform underneath it all to tie this together even while the user interfaces and the experience of the user is exactly what they need.

When we did our spend management survey with IDC, we had more than 800 respondents across four regions. The survey showed a high amount of dissatisfaction because of the wide-ranging nature of how expense management systems interact. Some 48 percent of procurement executives said they are dissatisfied with spend management today. It’s kind of funny to me because the survey showed that procurement itself had the highest level of dissatisfaction. They are talking about their own processes. I think that’s because they know how the sausages are being made.

Gardner: Drew, this dissatisfaction has been pervasive for quite a while. As we examine what people want, how did the survey show what is working? What gives them the data they need, and where does it go next?

Let go of patchwork 

Hofler: What came out of the survey is that part of the reason for that dissatisfaction is the multiple technologies cobbled together, with lots of different workflows. There are too many of those, too much data duplication, too many discrepancies between systems, and it doesn’t allow companies to analyze the data, to really understand in a holistic view what’s going on.

In fact, 47 percent of the procurement leaders said they still rely on spreadsheets for spend analysis, which is shocking to me, having been in this business for a long time. But we are much further along the path in helping that out by reconciling master data around suppliers so they are not duplicating data.

It’s also about tying together, in an integrated and seamless way, the entire process across different systems. That allows workflow to not be based on the application or the technology but on the required processes. For example, when it comes to installing some parts to fix a particular machine, you need to be able to order the right parts from the right suppliers but also to coordinate that with the right skilled labor needed to install the parts.

If you have separate systems for your services, skilled labor, and goods, you may be very disconnected. There may be parts available but no skilled labor at the time you need in the area you need. Or there may be the skilled labor but the parts are not available from a particular vendor where that skilled labor is.

What we’ve built at SAP is the ability to tie those together so that the system can intelligently see the needs, assess the risks such as fluctuations in the labor market, and plan and time that all together. You just can’t do that with cobbled together systems. You have to be able to have a fully and seamlessly integrated platform underneath that can allow that to happen.
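The parts-and-labor coordination problem described above is, at its core, an intersection of availability. The sketch below is a hypothetical simplification -- regions and ISO weeks are assumed data shapes, and real planning would also weigh rates, risk, and lead times:

```python
# Hypothetical sketch of coordinating parts with skilled labor: a
# work order is schedulable only in regions and weeks where both
# the part and a qualified technician are available. Data shapes
# (region -> set of available ISO weeks) are illustrative.

def schedulable_slots(part_avail, labor_avail):
    """Intersect part and labor availability per region."""
    slots = {}
    for region in part_avail.keys() & labor_avail.keys():
        weeks = part_avail[region] & labor_avail[region]
        if weeks:
            slots[region] = sorted(weeks)
    return slots

parts = {"EMEA": {24, 25, 26}, "APAC": {25}}
labor = {"EMEA": {25, 26, 27}, "AMER": {24}}
print(schedulable_slots(parts, labor))  # {'EMEA': [25, 26]}
```

With siloed systems, each side of this intersection lives in a different application and the mismatch only surfaces at install time; on one platform the gap is visible before the order is placed.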

Gardner: Drew, as I listen to you describe where this is going, it dovetails with what we hear about digital transformation of businesses. You’re talking not just about goods and services, you are talking about contingent labor, about all the elements that come together from modern business processes, and they are definitely distributed with a lifecycle of their own. Managing all that is the key.

Now that we have many different moving parts and the technology to evaluate and manage them, how does holistic spend management elevate what used to be a series of back-office functions into a digital business transformation value?

Hofler: Intelligent spend management makes it possible for all of the insights that come from these various data points -- by applying algorithms, machine learning (ML), and artificial intelligence (AI) -- to look at the data holistically. It can then pull out patterns of spend across the entire company, across every category, and it allows the procurement function to be at the nexus of those insights.

If you think of all the spend in a company, it’s a huge part of their business when you combine direct, indirect, services, and travel and expenses. You are now able to apply those insights to where there are the price fluctuations, peaks and valleys in purchasing, versus what the suppliers and their suppliers can provide at a certain time.


It’s an almost infinite amount of data and insights that you can gain. The procurement function is being asked to bring to the table not just the back-office operational efficiency but the insights that feed into a business strategy and the business direction. It’s hard to do that if you have disconnected or cobbled-together systems and a siloed approach to data and processes. It’s very difficult to see those patterns and make those connections.

But when you have a common platform such as SAP provides, then you’re able to get your arms around the entire process. The Chief Procurement Officer (CPO) can bring to the table quite a lot of data and the insights and that show the company what they need to know in order to make the best decisions.

Gardner: Drew, what are the benefits you get along the way? Are there short-, medium-, and long-term benefits? Were there any findings in the IDC survey that alluded to those various success measurements?

Common platform benefits 

Hofler: We found that 80 percent of today’s spend managers’ time is spent on low-level tasks like invoice matching, purchase requisitioning, and vendor management. That came out of the survey. With the tying together of the systems and the intelligence technologies infused throughout, those things can be automated. In some cases, they can become autonomous, freeing up time for more valuable pursuits for the employees.

New technologies can also help, like APIs for ecosystem solutions. This is one of the great short-term benefits if you are on an intelligent spend management platform such as SAP’s. You become part of a network of partners and suppliers. You can tap into that ecosystem of partners for solutions aligned with core spend management functions.

Celonis, for example, looks at all of your workflows across the entire process because they are all integrated. It can see them holistically, show duplication, and show how to make those processes far more efficient. That’s something that can be accessed very quickly.

Longer-term, companies gain insights into the ebbs and flows of spending, cost, and risk. They can begin to make better decisions on who to buy from based on many criteria, and they start to understand the risks involved across entire supply chains.

One of the great things about having an intelligent spend platform is the ability to tie in through that network to other datasets, to other providers, who can provide risk information on your suppliers and on their suppliers. It can see deep into the supply chain and provide risk analytics to allow you to manage that in a much better way. That’s becoming a big deal today because there is so much information, and social media allows information to pass along so quickly.

When a company has a problem with their supply chain -- whether that’s reputational or something that their suppliers’ suppliers are doing -- that will damage their brand. If there is a disruption in services, that comes out very quickly and can very quickly hit the bottom line of a company. And so the ability to moderate those risks, to understand them better, and to put strategies together longer term and short-term makes a huge difference. An intelligent spend platform allows that to happen.

Gardner: Right, and you can also start to develop new business models or see where you can build out the top line and business development. It makes procurement not just about optimization, but with intelligence to see where future business opportunities lie.

Comprehend, comply, control 

Hofler: That’s right, you absolutely can. Again, it’s all about finding patterns, understanding what’s happening, and getting deeper understanding. We have so much data now. We have been talking about this forever, the amount of data that keeps piling up. But having an ability to see that holistically, have that data harmonized, and the technological capability to dive into the details and patterns of that data is really important.

And that data network has, in our case, more than 20 years’ worth of spend data, with more than $13 trillion in lifetime spend and more than $3 trillion a year of transactions moving through our network -- the Ariba Network. So not only do companies have the technologies that we provide in our intelligent spend management platform to understand their own data, but there is also the capability to take advantage of rationalized data across multiple industries, benchmarks, and other things, too, that affect them outside of their four walls.

So that’s a big part of what’s happening right now. If you don’t have access into those kinds of insights, you are operating in the dark these days.

Gardner: Are there any examples that illustrate some of the major findings from the IDC survey and show the benefits of what you have described?

Hofler: Danfoss, a Danish company, is a customer of ours that produces heating and cooling drives, and power solutions; they are a large company. They needed to standardize disparate enterprise resource planning (ERP) systems across 72 factories and implement services for indirect spend control and travel across 100 countries. So they have a very large challenge where there is a very high probability for data to become disconnected and broken down.

That’s really the key. They were looking for the ability to see one version of truth across all the businesses, and one of the things that really drives that need is the need for compliance. If you look at the IDC survey findings, close to half of executive officers are particularly concerned with compliance and auditing in spend management policy. Why? Because it allows both more control and deeper trust in budgeting and forecasting, but also because if there are quality issues they can make sure they are getting the right parts from the right suppliers.

The capability for Danfoss to pull all of that together into a single version of truth -- as well as with their travel and expenses -- gives them the ability to make sure that they are complying with what they need to, holistically across the business without it being spotty. So that was one of the key examples.

Another one of our customers, Swisscom, a telecommunications company in Switzerland, a large company also, needed intelligent spend management to manage their indirect spend and their contingent workforce.

They have 16,000 contingent workers, with 23,000 emails and a couple of thousand phone calls from suppliers on a regular basis. Within that supply chain they needed to determine supplier selection and rates on receipt of purchase requisitions. There were questions about supplier suitability in the subsequent procurement stages. They wanted a proactive, self-service approach to procurement to achieve visibility into that, as well as into its suppliers and the external labor that often use and install the things that they procure.

So, by moving from a disconnected system to the SAP intelligent spend offering, they were able to gain cohesive information and a clear view of their processes, which includes those around consumer, supplier, procurement, and end user services. They said that using this user-friendly platform allowed them to quickly reach compliance and usability by all of their employees across the company. It made it very easy for them to do that. They simplified the user experience.

And they were able to link suppliers and catalogs very closely to achieve a vision of total intelligent spend management using SAP Fieldglass and SAP Ariba. They said they transformed procurement from a reactive processing role to one of proactively controlling and guiding, thanks to uniform and transparent data, which is really fundamental to intelligent spend.

Gardner: Before we close out, let’s look to the future. It sounds like you can do so much with what’s available now, but we are not standing still in this business. What comes next technologically, and how does that combine with process efficiencies and people power -- giving people more intelligence to work with? What are we looking for next when it comes to how to further extend the value around intelligent spend management?

Harmony and integration ahead 

Hofler: Extending the value into the future begins with the harmonization of data and the integration of processes seamlessly. It’s process-driven, and it doesn’t really matter what’s below the surface in terms of the technology because it’s all integrated and applied to a process seamlessly and holistically.

What’s coming in the future on top of that, as companies start to take advantage of this, is that more intelligent technologies are being infused into different parts of the process. For example, chatbots and the ability for users to interact with the system in a natural language way. Automation of processes is another example, with the capability to turn some processes into being fully autonomous, where the decisions are based on the learning of the machines.

The user interaction can then become one of oversight and exception management, where the autonomous processes take over and manage when everything fits inside of the learned parameters. It then brings in the human elements to manage and change the parameters and to manage exceptions and the things that fall outside of that.
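That split -- autonomous handling inside learned parameters, human review outside them -- is the essence of exception management. The following is a hypothetical sketch; the fixed amount bounds stand in for parameters that would, in practice, be learned and periodically re-tuned:

```python
# Hypothetical sketch of exception management in an autonomous
# process: items inside the learned parameters are handled
# automatically, while outliers are queued for a human. The fixed
# bounds are illustrative stand-ins for learned parameters.

def route_invoices(invoices, lower, upper):
    """Split invoices into auto-approved and human-review queues."""
    auto, exceptions = [], []
    for inv in invoices:
        if lower <= inv["amount"] <= upper:
            auto.append(inv)        # inside learned parameters
        else:
            exceptions.append(inv)  # outlier: needs a human
    return auto, exceptions

invoices = [{"id": "A", "amount": 120.0}, {"id": "B", "amount": 9800.0}]
auto, exceptions = route_invoices(invoices, lower=10.0, upper=5000.0)
print([i["id"] for i in auto])        # ['A']
print([i["id"] for i in exceptions])  # ['B']
```

The human role then becomes exactly what the discussion describes: managing the exception queue and adjusting the parameters, rather than touching every transaction.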


There is never going to be removal of the human, but the human is now able with these technologies to become far more strategic, to focus more on analytics and managing the issues that need management and not on repetitive processes that can be handled by the machine. When you have that connected across your entire processes, that becomes even more efficient and allows for more analysis. So that’s where it’s going.

Plus, we’re adding more ecosystem partners. When you have a networked ecosystem on intelligent spend, that allows for very easy connections to providers who can augment the core intelligent spend functions with data. For example, for attaining global tax, compliance, risk, and VAT rules through partners like American Express and Thomson Reuters. All of these things can be added. You will see that ecosystem growing to continue to add exponential value to being a part of an intelligent spend management platform.

Gardner: There are upcoming opportunities for people to dig into this and understand it and find the ways that it makes sense for them to implement, because it varies from company to company. What are some ways that people can learn details?

Hofler: There is a lot coming up. Of course, you can always go to ariba.com, fieldglass.com or sap.com and find out about our intelligent spend management offerings. We will be having our SAP Ariba Live conference in Las Vegas in March, and so tons and tons of content there, and lots of opportunity to interact with other folks who are in the same situation and implementing these similar things. You can learn a lot.

We are also doing a webinar with IDC to dig into the details of the survey. You can find information about that on ariba.com, and certainly if you are listening to this after the fact, you can hear the recording of that on ariba.com and download the report.

Gardner: I’m afraid we’ll have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on intelligent spend management through the findings of a recent IDC survey. And we have learned how the payoffs of gaining such a full and data-rich view of spend patterns across services, hiring, and goods include reduced risk, new business models, and better strategic decision-making.

So a big thank you to our guest, Drew Hofler, Vice President of Portfolio Marketing at SAP Ariba and SAP Fieldglass. Thanks so much, Drew.

Hofler: Thanks, Dana. I appreciate it.


Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Modern Digital Business Innovation Discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of SAP Ariba-sponsored BriefingsDirect discussions. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on how a data-rich view of spend patterns across corporate services, hiring, and goods reduces risk, spurs new business models, and helps develop better strategic decisions. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.

You may also be interested in: