Wednesday, July 03, 2019

Financial Stability, a Critical Factor for Choosing a Business Partner, is Now Easier to Assess


Transcript of a discussion on new ways companies gain improved visibility, analytics, and predictive indicators to assess financial viability of partners across global supply chains.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Our next digital business risk remediation discussion explores new ways companies can gain improved visibility, analytics, and predictive indicators to better assess the financial viability of partners and global supply chains.

Businesses are now heavily relying upon their trading partners across their supply chains -- and no business can afford to be dependent on suppliers that pose risks due to poor financial health.

We will now examine new tools and methods that create a financial health rating system to determine the probability of bankruptcy, default, or disruption for both public and private companies -- as many as 36 months in advance.

To learn more about the growing sophistication in gaining insights into supply chain risk of a financial nature, I am pleased to welcome Eric Evans, Managing Director of Business Development at RapidRatings in New York.

Eric Evans: Thanks, Dana. I appreciate being on the podcast.


Gardner: We are also here with Kristen Jordeth, Go-to-Market Director for Supplier Management Solutions, North America at SAP Ariba. Welcome, Kristen.

Kristen Jordeth: Hi, and thank you very much.

Gardner: Eric, how do the technologies and processes available now provide a step-change in managing supplier risk, particularly financial risk?

Evans: Platform-to-platform integrations enabled by application programming interfaces (APIs), which we have launched over the past few years, allow us to partner with SAP Ariba Supplier Risk. It has become a nice way for our clients to combine actionable data with their procurement workflows to better manage suppliers end to end -- from sourcing to onboarding to continuous monitoring.
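To make the mechanics concrete, here is a minimal sketch of what such a platform-to-platform pull can look like. This is an illustration only -- the endpoint, field names, and response shape are hypothetical, not the actual RapidRatings or SAP Ariba API:

```python
import requests

# Hypothetical endpoint and fields -- for illustration only; the real
# RapidRatings/SAP Ariba integration defines its own API contract.
API_BASE = "https://api.example-ratings.com/v1"
API_KEY = "your-license-key"  # the key a client applies in SAP Ariba

def get_financial_health(supplier_id):
    """Pull a supplier's financial health scores (illustrative)."""
    resp = requests.get(
        f"{API_BASE}/suppliers/{supplier_id}/scores",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., {"fhr": 62, "core_health": 71}
```

The point is that the risk platform calls the ratings provider directly, so scores land inside the procurement workflow instead of in a separate report.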

Gardner: The old adage of “garbage in, garbage out” still applies to the quality and availability of the data. What’s new about access to better data, even in the private sector?

Dig deep into risk factors

Evans: We go directly to the source, the suppliers our customers work with. They introduce us to those suppliers and we get the private company financial data, right from those companies. It’s a quantitative input, and then we do a deeper “CAT scan,” if you will, on the financials, using that data together with our predictive scoring.

Gardner: Kristen, procurement and supply chain integrity trends have been maturing over the past 10 years. How are you able to focus now on more types of risk? It seems we are getting better and deeper at preventing unknown unknowns.

Jordeth: Exactly, and what we are seeing is customers managing risk from all aspects of the business. The most important thing is to bring it all together through technology.

Within our platform, we enable a Controls Framework that identifies key areas of risk that need to be addressed for a specific type of engagement. For example, do they need to pull a financial rating? Do they need to do a background check? We use the technology to manage the controls across all of the different aspects of risk in one system.

Gardner: And because many companies are reliant on real-time logistics and supplier services, any disruption can be catastrophic.

Jordeth: Absolutely. We need to make sure that the information gets to the system as quickly as it's available, which is why the API connection to RapidRatings is extremely important to our customers. On top of that, we also have proactive incident tracking, which complements the scores.

If you see a medium-risk business from a financial perspective, you can look into the incidents to see if the company is under investigation, or if there are other things going on, such as layoffs across departments.

It's fantastic to have it all in one place, with one view. You can then slice and dice the data and roll it up into scores. It's very helpful for our customers.

Gardner: And this is a team sport, with an ecosystem of partners, because there is such industry specialization. Eric, how important is it being in an ecosystem with other specialists examining other kinds of risk?

Evans: It’s really important. We listen to our customers and prospects. It’s about the larger picture of bringing data into an end-to-end procurement and supplier risk management process.

We feel really good about being part of SAP PartnerEdge and an app extension partner to SAP Ariba. It’s exciting to see our data and the integration for clients.

Gardner: Rapid Ratings International, Inc. is the creator of the proprietary Financial Health Rating (FHR), also known as RapidRatings. What led up to the solution? Why didn’t it exist 30 years ago?

Rate the risk over time

Evans: The company was founded by someone with a background in econometrics and modeling. We have 24 industry models that drive the analysis. It's that kind of deep, precise, and accurate modeling -- plus a historical database spanning more than 30 years of data. When you combine those, the output is much more accurate and predictive -- it's truly forward-looking data.

Gardner: You provide a 0 to 100 score. Is that like a credit rating for an individual? How does that score work in being mindful of potential risk?

Evans: The FHR is a short-term score, from 0 to 100, that estimates the probability of default over the next 12 months. Then there is a Core Health Score, which looks about 24 to 36 months out at operating efficiency and other indicators of how well a company is managing and operating the business.

When you combine the two, or look at them individually, you can identify companies that are maybe weak short-term but look fine long-term, or vice versa. Even a company that doesn't look good long-term may carry less short-term risk because it has cash on hand. That's happening in the marketplace these days with a lot of the recent initial public offerings (IPOs), such as Pinterest or Lyft. They have a medium-risk FHR because they have cash, but their long-term operating efficiency needs to improve because they are not yet profitable.
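As a toy illustration of how the two scores complement each other, here is a sketch in Python. The threshold and the labels are my assumptions for illustration, not RapidRatings' actual methodology:

```python
# Illustrative only: the threshold and labels are assumptions,
# not RapidRatings' actual scoring bands.
def risk_profile(fhr, core_health, threshold=50):
    """Combine a short-term FHR with a longer-term Core Health Score."""
    short_ok = fhr >= threshold         # outlook for the next ~12 months
    long_ok = core_health >= threshold  # outlook ~24 to 36 months out
    if short_ok and long_ok:
        return "low risk"
    if short_ok and not long_ok:
        return "cash-rich now, operationally weak later"  # e.g., pre-profit IPOs
    if not short_ok and long_ok:
        return "short-term stress, sound fundamentals"
    return "high risk"
```

A cash-rich but not-yet-profitable IPO like those mentioned above would land in the second branch.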

Gardner: How are you able to determine risk going 36 months out when you’re dealing mostly with short-term data?

Evans: It's because of the historical data and the discrete modeling underneath -- that's what makes the analysis precise for the industry each company is in. Having 24 unique industry models is very different from taking all of the companies out there and stuffing them into a plain-vanilla industry template. A software company is very different from a pharmaceutical company, which is very different from a manufacturer.

Having that industry depth -- and the historical data behind it -- is what drives the go-forward assessments.

Gardner: And this is global in nature?

Evans: Absolutely. We have gone out to more than 130 countries to get data from those sources, those suppliers. It is a global data set that we have built on a one-to-one basis for our clients.

Gardner: Kristen, how does somebody in the Ariba orbit take advantage of this? How is this consumed?

Jordeth: As with everything at SAP Ariba, we want to simplify how our customers get access to information. The PartnerEdge program works with our third parties and partners to create an API so that all our customers need to do is get a license key from RapidRatings and apply it in the system.

The infrastructure and connection are already there. Our deployment teams don't have to do anything beyond adding that user license and the key within the system. So it's low touch, and easy to access the data.

Gardner: For those suppliers that want to be considered good partners with low financial risk, do they have access to this information? Can they work to boost up their scores?

To reduce risk, discuss data details 

Evans: Our clients own the subscription and the license, and they can share the data with their suppliers. The suppliers can also foster a dialogue with our tool, called the Financial Dialogue, and ask questions around areas of concern. That can be used to foster a better relationship and build transparency -- it doesn't have to be a negative conversation; it can be a positive one.

A buyer may want to invest in that supplier, extend payment terms or credit, work with them on service-level agreements (SLAs), or send in people to help manage. So it can be a good way to build a deeper relationship with that supplier and use it as a stronger foundation.

Gardner: Kristen, when I put myself in the position of a buyer, I need to factor in lots of other issues, such as sustainability, compliance, and availability. How do you see the future unfolding for a holistic approach to risk mitigation -- one that takes advantage of not only financial risk assessments but the whole compendium of other risks? It's not a simple, easy task.

Jordeth: When you look at financial data, you need to understand the whole story behind it. Why does that financial data look the way it does today? What I love about RapidRatings is that their financial scores are about more than the present -- they speak to the future health of the company.

But in our SAP Ariba solution, we provide insights on other factors such as sustainability, information security, and are they funding things such as women’s rights in Third World countries? Once you start looking at the proactive awareness of what’s going on -- and all the good and the bad together -- you can weigh the suppliers in a total sense.


Their financials may not be up to par, but they are not high risk, because they are funding women's rights or doing a lot of things with youth in America. To me, that may be more important. So I might put them on a tracker to review their financials more often, but I am not going to stop doing business with them, because one of my goals is sustainability. That holistic picture helps tell the true story -- a story that connects to our customers, not just the story we want them to have.

Gardner: Empirical data can then lead to good judgment that takes all the other variables into full account. How does this now get to the SAP Ariba installed base? When is the general availability?

Customize categories, increase confidence 

Jordeth: It’s available now. Our supplier risk module is the entryway for all of these APIs, and within that module we connect to the companies that provide financial data, compliance screening, and information on forced labor, among others. We are heavily expanding in this area for categories of risk with our partners, so it’s a fantastic approach.

Within the supplier risk module, customers have the capability to not only access the information but also create their own custom scores on that data. Because we are a technology organization, we give them the keys: an administrator can go in and tailor the scoring the way they want. It is very customizable.
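As a sketch of what such a custom roll-up might look like, consider the following. The category names and weights are hypothetical examples an administrator might configure, not SAP Ariba's actual scoring model:

```python
# Hypothetical categories and weights -- an administrator would tune these.
WEIGHTS = {"financial": 0.4, "compliance": 0.3,
           "sustainability": 0.2, "incidents": 0.1}

def custom_risk_score(category_scores):
    """Weighted roll-up of per-category scores on a 0-100 scale."""
    return sum(WEIGHTS[c] * category_scores.get(c, 0.0) for c in WEIGHTS)

# Example: a supplier strong on sustainability but weaker financially.
print(custom_risk_score({"financial": 45, "compliance": 80,
                         "sustainability": 95, "incidents": 70}))  # 68.0
```

Raising the sustainability weight, as in the example described earlier, is just a change to that table.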

It’s all in our SAP Ariba Supplier Risk solution, and we recently released the connection to RapidRatings.

Evans: Our logo is right in there -- built in, under the hood, and visible. In terms of getting it enabled, there are no professional services and no implementation wait time. Once the data set is built out on our end -- for a new client, that happens through our implementation team -- we simply give the API key credentials to the client. They enable it in SAP Ariba Supplier Risk and can instantly pull up the scores. So there is no wait time and no further development needed to get at the data.

Jordeth: That helps us with security, too, because everybody wants to ensure that any data going in and out of a system is secure, given all of the compliance concerns we have. Our partner team ensures a secure connection back and forth between their data system and our technology, which is very important for customers.

Gardner: Are there any concrete examples -- maybe you can name them, maybe you can't -- instances where your rating system has proven prescient? How does this work in the real world?

Evans: GE Healthcare did a joint webinar with our CEO last year, explained their program, and showed how they were able to de-risk their supply base using RapidRatings. They reduced the number of financially unhealthy companies in their supply base and put mitigation plans and corrective actions in place. So it was an across-the-board win.

Oftentimes, it’s not about the return on investment (ROI) on the platform, but the fact that companies were thwarting a disruption. An event did not happen because we were able to address it before it happened.

On the flip side, you can see how resilient companies are regardless of all the disruptions out there. They can use the financial health scores to observe the capability of a company to be resilient and bounce back from a cyber breach, a regulatory issue, or maybe a sustainability issue.

By looking at all of these risks inside of SAP Ariba Supplier Risk, they may want to order or review an FHR for a company they hadn't considered before, based on other risks they see, such as operational risks. So that's another way to tie it in.

Another interesting example involves a large international retailer. A supplier was flagged as high risk -- it had just filed for bankruptcy -- which alerted the buyer. The buyer had already signed a contract and had the product on the shelf, so the product had to be re-sourced from a new supplier. They mitigated the risk, but it took quick action and some scrambling to secure another product. Still, by doing so they avoided brand reputation damage. They hadn't looked at that company before -- it was a new supplier -- and the alert caught it. So the lesson is to run the ratings not just at the time of contract, but also when you are going to market.

Identify related risks 

Gardner: It also seems logical that if a company is suffering on the financial side of doing business, it might be an indicator that they're not well-managed in general -- not just a cause, but an effect. Are there other areas -- call them adjacencies -- where risks to quality, delivery times, or logistics can be inferred from financial indicators?

Evans: It's a really good point. What's interesting is that we took data our clients had around timeliness, quality, performance, and delivery, and overlaid it with the financial data on those suppliers. The companies that were weak financially were more than two times as likely to ship a defective product, and more than 2.5 times as likely to ship the wrong product or ship late.

The whole just-in-time shipping or delivery value went out the window. To your point, it can be construed that companies -- when they are stressed financially -- may be cutting corners, with things getting a little shoddy. They may not have replaced someone. Maybe there are infrastructure investments that should have been made but weren't. So, all of those things have a reverberating effect in other operational risk areas.

Gardner: Kristen, now that we know more data is good, and that you have more services like those from RapidRatings, how will a big platform and network like SAP Ariba use machine learning (ML) and artificial intelligence (AI) to further improve risk mitigation?

Jordeth: The opportunity exists for this to impact not only the assessment of a supplier but the full source-to-pay process, because it is embedded in the full SAP Ariba suite. So even though you access it through risk, it's visible when you're sourcing, when you're contracting, and when you're paying. That direct connection is very important.

We want our customers to have it all. I don't cringe when they ask for it all, because they should have it all. It's just a matter of visualizing it in a manner that makes sense and is clear to them.

Gardner: And specifically on your set of solutions, Eric, where do you see things going in the next couple years? How can the technology get even better? How can the risk be reduced more?

Evans: We will be innovating products so our clients can bring more scope around their supply base -- not just the critical vendors but the longer tail of the supply base -- and look at scores across different segments of suppliers. That could include traversing sub-tiers, with third and fourth parties, particularly in the banking and manufacturing industries.

That, coupled with more intelligence, enhanced APIs, and data visualization, is what we are looking into, along with additional scoring capabilities.

Gardner: I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on new ways that companies can gain improved visibility, analytics, and predictive indicators to better assess the financial viability of their partners across their global supply chains.

And we have learned about new tools and methods that create a Financial Health Rating system to determine the probability of bankruptcy, default, or disruption for public and private companies.

So a big thank you to our guests, Eric Evans, Managing Director of Business Development at RapidRatings in New York. Thank you so much, Eric.

Evans: Thank you, I appreciate it.


Gardner: And we have also been here with Kristen Jordeth, Go-to-Market Director for Supplier Management Solutions, North America at SAP Ariba. Thank you.

Jordeth: Thank you.

Gardner: And a big thank you as well to our audience for joining us for this BriefingsDirect digital business risk remediation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of SAP Ariba-sponsored BriefingsDirect interviews. Thanks again for listening, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: SAP Ariba.

Transcript of a discussion on new ways companies gain improved visibility, analytics, and predictive indicators to assess financial viability of partners across global supply chains. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
 
You may also be interested in:

Using AI to Solve Data and IT Complexity -- And Better Enable AI


A discussion on how the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence to the rescue.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest in IT innovation.

Our next discussion focuses on why the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence (AI) to the rescue. Stay with us now as we learn how the latest AI innovations improve both data and services management across a cloud deployment continuum -- and in doing so set up an even more powerful way for businesses to exploit AI.

To learn how AI will help conquer complexity to allow for higher abstractions of benefits from across all sorts of data for better analysis, please join me in welcoming Rebecca Lewington, Senior Manager of Innovation Marketing at Hewlett Packard Enterprise (HPE). Welcome to BriefingsDirect, Rebecca.

Rebecca Lewington: Hi, Dana. It’s very nice to talk to you.


Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?

Lewington: Partly it is the sheer amount of data. IDC's Data Age Study predicts the global data sphere will be 175 zettabytes by 2025, which is a rather large number -- a zettabyte is a 1 followed by 21 zeros. But we have always been in an era of exploding data.

Yet, things are different. One, it’s not just the amount of data; it’s the number of sources the data comes from. We are adding in things like mobile devices, and we are connecting factories’ operational technologies to information technology (IT). There are more and more sources.

Also, the time we have to do something with that data is shrinking, to the point where we expect everything to be real-time. Otherwise you are going to make a bad decision -- an autonomous car, for example, might do something bad -- or you are going to miss a market or competitive intelligence opportunity.

So it’s not just the amount of data -- but what you need to do with it that is challenging.

Gardner: We are also at a time when Al and machine learning (ML) technologies have matured. We can begin to turn them toward the data issue to better exploit the data. What is new and interesting about AI and ML that make them more applicable for this data complexity issue?

Data gets smarter with AI

Lewington: A lot of the key algorithms for AI were actually invented back in the 1950s, but at that time computers were hopelessly underpowered relative to what we have today, so it wasn't possible to harness those algorithms.

For example, you can train a deep-learning neural net to recognize pictures of kittens. To do that, you need to run millions of images to train a working model you can deploy. That’s a huge, computationally intensive task that only became practical a few years ago. But now that we have hit that inflection point, things are just taking off.
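For a sense of what that training loop looks like in practice, here is a heavily condensed PyTorch sketch. The dataset path is a placeholder, and a real run would use millions of images and many more epochs:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Placeholder path -- assumes folders of labeled images, e.g. cat/ and not_cat/.
data = datasets.ImageFolder(
    "images/",
    transform=transforms.Compose(
        [transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)  # a small CNN with two output classes
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):  # real training runs far longer on far more data
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Every pass over those images is millions of multiply-accumulate operations, which is exactly the workload that only became affordable once modern accelerators arrived.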

Gardner: We can begin to use machines to better manage data that we can then apply to machines. Does that change the definition of AI?

Lewington: The definition of AI is tricky. It's malleable, depending on who you talk to. For some people, it's anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning.

One useful definition is that AI is what you use when you know what the answer looks like, but not how to get there.

Traditional analytics effectively does at scale what you could do with pencil and paper. You could write the equations to decide where your data should live, depending on how quickly you need to access it.

But with AI, it's like the kittens example. You know what the answer looks like; it's trivial for you to look at the photograph and say, "That is a cat in the picture." But it's really, really difficult to write the equations to do it. Now, though, it has become relatively easy to train a black-box model to do that job for you.

Gardner: Now that we are able to train the black box, how can we apply that in a practical way to the business problem that we discussed at the outset? What is it about AI now that helps better manage data? What's changed that gives us better data because we are using AI?

Lewington: It’s a circular thing. The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Now, there are many ways you can apply that. You can apply it to the trivial case of the cat we just talked about. You can apply it to helping a surgeon review many more MRIs, for example, by allowing him to focus on the few that are borderline, and to do the mundane stuff for him.

But, one of the other things you can do with it is use it to manipulate the data itself. So we are using AI to make the data better -- to make AI better.

Gardner: Not only is it circular, and potentially highly reinforcing, but when we apply this to operations in IT -- particularly complexity in hybrid cloud, multicloud, and hybrid IT -- we get an additional benefit. You can make the IT systems more powerful when it comes to the application of that circular capability -- of making better AI and better data management.

AI scales data upward and outward

Lewington: Oh, absolutely. I think the key word here is scale. When you think about data -- and all of the places it can be, all the formats it can be in -- you could do it yourself. If you want to do a particular task, you could do what has traditionally been done. You can say, “Well, I need to import the data from here to here and to spin up these clusters and install these applications.” Those are all things you could do manually, and you can do them for one-off things.

But once you get to a certain scale, you need to do them hundreds of times, thousands of times, even millions of times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you a way to augment the humans you do have, to take the mundane stuff away, so they can get straight to what they want to do, which is coming up with an answer instead of spending weeks and months preparing to start to work out the answer.

Gardner: So AI directed at IT, what some people call AIOps, could be an accelerant to this circular, advantageous relationship between AI and data? And is that part of the innovation and research work at HPE?

Lewington: That’s true, absolutely. The mission of Hewlett Packard Labs in this space is to assist the rest of the company to create more powerful, more flexible, more secure, and more efficient computing and data architectures. And for us in Labs, this tends to be a fairly specific series of research projects that feed into the bigger picture.


For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.

And, as we talked about earlier, there is the shift to the edge. As we make more and more decisions -- and gain more insights there, to where the data is created -- there is a growing need to deploy AI at the edge. That means you need a data strategy to get the data in the right place together with the AI algorithm, at the edge. That’s because there often isn’t time to move that data into the cloud before making a decision and waiting for the required action to return.

Once you begin doing that, once you start moving from a few clouds to thousands and millions of endpoints, how do you handle multiple deployments? How do you maintain security and data integrity across all of those devices? As researchers, we aim to answer exactly those questions.

And, further out, we are looking to move the learning phase itself to the edge, to do what we call swarm learning, where devices learn from their environment and each other, using a distributed model that doesn't rely on a central cloud at all.
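As a rough sketch of that distributed idea -- a heavy simplification for illustration, not HPE's actual swarm learning protocol -- each device trains on its local data and then peers average their model parameters directly, with no central server:

```python
def average_parameters(peer_weights):
    """Average model parameters gathered from peer devices (toy illustration)."""
    n = len(peer_weights)
    return [sum(w[i] for w in peer_weights) / n
            for i in range(len(peer_weights[0]))]

# Each device trains locally, exchanges weights with its peers,
# then adopts the average as its new starting point.
local = [[0.2, 0.5], [0.4, 0.3], [0.3, 0.4]]  # three edge devices' weights
print(average_parameters(local))  # -> approximately [0.3, 0.4]
```

The data never leaves the edge; only the much smaller model updates travel between devices.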

Gardner: Rebecca, given your title is Innovation Marketing Lead, is there something about the very nature of innovation that you have come to learn personally that’s different than what you expected? How has innovation itself changed in the past several years?

Innovation takes time and space 

Lewington: I began my career as a mechanical engineer. For many years, I was offended by the term innovation process, because that's not how innovation works. You give people the space and you give them the time, and ideas appear organically. You can't have a process to have ideas. You can have a process to put those ideas into reality, to weed out the ones that aren't going to succeed, and to promote the ones that work.

But the term innovation process, to me, is an oxymoron. And that's the beautiful thing about Hewlett Packard Labs. It was set up to give people the space where they can work on things that just seem like a good idea when they pop up in their heads. They can work on those ideas and figure out which ones will be of use to the broader organization -- and then it's full steam ahead.

Gardner: It seems to me that the relationship between infrastructure and AI has changed. It wasn’t that long ago when we thought of business intelligence (BI) as an application -- above the infrastructure. But the way you are describing the requirements of management in an edge environment -- of being able to harness complexity across multiple clouds and the edge -- this is much more of a function of the capability of the infrastructure, too. Is that how you are seeing it, that only a supplier that’s deep in its infrastructure roots can solve these problems? This is not a bolt-on benefit.

Lewington: I wouldn’t say it’s impossible as a bolt-on; it’s impossible to do efficiently and securely as a bolt-on. One of the problems with AI is we are going to use a black box; you don’t know how it works. There were a number of news stories recently about AIs becoming corrupted, biased, and even racist, for example. Those kinds of problems are going to become more common.

And so you need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are working only on the very top layers of the software, it's going to be very difficult to attest that the integrity of what's underneath hasn't been violated.

If you are someone like HPE, which has its fingers in lots of pies, either directly or through our partners, it’s easier to make a more efficient solution.

Gardner: Is it fair to say that AI should be a new core competency -- not only for data scientists and IT operators, but for pretty much anybody in business? It seems to me this is an essential core competency across the board.

Lewington: I think that's true. Think of AI as another layer of tools that, as we go forward, becomes increasingly sophisticated. We will add more and more tools to our AI toolbox. And this is one set of tools that you just cannot afford not to have.

Gardner: Rebecca, it seems to me that there is virtually nothing within an enterprise that won't be impacted in one way or another by AI.

Lewington: I think that’s true. Anywhere in our lives where there is an equation, there could be AI. There is so much data coming from so many sources. Many things are now overwhelmed by the amount of data, even if it’s just as mundane as deciding what to read in the morning or what route to take to work, let alone how to manage my enterprise IT infrastructure. All things that are rule-based can be made more powerful, more flexible, and more responsive using AI.

Gardner: Returning to the circular nature of using AI to make more data available for AI -- and recognizing that the IT infrastructure is a big part of that -- what are doing in your research and development to make data services available and secure? Is there a relationship between things like HPE OneView and HPE OneSphere and AI when it comes to efficiency and security at scale?

Let the system deal with IT 

Lewington: Those tools historically have been rules-based. We know that if a storage disk gets to a certain percentage full, we need to spin up another disk -- those kinds of things. But to scale flexibly, at some point that rules-based approach becomes unworkable. You want the system to look after itself, to identify its own problems and deal with them.

Including AI techniques in things like HPE InfoSight, Aruba ClearPass, and network user behavior analytics software on the HPE Aruba side makes those tools more powerful and more efficient.

You can think of AI here as another class of analytics tools. It’s not magic, it’s just a different and better way of doing IT analytics. The AI lets you harness more difficult datasets, more complicated datasets, and more distributed datasets.
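To make the contrast concrete, here is a toy sketch -- my illustration, not how HPE InfoSight is actually implemented -- of a fixed rule versus a simple learned baseline:

```python
import statistics

def rule_based_alert(disk_used_pct):
    # Classic rules-based check: a hard-coded threshold.
    return disk_used_pct > 90.0

def learned_alert(history, latest, sigmas=3.0):
    # Toy "learned" check: flag readings that fall far outside the
    # baseline observed on this particular system.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > sigmas * stdev
```

The rule fires at the same point everywhere; the learned baseline adapts to each system's normal behavior, which is the property that lets these tools scale across messy, distributed datasets.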


Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI -- rather than just analytics alone -- to do my job better?

Lewington: If you are that person, you don’t really want to think about the AI. You don’t want the AI to intrude upon your consciousness. You just want the tools to do your job.

For example, I may have 1,000 people starting a factory in Azerbaijan, or somewhere, and I need to provision for all of that. I want to be able to put on my headset and say, “Hey, computer, set up all the stuff I need in Azerbaijan.” You don’t want to think about what’s under the hood. Our job is to make those tools invisible and powerful.

Composable, invisible, and insightful 

Gardner: That sounds a lot like composability. Is that another tangent that HPE is working on that aligns well with AI?

Lewington: It would be difficult to make AI part of the fabric of an enterprise without composability -- and without extending composability into more dimensions. It's not just about being able to define the amount of storage, compute, and networking with a line of code; it's about being able to define the amount of memory, where the data is, where the data should be, and what format the data should be in. All of those dimensions -- from the edge to the cloud -- need to be composable.
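In infrastructure-as-code terms, that idea looks something like the following sketch. The resource names and fields are illustrative, not a specific HPE composable API:

```python
# Illustrative declarative spec -- field names are hypothetical.
desired_state = {
    "compute": {"cores": 64, "memory_gb": 512},
    "storage": {"capacity_tb": 20, "tier": "nvme"},
    "network": {"bandwidth_gbps": 25},
    "data": {"location": "edge-site-07", "format": "parquet"},
}

def compose(spec):
    """Hand the declared state to a composability controller (stub)."""
    print(f"Requesting composition: {spec}")

compose(desired_state)
```

The point is that memory, data placement, and data format become first-class, declarable dimensions alongside compute, storage, and networking.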
You want everything to work behind the scenes for you in the best way with the quickest results, with the least energy, and in the most cost-effective way possible. That's what we want to achieve -- invisible infrastructure.

Gardner: We have been speaking at a fairly abstract level, but let’s look to some examples to illustrate what we’re getting at when we think about such composability sophistication.

Do you have any concrete examples or use cases within HPE that illustrate the business practicality of what we’ve been talking about?

Lewington: Yes, we have helped a tremendous number of customers either get started with AI in their operations or move from pilot to volume use. A couple of examples stand out. One particular manufacturing company makes electronic components. They needed to improve the yields on their production lines, and they didn't know how to attack the problem. We were able to partner with them to use vision systems and photographs from their production tools to identify defects that could only be picked up by humans if they had a whole lot of humans watching everything all of the time.

This gets back to the notion of augmenting human capabilities. Their machines produce terabytes of data every day, and it just gets thrown away. They don't know what to do with it.

We began running research projects with them using some very sophisticated techniques -- visual autoencoders -- that allow you, without a labeled training set, to characterize a production line that is performing well versus one that is on the verge of moving away from the sweet spot. Those techniques can fingerprint a good line and also identify when a line goes just slightly bad -- a point at which a human looking at the line would think it was working perfectly.
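Here is a minimal sketch of that autoencoder idea -- my simplification, not the actual research system. The model is trained only on sensor data from a known-good line, so a high reconstruction error on new readings means the line has drifted from its "fingerprint":

```python
import torch
import torch.nn as nn

# Toy autoencoder: 64 sensor readings in, compressed to 8, reconstructed back.
model = nn.Sequential(
    nn.Linear(64, 8), nn.ReLU(),  # encoder
    nn.Linear(8, 64),             # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

healthy = torch.randn(1000, 64)  # stand-in for readings from a known-good line
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(healthy), healthy)
    loss.backward()
    opt.step()

def anomaly_score(reading):
    """High reconstruction error => the line has drifted from its fingerprint."""
    with torch.no_grad():
        return nn.functional.mse_loss(model(reading), reading).item()
```

No labeled examples of "bad" are needed, which is what makes the technique practical when defects are rare.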

This takes the idea of predictive maintenance further, into what we call prescriptive maintenance, where we have a much more sophisticated view into what represents a good line and what represents a bad line. Those are a couple of examples from manufacturing that I think are relevant.

Gardner: If I am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology Officer (CTO), for example, and I’m looking at what HPE is doing -- perhaps at the HPE Discover conference -- where should I focus my attention if I want to become better at using AI, even if it’s invisible? How can I become more capable as an organization to enable AI to become a bigger part of what we do as a company?

The new company man is AI

Lewington: For CIOs, their most important customers these days may be developers and increasingly data scientists, who are basically developers working with training models as opposed to programs and code. They don’t want to have to think about where that data is coming from and what it’s running on. They just want to be able to experiment, to put together frameworks that turn data into insights.

It’s very much like the programming world, where we’ve gradually abstracted things from bare-metal, to virtual machines, to containers, and now to the emerging paradigm of serverless in some of the walled-garden public clouds. Now, you want to do the same thing for that data scientist, in an analogous way.


Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.

Now, using things like software from the recent HPE BlueData acquisition, we can make all of that go away. And so the CIO’s job becomes much simpler because they can provide their customers the tools they need to get their work done without them calling up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a cluster.”

That’s what a CIO should be looking for, a partner that can help them abstract complexity away, get it done at scale, and in a way that they can both afford and that takes the risk out. This is complicated, it’s daunting, and the field is changing so fast.

Gardner: So, in a nutshell, they need to look to the innovation that organizations like HPE are doing in order to then promulgate more innovation themselves within their own organization. It’s an interesting time.

Containers contend for the future 

Lewington: Yes, that’s very well put. Because it’s changing so fast they don’t just want a partner who has the stuff they need today, even if they don’t necessarily know what they need today. They want to know that the partner they are working with is working on what they are going to need five to 10 years down the line -- and thinking even further out. So I think that’s one of the things that we bring to the table that others can’t.

Gardner: Can you give us a hint as to what some of those innovations four or five years out might be? How should we avoid limiting our thinking when it comes to that circular relationship between AI, data, and innovation?

Lewington: It was worth coming to HPE Discover in June, because we talked about some exciting new things around many different options. The discussion about increasing automation abstractions is just going to accelerate.

For example, containers still have a fairly small penetration rate across enterprises -- about 10 percent adoption today -- because they are not the simplest thing in the world. But we are going to get to the point where using containers is no more complicated than bare metal is today, and that's really going to help simplify whole data pipelines.

Beyond that, the elephant in the room for AI is that model complexity is growing incredibly fast. The compute requirements are going up, something like 10 times faster than Moore’s Law, even as Moore’s Law is slowing down.

We are already seeing an AI compute gap between what we can achieve and what we need to achieve -- and it's not just compute, it's also energy. The world's energy supply can only grow slowly, but if we need exponentially more data, exponentially more compute, and exponentially more energy, that's just not going to be sustainable.

So we are also working on something called Emergent Computing, a super-energy-efficient architecture that moves data around wherever it needs to be -- or, rather than moving the data around, brings the compute to the data. That will help us close that gap.

And that includes some very exciting new accelerator technologies: special-purpose compute engines designed specifically for certain AI algorithms. We are using not only regular transistor logic but also analog computing, and even optical computing, to do some of these tasks hundreds of times more efficiently and with hundreds of times less energy. This is all very exciting stuff, for a little further out in the future.

Gardner: I’m afraid we’ll have to leave it there. We have been exploring how the rising tidal wave of data must be better managed and how new tools are emerging to bring AI to the rescue. And we’ve heard how new AI approaches and tools create a virtuous adoption pattern between better data and better analytics, and therefore better business outcomes.

So please join me in thanking our guest, Rebecca Lewington, Senior Manager for Innovation Marketing at HPE. Thank you so much, Rebecca.

Lewington: Thanks Dana, this was fun.


Gardner: And thank you as well to our audience for joining this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening, please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on how the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence to the rescue. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
