
Monday, December 07, 2020

How to Industrialize Data Science to Attain Mastery of Repeatable Intelligence Delivery

Transcript of a discussion on the latest methods, tools, and thinking around making data science an integral core function of any business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of Analytics Innovation podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data science advances and strategy.


Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated. To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line -- an industrialization, if you will, of end-to-end data exploitation.

Stay with us now as we explore the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

To learn more about the ways that data and analytics behave more like a factory -- and less like an Ivory Tower -- please join me now in welcoming Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. Welcome, Doug.


Doug Cackett:
Thank you so much, Dana.

Gardner: Doug, why is there a lingering gap -- and really a gaping gap -- between the amount of data available and the analytics that should be taking advantage of it?

Data’s potential at edge

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.


And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge. 

You will need to operationalize those decisions and move that capability to the edge where your business meets your customer. That’s the challenge we’re all looking for machine learning (ML) -- and the operationalization of all of those ML models into applications -- to make the difference. 

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications. 

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization -- I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.


So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget. 

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago -- where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that, because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time. 


As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time -- and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask what we could learn from what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

An awful lot of organizations that I deal with haven’t had much experience with such operationalization problems. They haven’t built that part of their assembly line yet. Automating supply chains and mistake-proofing things, what Toyota calls jidoka, are also really important. It’s a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks and mortar sense, went through back in the 1980s when they moved to business process reengineering, adopted kaizen principles, and did what Deming and the quality movement had done for the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company -- quite expensive and very specialized.

And I ended up almost throwing things at the TV because they were talking about the skills of this one individual. They only had one guy who could actually bend the metal to create the bonnet, the hood, of the car in the way that it needed to be done. And it took two or three years to train this guy, and I’m thinking, “Well, if you just automated the process, and the robot built it, you wouldn’t need to have that variability.” I mean, it’s just so annoying, right?

In the same way, with data science we’re talking about laying bricks -- not Michelangelo hammering out the figure of David. What I’m really trying to say is that a lot of the data science in our customers’ organizations is fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills -- that’s what we’re trying to achieve. Both [the basics and the finesse] are necessary and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. The companies we see fall into two types. The first type hasn’t yet gotten to the stage where they’re trying to operationalize more models. In other words, they don’t really understand what the problem is yet.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult. 

Many of these organizations have only just linked their data science to their big data instances or their data lakes. And they’re using it both for the workloads and to develop the models. And therein lies the problem. Often they get stuck with simple things like trying to have everyone use a uniform environment. All of your data scientists are both sharing the data and sharing the computer environment as well.


And data scientists can often be very destructive in what they’re doing. Maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you’re going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe new more compute power and that means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral being designed to give data scientists more of what they need, how they need it, and to help close the gap between the ad hoc approach and the right kind of assembly line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available and maybe not. So you’d select the appropriate-sized environment. You then might next go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code using Jupyter and Python.

In the next step, you want to find the right data, maybe through the data catalog. You locate the data that you want to use, and then you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. It should be provisioned straight away, right?
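
The self-service steps above -- define the project, size the environment, pick tools, pick data, press the button -- can be imagined as a single provisioning request. The sketch below is purely illustrative: every class, field, and method name is invented, and it does not reflect HPE Ezmeral’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ProvisionRequest:
    """One self-service request bundling the steps described above:
    a project, a sized environment, chosen tools, and data sources."""
    project: str
    cost_center: str
    gpus: int = 0
    cpus: int = 4
    high_availability: bool = False
    tools: list = field(default_factory=list)     # picked from a tools catalog
    datasets: list = field(default_factory=list)  # picked from a data catalog

    def submit(self):
        # A real platform would call its provisioning service here;
        # this sketch just echoes what would be created.
        return {
            "project": self.project,
            "env": {"gpus": self.gpus, "cpus": self.cpus,
                    "ha": self.high_availability},
            "tools": self.tools,
            "datasets": self.datasets,
        }

req = ProvisionRequest(
    project="churn-investigation", cost_center="CC-1234",
    gpus=2, tools=["jupyter", "python"], datasets=["crm.customers"])
print(req.submit()["env"]["gpus"])  # 2
```

The point of bundling everything into one request is exactly the “push a button” experience described: no separate tickets for compute, tools, and data.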


You can do your work, save all your work away into a virtual repository, and save the data so it’s reproducible. You can also then check the things like model drift and data drift and those sorts of things. You can save the code and model parameters and those sorts of things away. And then you can put that on the backlog for the MLOps team.
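
A data-drift check of the kind mentioned here can be as simple as comparing a live feature’s distribution against its training-time baseline. This is a minimal sketch using the population stability index, one common choice of metric; it is not implied that HPE’s tooling uses this exact calculation.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample. Values above roughly 0.2 are often treated as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Laplace-smooth so empty bins don't cause a divide-by-zero
        return [(c + 1) / (len(xs) + bins) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]            # training-time feature values
live_ok = [i / 100 for i in range(100)]             # same distribution
live_shifted = [0.5 + i / 200 for i in range(100)]  # distribution has moved

print(round(psi(baseline, live_ok), 3))   # 0.0: same distribution, no drift
print(psi(baseline, live_shifted) > 0.2)  # True: flag the model for review
```

In an industrialized pipeline, a check like this would run on a schedule against the saved model parameters and training data, feeding the MLOps backlog automatically when drift is detected.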

Then the MLOps team picks it up and goes through a similar data science process. They want to create their own production line now, right? And so, they’re going to seek a different set of tools. This time, they need continuous integration and continuous delivery (CICD), plus a whole bunch of data tooling, because they want to operationalize your model. They’re going to define the way that that model is going to be deployed. Let’s say, we’re going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they’re going to configure that, do the rest of the work, and press the button again, right?
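
As a rough sketch of the A/B step mentioned here, the following shows a minimal, hypothetical traffic splitter between an incumbent model and a challenger. In practice this split would be configured in the serving layer (Kubeflow and similar systems handle it declaratively) rather than hand-coded like this.

```python
import random

def ab_router(model_a, model_b, weight_b=0.1, seed=None):
    """Return a scoring function that routes a fraction of requests
    to the candidate model (B) and the rest to the incumbent (A)."""
    rng = random.Random(seed)
    def score(features):
        chosen = "B" if rng.random() < weight_b else "A"
        model = model_b if chosen == "B" else model_a
        return chosen, model(features)
    return score

# Hypothetical stand-ins for a deployed churn model and its challenger
incumbent = lambda features: 0.3
challenger = lambda features: 0.7

route = ab_router(incumbent, challenger, weight_b=0.2, seed=42)
picks = [route({})[0] for _ in range(1000)]
print(picks.count("B"))  # roughly 200 of 1000 requests hit the challenger
```

Logging which arm served each request is what lets the MLOps team compare the challenger’s business outcomes against the incumbent before promoting it.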

Clearly, this is an ongoing process. Fundamentally, that requires workflow and automatic provisioning of the environment to eliminate time wasted waiting for things to be available. That is what we’re doing in our MLOps product.

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That’s exactly what this is about and for an MLOps process. It’s exactly that. It’s analogous to the sort of CICD, DevOps, part of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow Project, some of these newer, open source technologies. 

I should say that this is all very new, the ancillary tooling that wraps around the CICD. The CICD set of tools are also pretty new. What we’re also attempting to do is allow you, as a business, to bring these new tools and on-board them so you can evaluate them and see how they might impact what you’re doing as your process settles down.


The idea is to put them in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment. So you don’t want to lock yourself into a corner where you’re trapped into a particular workflow. You want to be able to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to is the changes that need to happen to your operational applications. You’re going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.

Gardner: You really couldn’t operationalize ML as a process if you’re only a tools provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t just do this if you were a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow the best-of-breed tools approach where it’s most impactful but also to start to put some standard glue around this -- the industrialization. How is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely that it’s going to be a single tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 to 20 years.

This is much more to do with a platform approach because this area is relatively new. Like any other technology, when it’s new it almost inevitably tends to be very technical in implementation. So using the early tools can be very difficult. Over time, the tools mature, with a mature UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. And what we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want. But at the same time, you’re inheriting the back-end system. So, that’s Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s linkage back to the data, your most precious asset in your business. Whether that be in a data lake or a data warehouse, in data marts or even streaming applications. 

This is the melting point of the business at the moment. And HPE has had a lot of experience helping our customers deliver value through information technology investments over many years. And that’s certainly what we’re trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I’m sure you’ll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It’s hardly surprising then that ML is so closely related to digital transformation, right? Because that’s about the stage at which organizations are competing after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”


Often this shift is led by an individual in an organization. There develops a momentum in an organization to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation. 

We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric. And it underpins a lot of the work that we’re doing. And so, we will often start with the data, to be honest with you, because a lot of the challenges in many of our organizations have to do with the data. And as businesses become more real-time and want to connect more closely to the edge, really that’s where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data and so we start there with building these data systems using our data fabric. And that leads to a requirement to do the analytics and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have gone to the industrialization approach to data science? What is it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there’s no way that was sustainable. It was a well-defined process, but it still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we could now create environments on the fly, effectively within minutes. It also means that we can take read-only snapshots of data. A read-only snapshot is just a set of pointers, so it’s instantaneous.
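
The “set of pointers” idea behind those instantaneous snapshots can be illustrated with a toy copy-on-write structure. This is a deliberate simplification for intuition, not how the underlying storage layer is actually implemented.

```python
class Volume:
    """Toy copy-on-write volume: a snapshot shares block pointers with
    its parent, so taking one costs nothing per byte of data stored."""
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block_id -> data

    def snapshot(self):
        # Only the pointer table is copied, never the data blocks themselves
        return Volume(self.blocks)

    def write(self, block_id, data):
        # Writing installs a new pointer in this volume only;
        # existing snapshots keep pointing at the old block
        self.blocks[block_id] = data

vol = Volume({0: "raw-data", 1: "labels"})
snap = vol.snapshot()          # instantaneous: just pointers, no data copy
vol.write(1, "labels-v2")      # a data scientist overwrites a block
print(snap.blocks[1])          # prints "labels": the snapshot is isolated
```

This is why each new data scientist no longer forces a replication of terabytes: they each get an isolated, writable view over the same underlying blocks.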


They were able to scale-out their data science without scaling up their costs or the number of people required. Interestingly, recently, they’ve moved that on further as well. Now doing all of that in a hybrid cloud environment. And they only have to change two lines of code to allow them to push workloads into AWS, for example, which is pretty magical, right? And that’s where they’re doing the data science.

Another good example that I can name is GM Finance, a fantastic example of how having started in one area for business -- all about risk and compliance -- they’ve been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use-case, because they have the data science in place it means they can know how a car is being driven. They are able to look at the value of the car at the end of that lease period and create more value from it.

These are the types of detailed business outcomes we’re talking about. This is about giving our customers the means to do more data science. And because the data science becomes better, you’re able to do even more data science and create momentum in the organization, which means you can do increasingly more data science. It’s really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and ask similarly, “Give me the example of a company that has done this right and has really reshaped itself.” Describe what you think a correctly analytically driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One relates to talking to an ex-colleague who worked at Facebook. And I’m so taken with what they were doing there. Basically, he said, what originally happened at Facebook, in his very words, is that to create a new product in Facebook they had an engineer and a product owner. They sat together and they created a new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.


The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

That will build the data science into the products and the applications from the start, not tack them on to the end.

Gardner: And when you do that, it seems to me the payoffs are expansive -- and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. You can’t deliver the ML without the technology; you won’t get the competitive advantage in your business, and so your digital transformation will also fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Gardner: I’m afraid we’ll have to leave it there. You’ve been with us as we explored how businesses can make data science more of a repeatable assembly line -- an industrialization, if you will -- of end-to-end data exploitation. And we’ve learned how HPE is ushering in the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve nearly every aspect of productivity.


So please join me in thanking our guest, Doug Cackett, EMEA Field Chief Technology Officer at HPE. Thank you so much, Doug. It was a great conversation.

Cackett: Yes, thanks everyone. Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining this sponsored BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the latest methods, tools, and thinking around making data science an integral core function of any business. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Tuesday, June 04, 2019

Ferrara Candy’s IT Modernization Journey Uses Automated Intelligence to Support Rapid Business Growth


Transcript of a discussion on how a global candy maker unlocks end-to-end process and economic efficiency through increased actionable insight and optimization of servers and storage.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on bringing intelligence to IT infrastructure.

Our next IT modernization journey interview explores how a global candy maker depends on increased insight for deploying and optimizing servers and storage. We’ll now learn how Ferrara Candy Company boosts its agility as a manufacturer by expanding the use of analysis and proactive refinement in its data center operations.

Stay with us to hear about unlocking the potential for end-to-end process and economic efficiency with our guest, Stefan Floyhar, Senior Manager of IT Infrastructure at Ferrara Candy Co. in Oakbrook Terrace, Illinois. Welcome, Stefan.

Floyhar: Thank you for having me.

Gardner: What are the major reasons Ferrara Candy took a new approach in bringing added intelligence to your servers and storage operations?

Floyhar: The driving force behind utilizing intelligence at the infrastructure level specifically was to alleviate the firefighting operations that we were constantly undergoing with the old infrastructure.

Gardner: And what sort of issues did that entail? What was the nature of the firefighting?

Floyhar: We were constantly addressing infrastructure-related hardware failures and firmware issues, and we had no visibility into true growth factors. That included not knowing what was happening on the back end during an outage or a performance problem. We lacked visibility into true real-time, scalable performance data.

Gardner: There’s nothing worse than being caught up in reactive firefighting mode when you’re also trying to be innovative, re-architect, and adjust to things like mergers and growth. What were some of the business pressures that you were facing even as you were trying to keep up with that old-fashioned mode of operations?

IT meets expanded candy demands

Floyhar: We have undergone a significant amount of growth in the last seven years -- going from 125 virtual machines to 452, as of this morning. Those 452 virtual machines are all application-driven and application-specific. As we continued to grow, as we continued to merge and acquire other candy companies, that growth exploded exponentially.

The merger with Ferrara Pan Candy, and Farley’s and Sathers in 2012, for example, saw an initial growth explosion. More recently, in 2017 and 2018, we were acquired by Ferrero. We also acquired Nestlé Confections USA, which has essentially doubled the business overnight. The growth is continuing at an exponential rate.

Gardner: The old mode of IT operations just couldn’t keep up with that dynamic environment?

Floyhar: That is correct, yes.

Gardner: Ferrara Candy might not roll off the tongue for many people, but I bet they have heard a lot of your major candy brands. Could you help people understand how big and global you are as a confectionery manufacturer by letting us know some of your major brands?

Floyhar: We are the producers of Now and Later, Lemonheads, Boston Baked Beans, Atomic Fireballs, Bob’s Candy Canes, and Trolli Gummies, which is one of our major brands. We also recently acquired Crunch Bar, Butterfinger, 100 Grand, Laffy Taffy, and Willy Wonka brands, among others.

We produce a little over 1 million pounds of gummies per week, and we are currently utilizing 2.5 million square feet of warehousing.
Learn More About Intelligent,
Self-Managing Flash Storage
In the Data Center and Cloud
Gardner: Wow! Some of those brands bring me way back. I mean, I was eating those when I was a kid, so those are some age-old and favorite brands.

Let’s get back to the IT that supports that volume and diversity of favorite confections. What were some of the major drivers that brought you to a higher level of automation, intelligence, and therefore being able to get on top of operations rather than trying to play catch up?

Floyhar: We have a very lean staff of engineers. That forced us to seek the next generation of product, specifically around artificial intelligence (AI) and machine learning (ML). We absolutely needed that because we’re growing at this exponential rate. We needed to take the focus off of infrastructure-related tasks and leverage technology to manage and operate the application stack and get it up to snuff. And so that was the major driving force for seeking AI [in our operations and management].

Gardner: And when you refer to AI you are not talking about helping your marketers better factor which candy to bring into a region. You are talking about intelligence inside of your IT operations, so AIOps, right?

Floyhar: Yes, absolutely. So things like Hewlett Packard Enterprise (HPE) InfoSight and some of the other providers with cloud-type operations for failure metrics and growth perspectives. We needed somebody with proven metrics. Proven technology was a huge factor in product determination.

Gardner: How about storage specifically? Was that something you targeted? It seems a lot of people need to reinvent and modernize their storage and server infrastructure in tandem and coordination.

Floyhar: Storage was actually the driving factor for us. It’s what started the whole renovation of IT within Ferrara. With our older storage, we were constantly suffering bottlenecks with administrative tasks and in not having visibility into what was going on.

Storage drove that need for change. We looked at a lot of different storage area networks (SANs) and providers, everything from HPE Nimble to Pure, VNX, Unity, Hitachi, … insert major SAN provider here. We probably did six or so months’ worth of research working with those vendors, doing proof of concepts (POCs) and looking at different products to truly determine what was the best storage solution for Ferrara.

During that discovery process and research, HPE InfoSight really jumped off the page at us: that level of AI, the proven track record, and being able to produce data around my actual workloads. I needed real-life examples, not a sales and marketing pitch.

Having a demo and seeing that data delivered on the fly and on request was absolutely paramount in making our decision.

Gardner: And, of course, InfoSight, was a part of Nimble Storage and Nimble became acquired by HPE. Now we are even seeing InfoSight technology being distributed and integrated across HPE’s broad infrastructure offerings. Is InfoSight something that you are happy to see extended to other areas of IT infrastructure?

Floyhar: Yes, ever since we adopted the Nimble Storage solution I have been waiting for InfoSight to be adopted elsewhere. Finally it’s been added across the ProLiant series of servers. We are an HPE ProLiant DL560 shop.

I am ultra-excited to see what that level of AI brings for predictive failure monitoring, which is essentially going to alleviate downtime. Any time we can predict a failure, it's obviously better than a reactive, retroactive approach where something fails and then we have to replace it.

Gardner: Stefan, how do you consume that proactive insight? What does InfoSight bring in terms of an operations interface? Or have you crafted a new process in your operations? How have you changed your culture to accommodate such a proactive stance? As you point out, being proactive is a fairly new way of avoiding failures and degraded performance.

Proactivity improves productivity

Floyhar: A lot of things have changed with that proactivity. First, the support model, with the automatic opening and closure of tickets with HPE support. The Nimble support is absolutely fantastic. I don’t have to wait for something reactive at 2 am, and then call HPE support. The SAN does it for me; InfoSight does it for me. It automatically opens the ticket and an engineer calls me at the beginning of my workday.

No longer are we getting interrupted with those 2, 3, 4 am emergency calls because our monitoring platform has notified us that, “Hey, a disk failed or looks like it’s going to fail.” That, in turn, has led to a complete culture change within my team. It takes us away from that firefighting, the constant, reactive methodologies of maintaining traditional three-tier infrastructure and truly into leveraging AI and the support behind it.
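The proactive support flow Stefan describes can be sketched in a few lines. This is an illustrative model only, not the actual InfoSight or HPE support API: the function names, fields, and threshold are all made up to show the idea of a predictive alert opening a ticket automatically instead of paging an engineer in the middle of the night.

```python
# Hypothetical sketch of predictive alert-to-ticket triage.
# Names, fields, and the 0.8 threshold are illustrative, not the
# real InfoSight API.
from dataclasses import dataclass

@dataclass
class DiskHealth:
    disk_id: str
    failure_probability: float  # model output in [0.0, 1.0]

def triage(alerts, threshold=0.8):
    """Open tickets for disks predicted to fail; no one gets paged at 2 am."""
    tickets = []
    for alert in alerts:
        if alert.failure_probability >= threshold:
            tickets.append({
                "disk": alert.disk_id,
                "action": "dispatch replacement",
                "page_engineer_now": False,  # engineer follows up at start of day
            })
    return tickets

alerts = [DiskHealth("shelf2-disk7", 0.93), DiskHealth("shelf1-disk3", 0.12)]
print(triage(alerts))
```

The point of the sketch is the `page_engineer_now: False` flag: the ticket is opened and the part dispatched automatically, and the human conversation happens during business hours.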
We are now able to turn the corner from reactive to proactive, including on applications redesign or re-work, or on tweaking performance improvements. We are taking that proactive approach with the applications themselves, which has rolled even further downhill to our end users and improved their productivity.

In the last six months, we have received significant praise for the applications performance, based on where it was three years ago compared with today. And, yes, part of that is because of the back-end upgrades in the infrastructure platform, but also because as we’ve been able to focus more on the applications administration tasks and truly making it a more pleasant experience for our end users -- less pain, less latency, just less issues.

Gardner: You are a big SAP shop, so that improvement extends across all of your operations, to your logistics and supply chain, for example. How does having a stronger sense of confidence in your IT operations give you benefits on business-level innovation?

Floyhar: As you mentioned, we are a large SAP shop. We run any number of SAP-insert-acronym-here systems. Being proactive on addressing some of the application issues has honestly led to less downtime for the applications. We have seen into the four- and five-9s (99.99 to 99.999 percent) of uptime from an application availability perspective.


Whether using HPE InfoSight or standard notifications, we have been able to proactively catch a number of issues that would have caused downtime, even as minimal as 30 minutes. But when you start talking about an operation that runs 24x7, 360 days a year, and truly depends on SAP as its backbone, it's the lifeblood of what we do on a business operations basis.

So 30 minutes makes all the difference on the production floor. Being able to turn that support corner has absolutely been critical in our success.

Gardner: Let’s go back to data. When it comes to having storage confidence, you can extend that confidence across your data lifecycle. It's not just storage and accommodating key mission-critical apps. You can start to modernize and gain efficiencies through backup and recovery, and to making the right cache and de-dupe decisions.

What’s it been like to extend your InfoSight-based intelligence culture into the full data lifecycle?

Sweet, simplified data backup and recovery

Floyhar: Our backup and recovery has gotten significantly less complex -- and significantly faster -- using Veeam with the storage API and Nimble snapshots. Our backup window went from about 22.5 hours a day, which was less than ideal, obviously, down to less than 30 minutes for a lot of our mission-critical systems.

We are talking about 8-10 terabytes of Microsoft Exchange data, 8-10 terabytes of SAP data -- all being backed up, full backups, in less than 60 minutes, using Veeam with the storage API. Again, it’s transformed how much time and how much effort we put into managing our backups.
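A quick back-of-the-envelope check shows why those numbers depend on the snapshot-plus-API approach rather than streaming every byte through the backup server. The calculation below is my own illustration, not figures from Ferrara or Veeam: a 10 TB full backup completing in 60 minutes implies roughly 2.8 GB/s of sustained throughput, which is why offloading reads to near-instant array snapshots matters.

```python
# Illustrative arithmetic: sustained throughput implied by a backup
# window. Uses decimal units (1 TB = 1000 GB) for a rough estimate.
def required_throughput_gb_s(data_tb, window_minutes):
    """GB/s needed to move data_tb terabytes within window_minutes."""
    gigabytes = data_tb * 1000
    return gigabytes / (window_minutes * 60)

# 10 TB in a 60-minute window:
print(round(required_throughput_gb_s(10, 60), 2))  # ~2.78 GB/s
```

Against the old 22.5-hour window, the same data implies only about 0.12 GB/s; the gain comes less from raw speed than from the array snapshot removing production hosts from the data path.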

Again, we have turned the corner on managing our backups on an exception-basis. So now it’s only upon failure. We have gained that much trust in the product and the back-end infrastructure.

We specifically watch for failure, and any time something comes up that's what we address, as opposed to watching everything 100 percent of the time to make sure that it's all working. And outside of the backups, just about every application has seen significant performance increases.

Gardner: Thinking about the future, a lot of organizations are experimenting more with hybrid cloud models and hybrid IT models. One of the things that holds them up from adoption is not feeling confident about having insight, clarity, and transparency across these different types of systems and architectures.

Does what HPE InfoSight and similar technologies bring to the table give you more confidence to start moving toward a hybrid model, or at least experimenting in that direction for better performance in price and economic payback?

Headed to hybrid, invested in IoT

Floyhar: Yes, absolutely, it does. We started to dabble in the cloud, and a mixed, hybrid infrastructure, a few years before Nimble came into play. We now have a significantly larger cloud presence. And we were able to scale that cloud presence easily, specifically because of the data. With our growth trending and all of the pieces involved with InfoSight, we were able to use that data to scale out and know what it looks like from a storage perspective on Amazon Web Services (AWS).


We started with SAP HANA out in the cloud, and now we’re utilizing some of that data on the back end. We are able to size and scale significantly better than we ever could have in the past, so it has actually opened up the door to adopting a bit more cloud architecture for our infrastructure.

Gardner: And looking to the other end from cloud, core, and data center, increasingly manufacturers like yourselves -- and in large warehouse environments like you have described -- the Internet of Things (IoT) is becoming much more in demand. You can place sensors and measure things in ways we didn’t dream of before.

Even though IoT generates massive amounts of data -- and it’s even processing at the edge – have you gained confidence to take these platform technologies in that direction, out to the edge, and hope that you can gain end-to-end insights, from edge to core?

Floyhar: The executives at our company have deemed that data is a necessity. We are a very data-driven company, and manufacturers of our size are truly benefiting from IoT and that data. People say "big data" or insert-common-acronym-here; people process big data, but nobody truly understands what that term means.
With our executives, we have gone through the entire process and said, “Hey, you know what? We have actually defined what big data means to Ferrara. We are going to utilize this data to help drive leaner manufacturing processes, to help drive higher-quality products out the door every single time to achieve an industry standard of quality that quite frankly has never been met before.”

We have very lofty goals for utilizing this data to drive the manufacturing process. We are working with a very large industrial automation company to assist us in utilizing IoT, not quite edge computing yet, but we might get there in the next couple of years. Right now we are truly adopting the IoT mentality around manufacturing.

And that is, as you mentioned, a huge amount of data. But it is also a very exciting opportunity for Ferrara. We make candy, right? We are not making cars, or tanks, or very expansive computer systems. We are not doing that level of intricacy. We are just making candy.

But to be able to leverage the machine data at almost every inch of the factory floor? If we could get that and utilize it to drive end-to-end process, efficiency, and manufacturing efficiencies? It not only helps us produce a better-quality product faster, it’s also environmentally conscious, because there will be less waste, if any waste at all.
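The kind of factory-floor analysis Stefan is pointing toward can be sketched simply: aggregate per-batch sensor readings and flag batches drifting out of specification before they become waste. This is a minimal illustration, assuming made-up batch names, a hypothetical cook-temperature sensor, and invented spec limits; Ferrara's actual process data and tolerances are not public.

```python
# Minimal sketch: flag candy batches whose average cook temperature
# drifts outside spec limits, catching waste before it happens.
# Batch IDs, readings, and the (115, 125) limits are all invented
# for illustration.
def flag_out_of_spec(readings, temp_limits=(115.0, 125.0)):
    """Return batch IDs whose average temperature falls outside limits."""
    low, high = temp_limits
    flagged = []
    for batch_id, temps in readings.items():
        avg = sum(temps) / len(temps)
        if not (low <= avg <= high):
            flagged.append(batch_id)
    return flagged

readings = {
    "gummi-0412": [118.2, 119.0, 117.8],  # within spec
    "gummi-0413": [126.5, 127.1, 128.0],  # running hot -> potential waste
}
print(flag_out_of_spec(readings))  # ['gummi-0413']
```

At production scale the same idea runs as streaming aggregation over millions of readings per day, which is where the intelligent back-end storage and compute Stefan mentions come in.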

The list of wonderful things that comes out of this goes on and on. It really is an exciting opportunity. We are trying to leverage that. The intelligent back-end storage and computer systems are ultra-imperative to us for meeting those objectives.

Gardner: Any words of advice for other organizations that are not as far ahead as you are when it comes to going to all-flash and highly intelligent storage -- and then extending that intelligence into an AIOps culture? With 20/20 hindsight, for those organizations that would like to use more AIOps, who would like to get more intelligence through something like HPE InfoSight, what advice can you give them?

Floyhar: First things first -- use it. For even small organizations, all the way up to the largest of organizations, it may almost seem like, “Well, what is that data really going to be used for?” I promise, if you use it, it is greatly beneficial to your IT operations.

If you don't have it -- get it. It's very important. This is the future of technology. Using AI to predictively analyze all of the data -- not just from your own environment, but taking a conglomerate view across customer data and applying predictive analytics -- truly does allow IT organizations to turn the corner from reactive to proactive.

Historically we would constantly be fighting infrastructure-related issues -- outages, performance bottlenecks, and so on. With the AI behind HPE InfoSight, and other providers, including cloud platforms, the AI makes all the difference. You don’t have to fight that fight when it becomes a problem because you get to nip it in the bud.

Gardner: I'm afraid we'll have to leave it there. We have been exploring how a global candy maker has increased its insight for best deploying and optimizing servers and storage. We have heard how it has also moved toward an AIOps culture and, as a result, gained great benefits in boosting its agility as a manufacturer. Ferrara Candy has also been managing growth by expanding its use of analysis and proactive refinement of its data center infrastructure.


So please join me in thanking our guest, Stefan Floyhar, Senior Manager of IT Infrastructure at Ferrara Candy Co. in Oakbrook Terrace, Illinois. Thank you, Stefan.

Floyhar: Thank you very much, Dana.

Gardner: And a big thank you to our audience as well for joining this special BriefingsDirect Voice of the Customer IT modernization interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.

Thanks again for listening. Pass this along to your IT community, if you would, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how a global candy maker unlocks end-to-end process and economic efficiency through increased actionable insight and optimization of servers and storage. Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.

You may also be interested in: