
Friday, February 05, 2021

How Storage Can Help You Digitally Transform in a Hybrid Cloud World


A transcript of a discussion on how consistent global storage models best accelerate and enable pervasive analytics that support digital business transformation. 

 

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: IBM Storage.

 

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions and you’re listening to BriefingsDirect.

 

Our next data strategies insights discussion explores how consistent and global storage models can best propel pervasive analytics and support digital business transformation.

 

Decades of disparate and uncoordinated storage solutions have hindered enterprises’ ability to gain common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

 

Yet only a comprehensive data storage model that includes all platforms, data types, and deployment architectures will deliver the rapid insights that businesses need.

 

Stay with us now as we examine how IBM Storage is leveraging containers and the latest storage advances to deliver the holy grail of inclusive, comprehensive, and actionable storage.

 


To learn more about the future promise of the storage strategies that accelerate digital transformation, please join me now in welcoming our guest, Denis Kennelly, General Manager, IBM Storage. Welcome, Denis.

 

Kennelly: Thank you, Dana. It’s great to be here.

 

Gardner: Clearly the world is transforming digitally. And hybrid cloud is helping in that transition. But what role specifically does storage play in allowing hybrid cloud to function in a way that bolsters and even accelerates digital transformation?

 

Kennelly: As you said, the world is undergoing a digital transformation, and that is accelerating in the current climate of a COVID-19 world. And, really, it comes down to having an IT infrastructure that is flexible, agile, has cloud-like attributes, is open, and delivers the economic value that we all need.

 

That is why we at IBM have a common hybrid cloud strategy. A hybrid cloud approach, we now know, is 2.5 times more economical than a public cloud-only strategy. And why is that? Because as customers transform -- and transform their existing systems -- the data and those systems sit on-premises for a long time. Moving everything to the public cloud carries a high cost of transformation and runs into constraints such as data sovereignty and compliance. This is why hybrid cloud is a key enabler.

 

Hybrid cloud for transformation

Now, underpinning that, the core building block of the hybrid cloud platform is containers and Kubernetes, delivered through our OpenShift technology. That’s the key enabler of the hybrid cloud architecture and of how we move applications and data within that environment.

 

As customers start to transform and move those applications and workloads to this new world, being able to access the data -- and to keep that access throughout the journey -- is critical. Integrating storage into that world of containers is therefore a key building block on which we are very focused today.

 

Storage is where you capture all that state, where all the data is stored. When you think about cloud, hybrid cloud, and containers -- you think stateless. You think about cloud-like economics as you scale up and scale down. Our focus is bridging those two worlds and making sure that they come together seamlessly. To that end, we provide an end-to-end hybrid cloud architecture to help those customers in their transformation journeys.

 

Gardner: So often in this business, we’re standing on the shoulders of the giants of the past 30 years; the legacy. But sometimes legacy can lead to complexity and becomes a hindrance. What is it about the way storage has evolved up until now that people need to rethink? Why do we need something like containers, which seem like a fairly radical departure?

 

Kennelly: It comes back to the existing systems. You know, I think storage at the end of the day was all about the applications, the workloads that we ran. It wasn’t storage for storage’s sake. You know, we designed applications, we ran applications and servers, and we architected them in a certain fashion.

When you get to a hybrid cloud world ... If you're in a digitally transformed business, you can respond rapidly. Your infrastructure needs to respond to those needs versus having the maximum throughput capacity.

And, of course, they generated data and we wanted access to that data. That’s just how the world happened. When you get to a hybrid cloud world -- I mean, we talk about cloud-like behavior, cloud-like economics – it manifests itself in the ability to respond.

 

If you’re in a digitally transformed business, you can respond to needs in your supply chain rapidly, maybe to a surge in demand based on certain events. Your infrastructure needs to respond to those needs rather than being sized for the maximum throughput capacity that would ever be needed. That’s the benefit cloud has brought to the industry, and why it’s so critically important.

 

Now, maybe traditionally storage was designed for the worst-case scenario. In this new world, we have to be able to scale up and scale down elastically, just as we do with workloads in a cloud-like fashion. That’s what has fundamentally changed and what we need to change in those legacy infrastructures. Then we can deliver more of an as-a-service, consumption-type model to meet the needs of the business.

 

Gardner: And on that economic front, digitally transformed organizations need data very rapidly, and in greater volumes -- with that scalability to easily go up and down. How will the hybrid cloud model supported by containers provide faster data in greater volumes, and with a managed and forecastable economic burden?

 

Disparate data delivers insights

Kennelly: In a digitally transformed world, data is the raw material to a competitive advantage. Access to data is critical. Based on that data, we can derive insights and unique competitive advantages using artificial intelligence (AI) and other tools. But therein lies the question, right?

 

When we look at things like AI, a lot of our time and effort is spent on getting access to the data and being able to assemble that data and move it to where it is needed to gain those insights.

 


Being able to do that rapidly and at a low cost is critical to the storage world. And so that’s what we are very focused on, being able to provide those data services -- to discover and access the data seamlessly. And, as required, we can then move the data very rapidly to build on those insights and deliver competitive advantage to a digitally transformed enterprise.

 


Gardner: Denis, in order to have comprehensive data access and rapidly deliver analytics at an affordable cost, the storage needs to run consistently across a wide variety of different environments -- bare-metal, virtual machines (VMs), containers -- and then to and from both public and private clouds, as well as the edge.

 

What is it about the way that IBM is advancing storage that affords this common view, even across that great disparity of environments?

 

Kennelly: That’s a key design principle for our storage platform, what we call global access or a global file system. We’re going right back to our roots in IBM Research, decades ago, where we invented a lot of that technology. And that’s the core of what we’re still talking about today -- to be able to have seamless access across disparate environments.

A key design principle for our storage platform, what we call global access or a global file system, goes back to our roots at IBM Research. We invented a lot of that technology. And that's at the core of what we're talking about -- seamless access across disparate environments.

Access is one issue, right? You can get read-access to the data, but you need to do that at high performance and at scale. At the same time, we are generating data at a phenomenal rate, so you need to scale out the storage infrastructure seamlessly. That’s another critical piece of it. We do that with products or capabilities we have today in things like IBM Spectrum Scale.

 

But another key design principle in our storage platforms is being able to run in all of those environments -- bare-metal servers, VMs, containers, and right out to the edge footprints. So we are making sure our storage platform is designed and capable of supporting all of those platforms. It has to run on them as well as support the data services -- the access services, the mobility services, and the like -- seamlessly across those environments. That’s what enables the hybrid cloud platform at the core of our transformation strategy.

 

Gardner: In addition to the focus on the data in production environments, we also should consider the development environment. What does your data vision include across a full life-cycle approach to data, if you will?

 

Be upfront with data in DevOps

Kennelly: It’s a great point because the business requirements drive the digital transformation strategy. But a lot of these efforts run into inertia when you have to change. Development teams within the organization have traditionally done things in a certain way. Now, all of a sudden, they’re building applications for a very different target environment -- this hybrid cloud environment, from the public cloud, to the data center, and right out to the edge.

 

The economics we’re trying to drive require flexible platforms across the DevOps tool chain so you can innovate very quickly. That’s because digital transformation is all about how quickly you can innovate via such new services. The next question is about the data.

 

As you develop and build these transformed applications in a modern, DevOps, cloud-like development process, you have to integrate your data assets early and make sure the data is available -- both in that development cycle and when you move to production. It’s essential to use things like copy-data-management services to integrate that access into your tool chain in a seamless manner. If you build those applications and ignore the data, then it becomes a shock as you roll it into production.
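To make that idea concrete, here is a minimal sketch -- not a description of any specific IBM product -- of how a development pipeline could provision its own point-in-time copy of a production volume through the Kubernetes CSI snapshot API from Python. The namespace, claim, and snapshot-class names are placeholders.

```python
# A hedged sketch of copy-data-management in a container tool chain: snapshot
# the production volume, then give the development environment a claim restored
# from that snapshot instead of letting it touch the production data directly.
# All names (namespace, PVCs, snapshot class) are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
namespace = "analytics-dev"

# 1. Take a CSI VolumeSnapshot of the production claim.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "customer-data-snap"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",  # placeholder snapshot class
        "source": {"persistentVolumeClaimName": "customer-data-prod"},
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1",
    namespace=namespace, plural="volumesnapshots", body=snapshot,
)

# 2. Restore a new claim from the snapshot for the development pipeline, so the
#    data is available in the dev cycle without mutating or copying production.
dev_claim = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="customer-data-dev"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
        data_source=client.V1TypedLocalObjectReference(
            api_group="snapshot.storage.k8s.io",
            kind="VolumeSnapshot",
            name="customer-data-snap",
        ),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, dev_claim)
```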

 

This is the key issue. A lot of times we can get an application running in one scenario and it looks good, but as you start to extend those services across more environments – and haven’t thought through the data architecture -- a lot of the cracks appear. A lot of the problems happen.

 

You have to design in the data access upfront in your development process and into your tool chains to make sure that’s part of your core development process.

 

Gardner: Denis, over the past several years we’ve learned that containers appear to be the gift that keeps on giving. One of the nice things about this storage transition, as you’ve described, is that containers were at first a facet of the development environment.

 

Developers leveraged containers first to solve many problems for runtimes. So it’s also important to understand the limits that containers had. Stateful and persistent storage hadn’t been part of the earlier container attributes.

 

How technically have we overcome some of the earlier limits of containers?

 

Containers create scalable benefits

Kennelly: You’re right, containers have roots in the open-source world. Developers picked up on containers to gain a layer of abstraction. In an operational context, it gives tremendous power because of that abstraction layer. You can quickly scale up and scale down pods and clusters, and you gain cloud-like behaviors very quickly. Even within IBM, we have containerized software and enabled traditional products to have cloud-like behaviors.

 

We were able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit as a developer. We then moved containers into operations to respond to business needs, such as when there’s a spike in demand and you need to scale up the environment. Containers are amazing in how quick and how simple that is.

We have been able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit as a developer. We then moved containers to operations to respond to business needs to scale up and down. Containers are amazing in how quickly and how simple that is.

 

Now, with all of that power and the capability to scale up and scale down workloads, you also have a storage system sitting at the back end that has to respond accordingly. That’s because as you scale up more containers, you generate more input/output (IO) demands. How does the storage system respond?

 

Well, we have managed to integrate containers into the storage ecosystem. But, as an industry, we have some work to do. The integration of storage with containers is not just a simple IO channel to the storage. It also needs to be able to scale out accordingly, and to be managed. It’s an area where we at IBM are focused, working closely with our friends at Red Hat, to make sure that integration is very seamless and gives you consistent, global behavior.
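As a hedged illustration of that scale-out relationship -- not an IBM or Red Hat reference implementation -- the sketch below uses the Kubernetes Python client to declare a StatefulSet whose volumeClaimTemplate asks the CSI driver for a new volume each time a replica is added, so storage capacity grows with the container count. The storage class, image, and namespace names are hypothetical.

```python
# A minimal sketch of container-driven storage scale-out in Kubernetes: every
# new replica of this StatefulSet gets its own PersistentVolumeClaim, so adding
# pods automatically asks the CSI storage layer for more volumes and IO paths.
# "scale-out-file-class" and the image/namespace names are placeholders.
from kubernetes import client, config

config.load_kube_config()

claim_template = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="scale-out-file-class",   # hypothetical CSI class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

worker_set = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="analytics-worker"),
    spec=client.V1StatefulSetSpec(
        service_name="analytics-worker",
        replicas=3,                                   # raise this and storage follows
        selector=client.V1LabelSelector(match_labels={"app": "analytics-worker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "analytics-worker"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="worker",
                    image="registry.example.com/analytics-worker:latest",  # placeholder
                    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
                ),
            ]),
        ),
        volume_claim_templates=[claim_template],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="analytics", body=worker_set)
```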

 

Gardner: With security and cyber-attacks being so prominent in people’s minds in early 2021, what impacts do we get with a comprehensive data strategy when it comes to security? In the past, we had disparate silos of data. Sometimes, bad things could happen between the cracks.

 

So as we adopt containers consistently is there an overarching security benefit when it comes to having a common data strategy across all of your data and storage types?

 

Prevent angles of attack

Kennelly: Yes. It goes back to the hybrid cloud platform and having potentially multiple public clouds, data center workloads, edge workloads, and all of the combinations thereof. The new core is containers, but with applications running across that hybrid environment, we’ve expanded the attack surface beyond the data center.

 

By expanding the attack surface, unfortunately, we’ve created more opportunities for people to do nefarious things, such as interrupt the applications and get access to the data. But when people attack a system, the cybercriminals are really after the data. Those are the crown jewels of any organization. That’s why this is so critical.

 


Data protection then requires understanding when somebody is tampering with the data or gaining access to data and doing something nefarious with that data. As we look at our data protection technologies, and as we protect our backups, we can detect if something is out of the ordinary. Integrating that capability into our backups and data protection processes is critical because that’s when we see at a very granular level what’s happening with the data. We can detect if behavioral attributes have changed from incremental backups or over time.
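The principle of spotting behavioral change across incremental backups can be shown with a small sketch. This is not IBM’s detection logic, only an illustration of the idea: compare each backup’s changed-data volume against a rolling baseline and flag outliers for investigation.

```python
# A simplified, hypothetical sketch of backup-time anomaly detection: flag an
# incremental backup whose changed-data volume deviates sharply from the recent
# baseline, which can indicate mass encryption or tampering. Real products use
# far richer signals (entropy, file types, deletion patterns); this only shows
# the shape of the idea.
from statistics import mean, stdev

def flag_anomalies(changed_gb_per_backup, window=7, threshold=3.0):
    """Return indexes of backups whose change volume sits more than `threshold`
    standard deviations above the trailing `window`-backup baseline."""
    anomalies = []
    for i in range(window, len(changed_gb_per_backup)):
        baseline = changed_gb_per_backup[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        sigma = sigma or 1e-9                 # guard against a perfectly flat baseline
        if (changed_gb_per_backup[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Example: steady nightly change rates, then a sudden spike worth investigating.
history = [12, 14, 11, 13, 12, 15, 13, 14, 12, 240]
print(flag_anomalies(history))                # -> [9]
```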

 

We can also integrate that into the business process because, unfortunately, we have to plan for somebody attacking us. It’s really about how quickly we can detect and respond to get the systems back online. You have to plan for the worst-case scenario.

 

That’s why we have such a big focus on making sure we can detect in real time when something is happening, as the blocks are literally being written to the disk. We can then also unwind back to a known good copy. That’s a huge focus for us right now.

 

Gardner: When you have a comprehensive data infrastructure and can access data globally across all of these different environments, it seems to me that you have set yourself up for a pervasive analytics capability, which is the gorilla in the room when it comes to digital business transformation. Denis, how does the IBM Storage vision help bring more pervasive and powerful analytics to better drive a digital business?

 

Climb the AI Ladder

Kennelly: At the end of the day, that’s what this is all about. It’s about transforming businesses, driving analytics, and providing unique insights that help grow your business and respond to the needs of the marketplace.

 

It’s all about enabling top-line growth. And that’s only possible when you can have seamless access to the data very quickly to generate insights literally in real time so you can respond accordingly to your customer needs and improve customer satisfaction.

 

This platform is all about discovering that data to drive the analytics. We have a phrase within IBM, we call it “The AI Ladder.” The first rung on that AI ladder is about discovering and accessing the data, and then being able to generate models from those analytics that you can use to respond in your business.

We're all in a world based on data. AI has a major role to play where we can look at business processes and understand how they are operating and then drive greater automation.That's a huge focus for us -- optimizing and automating existing business processes.

 

We’re all in a world based on data. And we’re using it to not only look for new business opportunities but for optimizing and automating what we already have today. AI has a major role to play where we can look at business processes and understand how they are operating and then, based on analytics and AI, drive greater automation. That’s a huge focus for us as well: Not only looking at the new business opportunities but optimizing and automating existing business processes.

 

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on how consistent and global storage models best propel pervasive analytics and support digital business transformation.

 

And we’ve learned how IBM Storage is leveraging containers and the latest storage advances to deliver inclusive, comprehensive, and actionable data. So please join me in thanking our guest, Denis Kennelly, General Manager, IBM Storage. Thank you so much, Denis.

 

Kennelly: Thank you, Dana.

 

Gardner: And please look forward to when Denis joins me again for the next discussion in this three-part series. We’ll delve beneath the covers to learn more about the actual technologies enabling IBM’s vision for the future of data storage.

 


A big thank you as well to our audience for joining these BriefingsDirect data strategies insights discussions. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of IBM Storage-sponsored BriefingsDirect discussions.

 

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

 

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: IBM Storage.

A transcript of a discussion on how consistent global storage models best accelerate and enable pervasive analytics that support digital business transformation. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.

 


Monday, December 07, 2020

How to Industrialize Data Science to Attain Mastery of Repeatable Intelligence Delivery

Transcript of a discussion on the latest methods, tools, and thinking around making data science an integral core function of any business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next BriefingsDirect Voice of Analytics Innovation podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into data science advances and strategy.


Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated. To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line -- an industrialization, if you will, of end-to-end data exploitation.

Stay with us now as we explore the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

To learn more about the ways that data and analytics behave more like a factory -- and less like an Ivory Tower -- please join me now in welcoming Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. Welcome, Doug.


Doug Cackett: Thank you so much, Dana.

Gardner: Doug, why is there a lingering gap -- and really a gaping gap -- between the amount of data available and the analytics that should be taking advantage of it?

Data’s potential at edge

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.


And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge. 

You will need to operationalize those decisions and move that capability to the edge where your business meets your customer. That’s the challenge we’re all looking for machine learning (ML) -- and the operationalization of all of those ML models into applications -- to make the difference. 

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications. 

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization -- I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.

There's exponential growth in the demand for analytics in your business. And yet the number of data scientists in your organization, growing them, is not exponential. And budgets are pretty much flat or declining.

So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget. 

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago -- where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that. Because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time. 


As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time -- and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask questions around what we could learn about what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

An awful lot of the organizations that I deal with haven’t had a lot of experience around such operationalization problems. They haven’t built that part of their assembly line yet. Automating supply chains and mistake-proofing things -- what Toyota calls jidoka -- are also really important. It’s a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks-and-mortar sense, went through back in the 1980s when it moved to business process reengineering, adopted kaizen principles, and did what Deming and a greater emphasis on quality had done for the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company. Quite expensive, very hand-built, and very specialized.

And I ended up by almost throwing things at the TV because they were talking about the skills of this one individual. They only had one guy who could actually bend the metal to create the bonnet, the hood, of the car in the way that it needed to be done. And it took two or three years to train this guy, and I’m thinking, “Well, if you just automated the process, and the robot built it, you wouldn’t need to have that variability.” I mean, it’s just so annoying, right?

In the same way, with data science we’re talking about laying bricks -- not Michelangelo hammering out the figure of David. What I’m really trying to say is a lot of the data science in our customer’s organizations are fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills -- that’s what we’re trying to achieve. Both [the basics and the finesse] are necessary and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. The two types of company we see are those that haven’t yet gotten to the stage where they’re trying to operationalize more models -- in other words, they don’t really understand what the problem is yet -- and those that have reached that stage but haven’t yet made the process robust and repeatable.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult. 

Many of these organizations have only just linked their data science to their big data instances or their data lakes. And they’re using those both to run the workloads and to develop the models. And therein lies the problem. Often they get stuck with simple things like trying to have everyone use a uniform environment. All of your data scientists are sharing both the data and the compute environment.

Data scientists can be very destructive in what they're doing. Maybe overwriting data, for example. To avoid that, you end up replicating terabytes of data, which can take a long time. That also demands new resources, including new hardware.

And data scientists can often be very destructive in what they’re doing. Maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you’re going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe more compute power, and that means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral being designed to give data scientists more of what they need, how they need it, and to help close the gap between the ad hoc approach and the right kind of assembly-line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available, and maybe not. So you’d select the appropriately sized environment. You might then go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code in Jupyter using Python.

And the next step, you then want to find the right data, maybe through the data catalog. So you locate the data that you want to use and you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. That should be provisioned straight away, right?


You can do your work, save all your work away into a virtual repository, and save the data so it’s reproducible. You can also then check the things like model drift and data drift and those sorts of things. You can save the code and model parameters and those sorts of things away. And then you can put that on the backlog for the MLOps team.

Then the MLOps team picks it up and goes through a similar data science process. They want to create their own production line now, right? And so, they’re going to seek a different set of tools. This time, they need continuous integration and continuous delivery (CICD), plus a whole bunch of data tooling, to operationalize your model. They’re going to define the way that that model is going to be deployed. Let’s say we’re going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they’re going to configure that, do the rest of the work, and press the button again, right?
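As a simplified illustration of that A/B step -- not the actual Kubeflow or HPE configuration -- the pattern the MLOps team configures amounts to routing a small share of live scoring traffic to the candidate model and recording which variant served each request. The endpoint URLs and traffic split below are placeholders.

```python
# A minimal, hypothetical sketch of A/B model serving: send ~10% of scoring
# requests to the challenger model, the rest to the champion, and record which
# variant answered so the two can be compared before a full rollout. In practice
# the serving layer (for example, Kubeflow) handles the traffic split; the URLs
# here are placeholders.
import random
import requests

CHAMPION_URL = "http://churn-model-v1.models.svc/predict"    # placeholder endpoint
CHALLENGER_URL = "http://churn-model-v2.models.svc/predict"  # placeholder endpoint
CHALLENGER_SHARE = 0.10

def score(features: dict) -> dict:
    variant = "challenger" if random.random() < CHALLENGER_SHARE else "champion"
    url = CHALLENGER_URL if variant == "challenger" else CHAMPION_URL
    response = requests.post(url, json={"instances": [features]}, timeout=2.0)
    response.raise_for_status()
    result = response.json()
    result["served_by"] = variant        # logged downstream to compare variants
    return result
```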

Clearly, this is an ongoing process. It requires workflow and automatic provisioning of the environment to eliminate wasted time waiting for stuff to be available. That is fundamentally what we’re doing in our MLOps product.

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That’s exactly what this is about for an MLOps process. It’s analogous to the CICD and DevOps side of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow Projects, some of these newer, open-source technologies.

I should say that this is all very new, the ancillary tooling that wraps around the CICD. The CICD set of tools are also pretty new. What we’re also attempting to do is allow you, as a business, to bring these new tools and on-board them so you can evaluate them and see how they might impact what you’re doing as your process settles down.

The way we're doing MLOps and data science is progressing extremely quickly. So you don't want to lock yourself into a corner where you're trapped in a particular workflow. You want to have agility. It's analogous to the DevOps movement.

The idea is to put them in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment. So you don’t want to lock yourself into a corner where you’re trapped into a particular workflow. You want to be able to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to is the changes that need to happen to your operational applications. You’re going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.

Gardner: You really couldn’t operationalize ML as a process if you’re only a tools provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t just do this if you were a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow the best-of-breed tools approach where it’s most impactful, but also to start to put some standard glue around this -- the industrialization. How is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely that it’s going to be a tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 and 20 years.

This is much more to do with a platform approach because this area is relatively new. Like any other technology, when it’s new it almost inevitably tends to be very technical in implementation. So using the early tools can be very difficult. Over time, the tools mature, with a mature UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. And what we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want. But at the same time, you’re inheriting the back-end system. So, that’s Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s linkage back to the data, your most precious asset in your business. Whether that be in a data lake or a data warehouse, in data marts or even streaming applications. 

This is the melting pot of the business at the moment. And HPE has had a lot of experience helping our customers deliver value through information technology investments over many years. And that’s certainly what we’re trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I’m sure you’ll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It’s hardly surprising then that ML is so closely related to digital transformation, right? Because that’s about the stage at which organizations are competing after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”


Often this shift is led by an individual in an organization. There develops a momentum in an organization to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation. 

We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric. And it underpins a lot of the work that we’re doing. And so, we will often start with the data, to be honest with you, because a lot of the challenges in many of our organizations have to do with the data. And as businesses become more real-time and want to connect more closely to the edge, that’s really where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data and so we start there with building these data systems using our data fabric. And that leads to a requirement to do the analytics and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have taken the industrialization approach to data science? What has it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there’s no way that was sustainable. It was a well-defined process, but it’s still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we could now create environments on the fly, effectively within minutes. It also means that we can take read-only snapshots of data. A read-only snapshot is just a set of pointers, so it’s instantaneous.
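That “set of pointers” point is worth a toy illustration -- this is not how the underlying BlueData/HPE Ezmeral technology is implemented, just the general idea: a read-only snapshot copies references to existing data blocks rather than the blocks themselves, which is why it is effectively instantaneous however large the dataset.

```python
# A toy illustration (not the real implementation) of why a read-only snapshot
# is instantaneous: it records pointers to existing data blocks instead of
# copying them, so its cost tracks the metadata, not the terabytes.
class Volume:
    def __init__(self, blocks):
        self.blocks = blocks              # block_id -> data (the only full copy)

    def snapshot(self):
        # Copy only the references; the underlying blocks are shared read-only.
        return dict(self.blocks)

prod = Volume({0: b"customer records...", 1: b"transactions..."})
snap = prod.snapshot()                    # instant: two pointers copied, no data moved

prod.blocks[1] = b"new transactions..."   # production keeps changing...
print(snap[1])                            # ...but the snapshot still reads b"transactions..."
```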

They scaled out their data science without scaling up their costs or the number of people required. They are now doing that in a hybrid cloud environment. And they only have to change two lines of code to push workloads into AWS, which is pretty magical, right?

They were able to scale-out their data science without scaling up their costs or the number of people required. Interestingly, recently, they’ve moved that on further as well. Now doing all of that in a hybrid cloud environment. And they only have to change two lines of code to allow them to push workloads into AWS, for example, which is pretty magical, right? And that’s where they’re doing the data science.

Another good example that I can name is GM Finance, a fantastic example of how having started in one area for business -- all about risk and compliance -- they’ve been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use case, because they have the data science in place, they can know how a car is being driven. They are able to look at the value of the car at the end of that lease period and create more value from it.

These are types of detailed business outcomes we’re talking about. This is about giving our customers the means to do more data science. And because the data science becomes better, you’re able to do even more data science and create momentum in the organization, which means you can do increasingly more data science. It’s really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and ask similarly, “Give me the example of a company that has done this right and has really reshaped itself.” Describe what you think a correctly analytically driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One relates to a conversation with an ex-colleague who worked at Facebook. And I’m so taken with what they were doing there. Basically, he said, what originally happened at Facebook, in his own words, is that to create a new product at Facebook they had an engineer and a product owner. They sat together and they created a new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.


The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

That will build the data science into the products and the applications from the start, not tack them on to the end.

Gardner: And when you do that, it seems to me the payoffs are expansive -- and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. You can’t deliver the ML without the technology; you won’t get the competitive advantage in your business, and so your digital transformation will also fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Gardner: I’m afraid we’ll have to leave it there. You’ve been with us as we explored how businesses can make data science more of a repeatable assembly line – an industrialization, if you will -- of end-to-end data exploitation. And we’ve learned how HPE is ushering in the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve nearly every aspect of productivity.


So please join me in thanking our guest, Doug Cackett, EMEA Field Chief Technology Officer at HPE. Thank you so much, Doug. It was a great conversation.

Cackett: Yes, thanks everyone. Thanks, Dana.

Gardner: And a big thank you as well to our audience for joining this sponsored BriefingsDirect Voice of Analytics Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the latest methods, tools, and thinking around making data science an integral core function of any business. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.
