Friday, March 06, 2020

As Containers Go Mainstream, IT Culture Should Pivot to End-to-End DevSecOps

https://www.hpe.com/us/en/solutions/container-platform.html

A discussion on the escalating benefits from secure and robust container use and how security concerns need to be addressed early and often across the new end-to-end container deployment spectrum.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of Innovation podcast series.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into modern IT deployment architecture strategies.

Container-based deployment models have rapidly gained popularity from cloud models to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.

Yet in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum -- and that means addressing security during both development and deployment under the rubric of DevSecOps best practices.


Stay with us now as we examine the escalating benefits that come from secure and robust container use with our guest, Simon Leech, Worldwide Security and Risk Management Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. Welcome, Simon.

Simon Leech: Hey, Dana. Good afternoon.

Gardner: Simon, are we at an inflection point where we’re going to see containers take off in the mainstream? Why is this the next level of virtualization?

Mainstream containers coming

Leech: We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.

https://www.linkedin.com/in/simonleech/
One of the things that always held people back a little bit with virtualization was that you are always reliant on an operating system (OS) to manage the applications that sit on top of it, and on that OS to manage the application code that you deploy to that environment.

But what we have seen with containers is that as everything starts to follow a cloud-native approach, we start to deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to be able to address the development in these small, microservice-based or module-based development approaches.

So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best possibilities to adopt and address container use within their own application development environments.

Gardner: And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment -- not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers -- as not just perhaps a proof of concept (POC) or test environment, but as ready for the production mainstream?

Leech: Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer’s project on his laptop. So the developer is going out there, identifying a container-based technology as something interesting to play around with, and, as time has gone by, has realized he can actually make a lot of progress by developing his applications using a container-based architecture.

What that means from an organizational perspective is that this is often done under the radar of management. One of the things we are discussing with our customers as we go and talk about addressing DevSecOps and DevOps initiatives is to make sure that you do get that buy-in from the executive team, so that you can start to enable some top-down integration.

Don’t just see containers as a developer’s laptop project but look at it broadly and understand how you can integrate that into the overall IT processes that your organization is operating with. And that does require a good level of buy-in from the top.

Gardner: I imagine this requires a lifecycle approach to containers thinking -- not just about the development, but in how they are going to be used over time and in different places.

Now, 451 Research recently predicted that the market for containers will hit $2.7 billion this year. Why do you think that the IT operators -- the people who will be inheriting these applications and microservices -- will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?

Quick-change code artists

Leech: One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. So whereas in the traditional application development environment, a developer would need to make a change to some code and it would involve requesting a downtime to be able to update the complete application, with a container-based architecture, you only have to update parts of the architecture.

So, it allows you to make many more changes than you previously would have been able to deliver to the organization -- and it allows you to address those changes very rapidly.

Gardner: How does this allow for a more common environment to extend across hybrid IT -- from on-premises to cloud to hybrid cloud and then ultimately to the edge?

Leech: Well, applications developed in containers and developed with a cloud-native approach are typically very portable. So you aren’t restricted to a particular version or platform, for example. The container itself runs on top of any OS of the same genre. Obviously, you can’t run a Windows container on top of a Linux OS, or vice versa.

But within the general Linux space that pretty much has compatibility. So it makes it very easy for the containers to be developed in one environment and then released into different environments.

Gardner: And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multi-cloud extensibility benefit?

Leech: Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn’t matter if that’s a Google Cloud Platform instance, a Microsoft Azure instance, or Amazon Web Services (AWS).

Gardner: We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?

What are the challenges to providing security specifically, making sure that the containers are not going to add risk – and, in fact, improve the deployment productivity of organizations?

Make security a business priority 

Leech: When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers, and the security concerns and constraints that come along with that. And the second one is much more around the technology or technical challenges.

If you begin by looking at the business challenges of how to adopt containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support, move past the concept that the developer is doing everything on his laptop, and train our coders on the need for secure coding.

A lot of developers have as their main goal to produce high-quality software fast, and they are not trained as security specialists. It makes a lot of sense to put an education program into place that allows you to train those internal coders so that they understand the need to think a little bit more about security -- especially in a container environment where you have fast release cycles and sometimes the security checks get missed or don’t get properly instituted. It’s good to start with a very secure baseline.

And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development team, your DevOps development teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.

Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it’s all going to go into production someone calls security. Security comes along and says, “Hey, have you done risk assessments on this?” And that ends up delaying the project.


If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. It allows us to reduce delays that you might have with security testing.

It also allows us to shift our testing to the left. In a traditional waterfall model, testing happens right before the product goes live. But in a DevOps or DevSecOps model, it’s much better to embed the security, best practices, and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.

The last point around the business view is that, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach -- or even a security self-service approach -- gives developers a secure registry, for example, that provides known good container images, or provides infrastructure and compliance as code, so that they can follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.

Gardner: Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?

Updates, the container way 

Leech: A lot of the principles are the same. So, there’s obviously still a need for network security tools. There’s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the shared kernel.

An interesting white paper has been released by the National Institute of Standards and Technology (NIST) in the US, SP 800-190, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.

So, when we’re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.

One of the important things to understand when we’re talking about securing containers is that we have a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine (VM). We deploy it to the hypervisor. Then we realize that if there is a missing patch, or a required update, that we roll that update out using whatever patch management tools we use.

In a container environment, we take a completely different approach. We never update running containers. The source of your known good image is your registry. The registry is where we update container images and keep updated versions of those images, and we use the container orchestration platform to make sure that the next time somebody calls a new container, it’s launched from the new container image.

It’s important to remember we don’t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that’s really a change in the mindset for a lot of security professionals, because they think, “Okay, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,” or whatever, and, “I’m going to scan the environment. I’m going to find out what’s missing, and then I’m going to deploy patches to plug the gaps.”
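To make that registry-driven update model concrete, here is a minimal sketch, assuming a Kubernetes-style orchestrator and the official Python client; the deployment, namespace, and image names are hypothetical. Instead of patching a running container, you point the workload at a new image tag from the registry and let the orchestrator roll it out.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config is also possible).
config.load_kube_config()

apps = client.AppsV1Api()

# Hypothetical deployment, container, and image names for illustration only.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "web", "image": "registry.example.com/web:1.2.4"}
                ]
            }
        }
    }
}

# The orchestrator replaces running containers with ones started from the new
# image; nothing in the running environment is patched in place.
apps.patch_namespaced_deployment(name="web", namespace="prod", body=patch)
```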

So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline and into the container orchestration tools we use to address that needed change in behavior.
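As a rough illustration of building that assessment into the pipeline, the sketch below shells out to an image scanner during a CI stage and fails the build if serious findings come back. The image reference is hypothetical, and the scanner command and flags (shown here for Trivy) may differ depending on the tool and version you standardize on.

```python
import subprocess
import sys

IMAGE = "registry.example.com/web:1.2.4"  # hypothetical image under test

# Scan the candidate image; a non-zero exit code signals HIGH/CRITICAL findings.
# Flags are illustrative and should be checked against your scanner's docs.
result = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE]
)

if result.returncode != 0:
    print("Vulnerabilities found; blocking promotion of the image.")
    sys.exit(1)

print("Scan clean; image can be pushed to the trusted registry.")
```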

Gardner: It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?

Simplify app separation 

Leech: Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.

One of the challenges we’ve seen for a long time in software development, for example, is that developers take shortcuts by hard-coding clear-text passwords into the code, because it’s easier. And, yeah, that’s understandable. You don’t need to worry about managing or remembering passwords.

But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can now request the secret from the secrets management platform that the orchestration platform offers to you.
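A minimal sketch of that pattern, with hypothetical variable and file names: the application asks for the secret at runtime from whatever the orchestrator injects (an environment variable or a mounted file), instead of carrying a clear-text password in the source.

```python
import os
from pathlib import Path

def get_db_password() -> str:
    # Preferred: the orchestrator injects the secret as an environment variable.
    value = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if value:
        return value

    # Fallback: a file mounted by the orchestrator's secrets mechanism;
    # the path is an assumption for illustration.
    secret_file = Path("/var/run/secrets/app/db-password")
    if secret_file.exists():
        return secret_file.read_text().strip()

    raise RuntimeError("No secret provided by the orchestration platform")
```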

These orchestration tools also give you the capability to separate container workloads of differing sensitivity levels within your organization. For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because although the container environment gives you the capability, using separate namespaces, to separate the individual container architectures from one another, it’s still a good security best practice to run those on completely different physical hosts, or in a virtualized container environment on top of different VMs. This provides physical separation between the applications. Very often the orchestrators will allow you to provide that functionality within the environment without having to think too much about it.
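As a sketch of how an orchestrator can express that separation, the example below (Kubernetes Python client, with hypothetical label, namespace, and image names) pins a sensitive workload to a dedicated pool of hosts so it never lands next to lower-sensitivity containers.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical names and labels; the label would match a dedicated, hardened node pool.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="payments-api", namespace="finance"),
    spec=client.V1PodSpec(
        node_selector={"workload-tier": "restricted"},  # keep off shared web hosts
        containers=[
            client.V1Container(
                name="payments-api",
                image="registry.example.com/payments-api:2.0",
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="finance", body=pod)
```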

Gardner: There is another burgeoning new area where containers are being used. Not just in applications and runtime environments, but also for data and persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.

How should the all-important security around data caches and different data sources enter into our thinking?

Save a slice for security 

Leech: Because containers are temporary instances, it’s important that you’re not actually storing any data within the container itself. At the same time, as importantly, you’re not storing any of that data on the host OS either.

It’s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have Nimble Storage or Primera. They have the capability, through plug-ins, to interact with the container environment and provide you with persistent storage that remains even as the containers are being provisioned and de-provisioned.

So your container itself, as I said, doesn’t store any of the data, but a well-architected application infrastructure will allow you to store that on a third-party storage array.
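Here is a minimal sketch of requesting that external, persistent storage through the orchestrator rather than writing into the container; the claim name and storage class are placeholders for whatever your array's plug-in exposes.

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical claim and storage-class names; the class maps to an external
# array (for example via a CSI plug-in) so data outlives any single container.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="external-array-standard",
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="prod", body=pvc
)
```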

Gardner: Simon, I’ve had an opportunity to read some of your blogs and one of your statements jumped out … “The organizational culture still lags behind when it comes to security.” What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?

Leech: It’s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don’t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a two-pizza team.

A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that applies equally to development teams when you’re working on container projects. They don’t need to be big; they don’t need to be massive.

It’s important to make sure there’s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That’s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.

Gardner: We’ve talked about people and process. There is also, of course, that third leg of the stool -- the technology. Are the people building container platforms like HPE thinking along these lines as well? What does the technology, and the way it’s being designed, bring to the table to help organizations be DevSecOps-oriented?

Select specific, secure solutions 

Leech: There are a couple of ways that technology solutions are going to help. The first is the pre-production commercial solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.

A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It’s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like Aqua Security or Twistlock, which was recently acquired by Palo Alto Networks, I believe.

And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.

Very few of these capabilities are actually built into the orchestration platforms. They tend to come from third parties such as Sysdig, Guardicore, and NeuVector. And then there’s another bucket of solutions, which are more open source. These typically focus on a single function in a very cost-effective way and are typically open source community-led. These are solutions such as SonarQube and Falco, which is the open source project that Sysdig runs. You also have Docker Bench and Calico, a networking security tool.

But no single solution covers all of an enterprise customer’s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who’s going to be the best partner to deliver those products with those technology solutions for you.

Gardner: And how are you designing Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?

Leech: We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of the BlueData acquisition. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.

I’m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we’d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren’t necessarily aware of all of the changes they’re going to have to make to their IT model.

At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.

We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we’ve done to build our own container security solution reference architecture.

Gardner: Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?

Edge elevates container benefits 

Leech: A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagement where a particular customer may be wanting to understand how they could take traditional applications and modernize or architect those into cloud-native or container-based applications.

There’s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.

Gardner: And over the past few years, we’ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.

But one of the concerns, if we move more compute to the edge, is will security risks go up? Is there something about doing container security properly that will make that edge more robust and more secure?

Leech: Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you’re talking about edge deployments, that rings very true.

From a resource perspective, when you’re talking about something like autonomous driving, it’s going to be a lot lighter to have a shared kernel rather than lots of instances of a VM running, for example.

From a strictly security perspective, if you deal with container lifecycle management effectively, involve the security guys early, have a process around releasing, updating, and retiring container images in your registry, and have a process around introducing security controls and code scanning into your software development lifecycle -- making sure that every container that gets released is signed with an appropriate enterprise signing key -- then you have something that is very repeatable, compared with a traditional virtualized approach to application delivery.
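One way to picture the signing step in that release process is the sketch below, which signs an image with an enterprise key before it is promoted. The image reference and key path are hypothetical, and the command shown (cosign) is just one possible tool; flags may vary by version.

```python
import subprocess

IMAGE = "registry.example.com/web:1.2.4"          # hypothetical image reference
KEY_PATH = "/secure/keys/enterprise-signing.key"  # hypothetical key location

# Sign the image as the final gate before promotion to the production registry.
# The cosign invocation is illustrative; check flags against your own tooling.
subprocess.run(["cosign", "sign", "--key", KEY_PATH, IMAGE], check=True)
print(f"Signed {IMAGE}; deployment policy can now verify the signature.")
```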

That’s one of the big benefits of containers. It’s very much a declarative environment. It’s something that you prescribe … This is how it’s going to look. And it’s going to be repeatable every time you deploy that. Whereas with a VM environment, you have a lot of VM sprawl. And there are a lot of changes across the different platforms as different people have connected and changed things along the way for their own purposes.

There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.

Gardner: What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?

Begin with risk evaluation 

Leech: The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can’t modernize, and they’re going to remain in your traditional data center for a number of years.

I suggest looking for the projects that are going to give you the quickest wins and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you do the appropriate risk assessments, and work with the services organizations that can help you. The advantage of a services organization is they’ve probably been there with another customer previously, so they can use the best practices and experience they have already gained to help your organization adopt containers.

Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about it. Should we be calling it DevSecOps, or should we call it SecDevOps or DevOpsSec? My personal opinion is call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations -- and that’s really where it belongs.

In terms of assets, there is plenty of information out there; a Google search finds you a lot of resources. But as I mentioned earlier, the NIST White Paper SP 800-190 is a great starting point to understand not only container security challenges but also to get a good understanding of what containers can deliver for you.

At the same time, at HPE we are also committed to delivering relevant information to our customers. If you look on our website and also our enterprise.nxt blog site, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.

Gardner: I’m afraid we will have to leave it there. We have been exploring how container-based deployment models have gained popularity -- from cloud models to corporate data centers. And we have learned how, in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum.

So please join me in thanking our guest, Simon Leech, Worldwide Security and Risk Management Practice at HPE Pointnext Services. Thank you so much, Simon.

Leech: Thanks for having me.


Gardner: I learned a lot. And thanks as well to our audience for joining this sponsored BriefingsDirect Voice of Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on the escalating benefits from secure and robust container use and how security concerns need to be addressed early and often across the new end-to-end container deployment spectrum. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Friday, February 28, 2020

Mission Critical Use Cases Show How Analytics Architectures Usher in an Artificial Intelligence Era

www.hpe.com/AI

A discussion on how artificial intelligence and advanced analytics solutions coalesce into top competitive differentiators that prove indispensable for digital business transformation.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of AI Innovation podcast series.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into artificial intelligence (AI) use cases and strategies.

Major trends in AI and advanced analytics are now coalescing into top competitive differentiators for most businesses. Access to advanced algorithms, more cloud options, high-performance compute (HPC) resources, and an unprecedented data asset collection have all come together to make AI more attainable -- and more powerful -- than ever.

Stay with us as we now examine why AI is indispensable for digital transformation through deep-dive interviews on prominent AI use cases and their escalating business benefits.


To learn more about analytic solutions that support mission-critical use cases, we’re joined by two experts. The first is Andy Longworth, Senior Solution Architect in the AI and Data Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. Welcome, Andy.

Andy Longworth: Thank you, Dana.

Gardner: We’re also here with Iveta Lohovska, Data Scientist in the Pointnext Global Practice for AI and Data at HPE. Welcome, Iveta.

Iveta Lohovska: Thank you.

Gardner: Let’s look at the trends coalescing around modern analytics and AI and why they’re playing an increasingly essential role in digital business transformation. Andy, what do you see as top drivers making AI more prominent in most businesses?

AI data boost to business 

Longworth: We have three main things driving AI at the moment for businesses. First of all, we know about the data explosion. These AI algorithms require huge amounts of data. So we’re generating that, especially in the industrial setting with machine data.

https://www.linkedin.com/in/alongwor/
Also, the relative price of computing is coming down, giving the capability to process all of that data at accelerating speeds as well. You know, the graphics processing units (GPUs) and tensor processing units (TPUs) are becoming more available, enabling us to get through that vast volume of data.

And thirdly, the algorithms. If we look to organizations like Facebook, Google, and academic institutions, they’re making algorithms available as open source. So organizations don’t have to go and employ somebody to build an algorithm from the ground up. They can begin to use these pre-trained, pre-created models to give them a kick-start in AI and quickly understand whether there’s value in it for them or not.

Gardner: And how do those come together to impact what’s referred to as digital transformation? Why are these actually business benefits?

Longworth: They allow organizations to become what we call data driven. They can use the massive amounts of data that they’ve previously generated but never tapped into to improve business decisions, impacting the way they drive the business through AI. It’s transforming the way they work.

Across several types of industry, data is now driving the decisions. Industrial organizations, for example, improve the way they manufacture. Without the processing of that data, these things wouldn’t be possible.

Gardner: Iveta, how do the trends Andy has described make AI different now from a data science perspective? What’s different now than, say, two or three years ago?

Lohovska: Most of the previous AI algorithms were 30, 40, or even 50 years old in terms of their linear algebra and mathematical foundations. The higher levels of computing power enable newer computations and larger amounts of data to train those algorithms.

https://www.linkedin.com/in/iveta-lohovska-40210362/
Those two components are fundamentally changing the picture, along with the improved taxonomies and the way people now think of AI as differentiated between classical statistics and deep learning algorithms. Now, not just technical people can interact with these technologies and analytic models. Semi-technical people can with a simple drag-and-drop interaction, based on the new products in the market, adopt and fail fast -- or succeed faster -- in the AI space. The models are also getting better and better in their performance based on the amount of data they get trained on and their digital footprint.

Gardner: Andy, it sounds like AI has evolved to the point where it is mimicking human-like skills. How is that different and how does such machine learning (ML) and deep learning change the very nature of work?

Let simple tasks go to machines 

Longworth: It allows organizations to move some of the jobs that were previously very tedious for people to machines, and to repurpose people’s skills for more complex work. For example, take computer vision applied to quality control. If you’re creating the same product again and again and paying somebody to look at that product to say whether there’s a defect on it, it’s probably not the best use of their skills. And, they become fatigued.

If you look at the same thing again and again, you start to miss features of that and miss the things that have gone wrong. A computer doesn’t get that same fatigue. You can train a model to perform that quality-control step and it won’t become tired over time. It can keep going for longer than, for example, an eight-hour shift that a typical person might work. So, you’re seeing these practical applications, which then allows the workforce to concentrate on other things.

Gardner: Iveta, it wasn’t that long ago that big data was captured and analyzed mostly for the sake of compliance and business continuity. But data has become so much more strategic. How are businesses changing the way they view their data?

Lohovska: They are paying more attention to the quality of the data and the variety of the data collection that they are focused on. From a data science perspective, even if I want to say that the performance of models is extremely important, and that my data science skills are a critical component to the AI space and ecosystem, it’s ultimately about the quality of the data and the way it’s pipelined and handled.

This process of data manipulation, getting to the so-called last mile of the data science contribution, is extremely important. I believe it’s the critical step and foundation. Organizations will realize that being more selective and paying more attention to the foundations of how they handle big data -- or small data – will get them to the data science part of the process.

You can already see the maturity as many customers, partners, and organizations pay more attention to the fundamental layers of AI. Then they can get better performance at the last mile of the process.

Gardner: Why are the traditional IT approaches not enough? How do cloud models help?

Cloud control and compliance 

Longworth: The cloud brings opportunities for organizations insomuch as they can try before they buy. So if you go back to the idea of processing all of that data, before an organization spends real money on purchasing GPUs, they can try them in the cloud to understand whether they work and deliver value. Then they can look at the delivery model. Does it make sense with my use case to make a capital investment, or do I go for a pay-per-use model using the cloud?

You also have the data management piece, which is understanding where your data is. From that sense, cloud doesn’t necessarily make life any less complicated. You still need to know where the data resides, control that data, and put in the necessary protections in line with the value of the data type. That becomes particularly important with legislation like the General Data Protection Regulation (GDPR) and the use of personally identifiable information (PII).

If you don’t have your data management under control and understand where all those copies of that data are, then you can’t be compliant with GDPR, which says you may need to delete all of that data.

So, you need to be aware of what you’re putting in the cloud versus what you have on-premises and where the data resides across your entire ecosystem.

Gardner: Another element of the past IT approaches has to do with particulars vs. standards. We talk about the difference between managing a cow and managing a herd.

How do we attain a better IT infrastructure model to attain digital business transformation and fully take advantage of AI? How do we balance between a standardized approach, but also something that’s appropriate for specific use cases? And why is the architecture of today very much involved with that sort of a balance, Andy?

Longworth: The first thing to understand is the specific use case and how quickly you need insights. We can process, for example, data in near real-time or we can use batch processing like we did in days of old. That use case defines the kind of processing.

If, for example, you think about an autonomous vehicle, you can’t batch-process the sensor data coming from that car as it’s driving on the road. You need to be able to do that in near real-time -- and that comes at a cost. You not only need to manage the flow of data; you need the compute power to process all of that data in near real-time.

So, understand the criticality of the data and how quickly you need to process it. Then we can build solutions to process the data within that framework and within the right time that it needs to be processed. Otherwise, you’re putting additional cost into a use case that doesn’t necessarily need to be there.

When we build those use cases we typically use cloud-like technologies -- be that containers or scalar technologies. That allows us portability of the use case, even if we’re not necessarily going to deploy it in the cloud. It allows us to move the use case as close to the data as possible.

For example, if we’re talking about a computer vision use case on a production line, we don’t want to be sending images to the cloud and have the high latency and processing of the data. We need a very quick answer to control the production process. So you would want to move the inference engine as close to the production line as possible. And, if we use things like HPE Edgeline computing and containers, we can place those systems right there on the production line to get the answers as quickly as we need.
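As a rough sketch of that pattern, the snippet below runs a pre-trained model locally on an edge node with ONNX Runtime so each camera frame is scored next to the production line instead of being shipped to a cloud; the model file, input shape, and threshold are assumptions for illustration.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical defect-detection model exported to ONNX and copied to the edge node.
session = ort.InferenceSession("defect_classifier.onnx")
input_name = session.get_inputs()[0].name

def score_frame(frame: np.ndarray) -> bool:
    """Return True if the frame likely shows a defect (threshold is illustrative)."""
    # The model is assumed to expect a batch of normalized images, e.g. (1, 3, 224, 224).
    outputs = session.run(None, {input_name: frame.astype(np.float32)})
    defect_probability = float(outputs[0][0][1])
    return defect_probability > 0.8
```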

So being able to move the use case where it needs to reside is probably one of the biggest things that we need to consider.

Gardner: Iveta, why is the so-called explore, experiment, and evolve approach using such a holistic ecosystem of support the right way to go?

Scientific methods and solutions

Lohovska: Because AI is not easy. If it were easy, then everyone would be doing it and we would not be having this conversation. It’s not a simple statistical use case or a program or business intelligence app where you already have the answer or even an idea of the questions you are asking.

The whole process is in the data science title. You have the word “science,” so there is a moment of research and uncertainty. It’s about the way you explore the data, the way you understand the use cases, starting from the fact that you have to define your business case, and you have to define the scope.

My advice is to start small, not exhaust your resources or the trust of the different stakeholders. Also define the correct use case and the desired return on investment (ROI). HPE is even working on the definitions and the business case when approaching an AI use case, trying to understand the level of complexity and the required level of prediction needed to achieve the use case’s success.

Such an exploration phase is extremely important so that everyone is aligned and finds a right path to minimize failure and get to the success of monetizing data and AI. Once you have the fundamentals, once you have experimented with some use cases, and you see them up and running in your production environment, then it is the moment to scale them.

I think we are doing a great job bringing all of those complicated environments together, with their data complexity, model complexity, and networking and security regulations into one environment that’s in production and can quickly bring value to many use cases.

This flow of experimenting, rather than approaching things as if you have a fixed answer or a fixed approach, is extremely important, and it is the way we at HPE are approaching AI.


Gardner: It sounds as if we are approaching some sort of a unified reference architecture that’s inclusive of systems, cloud models, data management, and AI services. Is that what’s going to be required? Andy, do we need a grand unifying theory of AI and data management to make this happen?

Longworth: I don’t think we do. Maybe one day we will get to that point, but what we are reaching now is a clear understanding of which architectures work for which use cases and business requirements. We are then able to apply them without having to experiment every time we go into this, which complements what Iveta said.

When we start to look at these use cases, when we engage with customers, what’s key is making sure there is business value for the organization. We know AI can work, but the question is, does it work in the customer’s business context?

If we can take out a good deal of that experimentation and come in with a fairly good answer to the use case in a specific industry, then we have a good jump start on that.

As time goes on and AI develops, we will see more generic AI solutions that can be used for many different things. But at the moment, it’s really still about point solutions.

Gardner: Let’s find out where AI is making an impact. Let’s look first, Andy, at digital prescriptive maintenance and quality control. You mentioned manufacturing a little earlier. What’s the problem, context, and how are we getting better business outcomes?

Monitor maintenance with AI

Longworth: The problem is the way we do maintenance schedules today. If you look back in history, we had reactive maintenance that was basically … something breaks and then we fix it.

Now, most organizations are in a preventative mode so a manufacturer gives a service window and says, “Okay, you need to service this machinery every 1,000 hours of running.” And that happens whether it’s needed or not.
Read the White Paper on Digital Prescriptive Maintenance and Quality Control
When we get into prescriptive and predictive maintenance, we only service those assets as they actually need it, which means having the data, understanding the trends, recognizing if problems are forthcoming, and then fixing them before they impact the business.

The data from the machinery may include temperature, vibration, and speed readings, giving you a condition-based monitoring view and an understanding in real time of what’s happening with the machinery. You can then also use past history to be able to predict what is going to happen in the future with that machine.

We can get to a point where we know in real time what’s happening with the machinery and have the capability to predict the failures before they happen.
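A minimal sketch of that idea, assuming a table of historical sensor readings (temperature, vibration, speed) and an off-the-shelf anomaly detector; the file, column names, and thresholds are hypothetical, and a production system would be considerably more involved.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical historical sensor data for one machine.
history = pd.read_csv("machine_42_sensors.csv")  # columns: temperature, vibration, speed
features = history[["temperature", "vibration", "speed"]]

# Fit an anomaly detector on normal operating history.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)

# Score the latest readings; -1 marks outliers that may signal a developing failure.
latest = features.tail(100)
flags = model.predict(latest)
if (flags == -1).any():
    print("Anomalous behavior detected; schedule an inspection before failure.")
```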

The prescriptive piece comes in when we understand the business criticality or the business impact of an asset. If you have a production line and you have two pieces of machinery on that production line, both may have the identical probability of failure. But one is on your critical manufacturing path, and the other is some production buffer.

As a business, the way that you are going to deal with those two pieces of machinery is different. You will treat the one on the critical path differently than the one where you have a product buffer. And so the prescriptive piece goes beyond the prediction to understanding the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.

That’s the idea of the solution when we build digital prescriptive maintenance. The side benefit that we see is the quality control piece. If you have a large piece of machinery that you can attest is running perfectly during a production run, for example, then you can say with some certainty what the quality of the product coming out of that machine will be.

Gardner: So we have AI overlooking manufacturing and processing. It’s probably something that would make you sleep a little bit better at night, knowing that you have such a powerful tool constantly observing and reporting.

Let’s move on to our next use case. Iveta, video analytics and surveillance. What’s the problem we need to solve? Why is AI important to solving it?

Scrutinize surveillance with AI 

Lohovska: For video surveillance and video analytics in general, the overarching field is computer vision. This is the most mature and currently the trendiest AI field, simply because the amount of data is there, the diversity is there, and the algorithms are getting better and better. It’s no longer bleeding-edge research that is difficult to grasp, adopt, and bring into production. So, now the main goal is moving into production and monetizing these types of data sources.
Read the White Paper on Video Analytics and Surveillance
When you talk about video analytics or surveillance, or any kind of quality assurance, the main problem is detecting and improving on human errors, behaviors, and environments. Telemetry plays a huge role here, and there are many components and constraints to consider in this environment.

That makes it hardware-dependent and also requires AI at the edge, where most of the algorithms and decisions need to happen. If you want to detect fire, detect fraud or prevent certain types of failure, such as quality failure or human failure -- time is extremely important.

As HPE Pointnext Services, we have been working on our own solutions and reference architectures to approach those problems because of the complexity of the environment, the different cameras, and the hardware handling the data acquisition process. Even at the beginning it’s enormous and very diverse. There is no one-size-fits-all. There is no one provider or one solution that can handle surveillance use cases or broad analytical use cases at the manufacturing plant or oil and gas rig where you are trying to detect fire or oil and gas spills in the different environments. So being able to approach it holistically, to choose the right solution for the right component, and to design the architecture is key.

Also, it’s essential to have the right hardware and edge devices to acquire the data and handle the telemetry. Let’s say when you are positioning cameras in an outside environment and you have different temperatures, vibrations, and heat. This will reflect on the quality of the acquired information going through the pipeline.

Some of the benefits of use cases using computer vision and video surveillance include real-time information coming from manufacturing plants and knowing that all the safety and security standards there are met. Knowing that the people operating the plant are following the instructions and have the safeguards required for that specific manufacturing plant is also extremely important.

When you have a quality assurance use case, video analytics is one source of information to tackle the problem. For example, improving the quality of your products or batches is just one source in the computer vision field. Having the right architecture, being agile and flexible, and finding the right solution for the problem and the right models deployed at the right edge device -- or at the right camera -- is something we are doing right now. We have several partners working to solve the challenges of video analytics use cases.

Gardner: When you have a high-scaling, high-speed AI to analyze video, it’s no longer a gating factor that you need to have humans reviewing the processes. It allows video to be used in so many more applications, even augmented reality, so that you are using video on both ends of the equation, as it were. Are we seeing an explosion of applications and use cases for video analytics and AI, Iveta?

Lohovska: Yes, absolutely. The impact of algorithms in this space is enormous. Also, the open source datasets and pre-trained models, such as ImageNet and ResNet, give you a huge amount of data on which to train any kind of algorithm. You can adjust and fine-tune them for your own use cases, whether it’s healthcare, manufacturing, or video surveillance. It’s very enabling.
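To illustrate what adapting a pre-trained model can look like in practice, here is a minimal PyTorch/torchvision sketch that reuses an ImageNet-trained ResNet and swaps its final layer for a two-class quality-inspection head; the class count is hypothetical, and the weight-loading API shown applies to recent torchvision versions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a ResNet-18 pre-trained on ImageNet (API shown for recent torchvision).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical two-class task (OK / defect).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is trained during fine-tuning on your own images.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```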

You can see the diversity of the solutions people are developing and the different programs they are tackling using computer vision capabilities, not only from the algorithms, but also from the hardware side, because the cameras are getting more and more powerful.

Currently, we are working on several projects in the non-visible human spectrum. This is enabled by the further development of the hardware acquiring those images that we can’t see.

Gardner: If we can view and analyze machines and processes, perhaps we can also listen and talk to them. Tell us about speech and natural language processing (NLP), Iveta. How is AI enabling those businesses and how they transform themselves?

Speech-to-text to protect

Lohovska: This is another strong field where AI is used and still improving. It’s not as mature as computer vision, simply because of the complexity of human language and speech, and the way speech gets recorded and transferred. It’s a bit more complex, so it’s not only a problem of technologies and people writing algorithms, but also of linguists being able to frame the grammar problems and write the right equations to solve them.
But one very interesting field in the speech and NLP area is speech-to-text, so basically being able to transcribe speech into text. It’s very helpful for emergency organizations handling emergency calls, or for fraud detection, where you need to detect fraud or danger in real time, or to determine whether someone is in danger. It’s a very common use case for law enforcement and security organizations, or for simply improving the quality of your service in call centers.

This example is industry- or vertical-independent. You can have finance, manufacturing, retail -- but all of them have some kind of customer support. This is the most common use case, being able to record and improve the quality of your services, based on the analysis you can apply. Similar to the video analytics use case, the problem here, too, is handling the complexity of different algorithms, different languages, and the varying quality of the recordings.
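As a very small illustration of the speech-to-text step, the sketch below transcribes a recorded call with the open source SpeechRecognition package and flags hypothetical keywords; the file name, keyword list, and choice of recognition backend are assumptions, and a real call-center pipeline would add streaming, language handling, and quality control far beyond this.

```python
import speech_recognition as sr

RISK_KEYWORDS = {"fraud", "emergency", "help"}  # hypothetical watch list

recognizer = sr.Recognizer()
with sr.AudioFile("support_call_0153.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)

# Transcribe the call; the backend here is the free Google Web Speech API wrapper.
transcript = recognizer.recognize_google(audio).lower()

if RISK_KEYWORDS & set(transcript.split()):
    print("Potential risk detected; route this call for immediate review.")
```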

A reference architecture, where you have the different components designed around exactly this holistic approach, allows the user to explore, evolve, and experiment in this space. We choose the right component for the right problem and how to approach it.

And in this case, if we combine the right data science tool with the right processing tool and the right algorithms on top of it, then you can simply design the solution and solve the specific problem.

Gardner: Our next and last use case for AI is one people are probably very familiar with, and that’s the autonomous driving technology (ADT).

Andy, how are we developing highly automated-driving infrastructures that leverage AI and help us get to that potential nirvana of truly self-driving and autonomous vehicles?

Data processing drives vehicles 

Longworth: There are several problems around highly autonomous driving as we have seen. It’s taking years to get to the point where we have fully autonomous cars and there are clear advantages to it.

If you look at, for example, what the World Health Organization (WHO) says, there are more than 1 million deaths per year in road traffic accidents. One of the primary drivers for ADT is that we can reduce the human error in cars on the road -- and reduce the number of fatalities and accidents. But to get to that point we need to train these immensely complex AI algorithms that take massive amounts of data from the car.

Just purely from the sensor point of view, we have high-definition cameras giving 360-degree views around the car. You have radar, GPS, audio, and vision systems. Some manufacturers use light detection and ranging (LIDAR), some not. But you have all of these sensors giving massive amounts of data. And to develop those autonomous cars, you need to be able to process all of that raw data.
Typically, in an eight-hour shift, an ADT car generates somewhere between 70 and 100 terabytes of data. If you have an entire fleet of cars, then you need to be able to very quickly get that data off of the cars so that you can get them back out on the road as quickly as possible. Then you need to get that data from where you offload it into the data center so that the developers, data scientists, analysts, and engineers can build the next iteration of the autonomous driving strategy.

When you have built that, tested it, and done all the good things that you need to do, you need to next be able to get those models and that strategy from the developers back into the cars again. It’s like the other AI problems that we have been talking about, but on steroids because of the sheer volume of data and because of the impact of what happens if something should go wrong.

At HPE Pointnext Services, we have developed a set of solutions that address several of the pain points in the ADT development process. First is the ingest; how can we use HPE Edgeline processing in the car to pre-process data and reduce the amount of data that you have to send back to the data center. Also, you have to send back the most important data after the eight-hour drive first, and then send the run-of-the-mill, backup data later.

The second piece is the data platform itself, building a massive data platform that is extensible to store all the data coming from the autonomous driving test fleet. That needs to also expand as the fleet grows as well as to support different use cases.

The data platform and the development platform are not only massive in terms of the amount of data that it needs to hold and process, but also in terms of the required tooling. We have been developing reference architectures to enable automotive manufacturers, along with the suppliers of those automotive systems, to build their data platforms and provide all the processing that they need so their data scientists can continuously develop autonomous driving strategies and be able to test them in a highly automated way, while also giving access to the data to the additional suppliers.

For example, the sensor suppliers need to see what's happening to their sensors while they are on the car. The platform we have been putting together is really concerned with having the flexibility for those different use cases and the scalability to support the data volumes of today -- but also to grow into the data volumes of the not-too-distant future.

https://www.hpe.com/us/en/solutions/artificial-intelligence.html

The platform also supports speed and data locality -- providing high-speed parallel file systems, for example, to feed those ML development systems and help train the models.
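That "feed the trainers" pattern can be sketched with a standard PyTorch data loader in which many worker processes stream pre-extracted camera frames from a shared parallel-filesystem mount. The mount point and file layout below are assumptions for illustration; only the Dataset/DataLoader pattern itself is standard PyTorch.

```python
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

FRAME_ROOT = Path("/mnt/pfs/adt/frames")   # hypothetical parallel-filesystem mount

class FrameDataset(Dataset):
    def __init__(self, root: Path):
        # Assumes one pre-processed frame per .npy file.
        self.files = sorted(root.glob("*.npy"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int) -> torch.Tensor:
        # Each worker reads directly from the shared mount, so aggregate
        # read bandwidth scales with the number of workers.
        return torch.from_numpy(np.load(self.files[idx]))

if __name__ == "__main__":
    loader = DataLoader(FrameDataset(FRAME_ROOT), batch_size=32,
                        num_workers=8, pin_memory=True)
    for batch in loader:
        pass  # hand each batch to the training loop
```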

So all of this pulls together the different components we have talked about with the other use cases, but at a scale that is probably larger than several of those use cases put together.

Gardner: It strikes me that the ADT problem, if solved, enables so many other major opportunities. We are talking about micro-data centers that provide high-performance computing (HPC) at the edge. We are talking about the right hybrid approach to the data management problem -- what to move, what to keep local, and how to apply a lifecycle approach to it all. So, ADT is really a key use-case scenario.

Why is HPE uniquely positioned to solve ADT in a way that will then lead to so many enabling technologies for other applications?

Longworth: Like you said, the micro-data center -- every autonomous driving car essentially becomes a data center on wheels. So it starts with being able to provide that compute at the edge to process all of that sensor data.

If you look at the HPE portfolio of products, there are very few organizations that have edge compute solutions with that kind of processing power in such small packages. But it's also about being able to wrap it up in not only the hardware, but also the solution on top, the support, and a flexible delivery model.

Lots of organizations want a cloud-like experience, not just in the way they consume the technology, but also in the way they pay for it. So HPE providing everything as-a-service allows you to pay for your autonomous driving platform as you use it. Again, there are very few organizations in the world that can offer that end-to-end value proposition.

Collaborate and corroborate 

Gardner: Iveta, from the data science perspective, why does it take a team-sport, solutions-oriented approach to tackle these major use cases?


Lohovska: I agree with Andy. It's the way we approach those complex use cases, and the fact that you can have them as a service -- not only infrastructure-as-a-service (IaaS) or data-as-a-service (DaaS), but AI modeling-as-a-service (MaaS) as well. You can have a marketplace for models, and being able to plug-and-play different technologies, experiment, and rapidly deploy them allows you to quickly get value out of those technologies. That is something we are doing on a daily basis with amazing experts and people with knowledge of the different layers. They can then attack the complexity of those use cases from each side, because it requires not just data science and the hardware, but a lot of domain-specific expertise to solve those problems. This is something we are looking at and doing in-house.

And I am extremely happy to say that I have the pleasure to work with all of those amazing people and experts within HPE.

Gardner: And there is a great deal more information available on each of these use cases for AI. There are white papers on the HPE Pointnext Services website.

What else can people do, Andy, to get ready for these high-level AI use cases that lead to digital business transformation? How should organizations be setting themselves up on a people, process, and technology basis to become adept at AI as a core competency?

Longworth: It is about people, technology, process, and all of these things combined. You don't go and buy AI in a box. You need a structured approach. You need to understand which use cases give value to your organization, and be able to quickly prototype them, experiment with them, and prove the value to your stakeholders.

Where a lot of organizations get stuck is moving from that prototyping, proof of concept (POC), and proof of value (POV) phase into full production. It is tough getting the processes and pipelines in place that enable you to transition from that small POV phase into a full production environment. If you can crack that nut, then the next use cases you implement, and the next business problems you want to solve with AI, become infinitely easier. It is a hard step to go from POV through to full production because there are so many pieces involved.

You have that whole value chain: grabbing hold of the data at the point of creation, processing that data, and making sure you have the right people and processes around it. And when you come out with an AI solution that gives some form of inference -- some form of answer -- you need to be able to act upon that answer.

You can have the best AI solution in the world that gives you the best predictions, but if you don't build those predictions into your business processes, you might as well have never made them in the first place.
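As a minimal illustration of building a prediction into a business process, the sketch below wires a model score to a concrete action instead of leaving it in a dashboard. The model, the threshold, and the ticketing call are all hypothetical placeholders, not part of any HPE product.

```python
def predict_failure_probability(sensor_window: list[float]) -> float:
    """Placeholder for a trained model's inference call."""
    return sum(sensor_window) / (len(sensor_window) * 100.0)

def raise_work_order(asset_id: str, probability: float) -> None:
    """Placeholder for an integration with a maintenance or ticketing system."""
    print(f"Work order raised for {asset_id} (predicted failure risk {probability:.0%})")

THRESHOLD = 0.8  # illustrative business threshold for acting on the prediction

def act_on_prediction(asset_id: str, sensor_window: list[float]) -> None:
    probability = predict_failure_probability(sensor_window)
    if probability >= THRESHOLD:
        raise_work_order(asset_id, probability)  # the prediction drives a process step

act_on_prediction("pump-17", [90.0, 85.0, 88.0])
```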

Gardner: I’m afraid we will have to leave it there. We have been exploring how major trends in AI and advanced analytics have coalesced into top competitive differentiators for many businesses. And we have learned how AI is indispensable for digital transformation by looking at several prominent use cases and their escalating benefits.


So please join me in thanking our guests, Andy Longworth, Senior Solution Architect in the AI and Data Practice at HPE Pointnext Services. Thank you so much, Andy.

Longworth: Thank you, Dana.

Gardner: And Iveta Lohovska, Data Scientist in the Pointnext Global Practice for AI and Data at HPE. Thank you so much, Iveta.

Lohovska: Thank you.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect Voice of AI Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on how AI and advanced analytics solutions coalesce into top competitive differentiators that prove indispensable for digital business transformation. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.

You may also be interested in: