
Thursday, July 16, 2020

AWS and Unisys Join Forces to Accelerate and Secure the Burgeoning Move to Cloud

https://www.unisys.com/offerings/cloud-and-infrastructure-services/cloudforte/cloudforte-for-aws

A discussion on cloud adoption best practices that help businesses cut total costs, manage operations remotely, and scale operations up and down.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys and Amazon Web Services.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions and you’re listening to BriefingsDirect.

A powerful and unique set of circumstances is combining in mid-2020 to make safe and rapid cloud adoption more urgent and easier than ever.

Dealing with the novel coronavirus pandemic has pushed businesses not only to seek flexible IT hosting models, but to accommodate flexible work, hasten application transformation, and improve overall security while doing so.

This BriefingsDirect cloud adoption best practices discussion examines how businesses plan to further use cloud models to cut costs, manage operations remotely, and gain added capability to scale their operations up and down.


To learn more about the latest on-ramps to secure an agile cloud adoption, please join me now in welcoming our guests, Anupam Sahai, Vice President and Cloud Chief Technology Officer at Unisys. Welcome, Anupam.

Anupam Sahai: Thank you, Dana. It’s good to be here.

Gardner: We’re also joined by Ryan Vanderwerf, Partner Solutions Architect at Amazon Web Services (AWS). Welcome, Ryan.

Ryan Vanderwerf: Hi, Dana. Glad to join.

Gardner: Anupam, why is going to the public cloud an attractive option more now than ever?

Sahai: There are multiple driving factors leading to these tectonic shifts. One is that the whole IT infrastructure is moving to the cloud for a variety of business and technology reasons. And then, as a result, the entire application infrastructure -- along with the underlying application services infrastructure -- is also moving to the cloud.

The reason is very simple because of what cloud brings to the table. It brings a lot of capabilities, such as providing scalability in a cost-effective manner. It makes IT and applications behave as a utility and obviates the need for every company to host local infrastructure, which otherwise becomes a huge operations and management challenge.

So, a number of business and technology factors -- along with the COVID-19 pandemic, which essentially forces us to work remotely -- are driving this shift. Having cloud-based services and applications available as a utility makes them easy to consume and use.

Public cloud on everyone’s horizon 

Gardner: Ryan, have you seen in your practice over the past several months more willingness to bring more apps into the public cloud? Are we seeing more migration to the cloud?

Vanderwerf: We’ve definitely had a huge uptick in migration. As people can’t be in an office, things like virtual workspaces and remote desktops have also seen a huge increase. People are trying to find ways to be elastic, cost-efficient, and make sure they’re not spending too much money.

Following up on what Anupam said, the reasons people are moving to the cloud haven’t changed; they have just been accelerated. People need agility and speedy access to the resources they require, and they need the cost savings that come from not having to maintain data centers themselves.

By being more elastic, they can provision only for what they’re using and not have things running and costing money when they don’t need them. They can also deploy globally in minutes across many regions, which is a big deal and allows people to innovate faster.

And right now, there’s a need to innovate faster, get more revenue, and cut costs – especially in times where fluctuation in demand goes up and down. You have to be ready for it.

Gardner: Yes, I recently spoke with a CIO who said that when the pandemic hit, they had to adjust workloads and move many from a certain set of apps that they weren’t going to be using as much to a whole other set that they were going to be using a lot more. And if it weren’t for the cloud, they just never would have been able to do that. So agility saved them a tremendous amount of hurt.

Anupam, why when we seek such cloud agility do we also have to think about lower risk and better security?

Sahai: Risk and security are critical because you’re talking about commercial, mission-critical workloads with potentially sensitive data. As we move to the cloud, you should think about three different trajectories. And some of this, of course, is being accelerated because of the COVID-19 pandemic.
One of the cloud-migration trajectories, as Ryan said earlier, is the need for elastic computing, cost savings, performance, and efficiencies when building, deploying, and managing applications. But as we move applications and infrastructure to the cloud, that infrastructure falls under what is called the shared responsibility model. The cloud service provider protects and secures the infrastructure up to a certain level, and then customers have a shared responsibility to protect their workloads, applications, and critical data. They also have to comply with the regulations that apply to them.

In such a shared responsibility model, customers need to work very closely with the service providers, such as AWS, to ensure they are taking care of all security and compliance-related issues.
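
To make the customer side of that model concrete, here is a minimal, hedged sketch in Python using boto3: AWS secures the underlying infrastructure, while checking your own S3 buckets for public exposure remains your job. Nothing here is specific to Unisys or CloudForte; the check is one example of customer-side hygiene.

```python
# A sketch of customer-side hygiene under the shared responsibility model:
# flag any S3 bucket whose public access block is missing or incomplete.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)
        settings = cfg["PublicAccessBlockConfiguration"]
        if not all(settings.values()):
            print(f"{name}: public access block incomplete -> {settings}")
    except ClientError:
        # No public access block configured at all -- flag for review.
        print(f"{name}: no public access block configured")
```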

You know, security breaches in the cloud -- while fewer than in on-premises deployments -- are still pretty rampant. That’s because some basic cloud security hygiene issues are still not being taken care of. That’s why solutions have to manage security and compliance for both the infrastructure and the apps as they move from on-premises to the cloud.

Gardner: Ryan, shared responsibility in practice can be complex when it’s hard to know where one party’s responsibility begins and ends. It cuts across people, process, and even culture.

When doing cloud migrations, how should we make sure there are no cracks for things to fall through? How do we make sure that we segue from on-premises to cloud in a way that the security issues are maintained throughout?

Stay safe with best practices

Vanderwerf: Anupam is exactly right about the shared responsibility model. AWS manages and controls the components from the host operating system and virtualization layer down to physically securing the facilities. But it is up to AWS customers to build secure applications and manage their hygiene.

We have programs to help customers make sure they’re using those best practices. We have a well-architected program. It’s available on the AWS Management Console, and we have several lenses if you’re doing specific things like serverless, Internet of things (IoT), or analytics, for example.

Things like that have to be focused toward the business, but solutions architects can help the customer review all of their best practices and do a deep-dive examination with their teams to raise any flags that people might not be aware of and help them find solutions to remedy them.

We also have an AWS Technical Baseline Review that we do for partners. In it we make sure that partners are also following best practices around security and make sure that the correct things are in place for a good experience for their customers as well.

Gardner: Anupam, how do we ensure security-as-a-culture from the beginning and throughout the lifecycle of an application, regardless of where it’s hosted or resides? DevSecOps has become part of what people are grappling with. Does the security posture need to be continuous?

Sahai: That’s a very critical point. But first I want to double-click on what Ryan mentioned about the shared responsibility model. If you look at the overall challenges that customers face in migrating or moving to the cloud, there is certainly the security and compliance part of it that we mentioned.

There is also the cost governance issue and making sure it’s a well-architected framework architecture. The AWS Well-Architected Framework (WAF), for example, is supported by Unisys.

https://www.prnewswire.com/news-releases/new-security-and-optimization-features-in-unisys-cloudforte-bolster-services-delivered-on-amazon-web-services-300967623.html

Additionally, there are a number of ongoing issues around cost governance, security and compliance governance, and optimization of workloads that are critical for our customers. Unisys does a Cloud Success Barometer study every year, and what we find is very interesting.

One thing is clear: about 90 percent of organizations have transitioned to the cloud. So no surprise there. But what we also found is that 60 percent of organizations are unable to complete their move to the cloud, or have put their cloud migrations on hold, because of unexpected roadblocks. That’s where partners like Unisys and AWS come together -- to offer the visibility and solutions needed to address those roadblocks.

Coming back to the DevSecOps question, let’s take a step back and understand why DevOps came into being. It was the migration to the cloud that created the need to break down the silos between development and operations and to deploy infrastructure-as-code. DevOps essentially brings about faster, shorter development cycles; faster deployment; and faster innovation.

Studies have shown that DevOps leads to at least 60 percent faster innovation and turnaround time compared to traditional approaches, not to mention the cost savings and the IT headcount savings when you merge the dev and ops organizations.

But as DevOps goes mainstream, and as cloud-centric applications become mainstream, there is a need to inject security into the DevOps cycle. So, having DevSecOps is key. Rather than letting security professionals form yet another silo, you want to break that silo down so that developers, operations, and security work together as one team.

But we also need to provide tools that are amenable to DevOps processes -- continuous integration/continuous delivery (CI/CD) tools that enable the speed and agility DevOps needs while injecting security, without slowing developers down. It is a challenge, and that’s why the new field of DevSecOps, which injects security and compliance into the DevOps cycle, is so critical.
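
To make that concrete with one possible approach -- a hedged sketch, not a description of CloudForte’s internals -- a CI stage can gate each build on an image scan. The image name below is hypothetical, and the example assumes the open-source Trivy scanner is available on the build agent.

```python
# Gate the build on an image scan: exit non-zero if the image contains
# HIGH or CRITICAL vulnerabilities, so the pipeline stops the release.
# Assumes the Trivy CLI is on the build agent's PATH.
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:1.4.2"  # hypothetical image

result = subprocess.run(
    ["trivy", "image", "--exit-code", "1",
     "--severity", "HIGH,CRITICAL", IMAGE]
)

if result.returncode != 0:
    print("Security gate failed: high-severity findings block the release.")
    sys.exit(1)

print("Security gate passed: image moves to the next pipeline stage.")
```

Because the scan fails the build itself rather than blocking a release at the end, developers get security feedback at DevOps speed.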

Gardner: Right, you want to have security without giving up agility and speed. How have Unisys and AWS come together to ease and reduce the risk of cloud adoption while smoothing the on-ramps to the cloud?

Smart support on the cloud journey

Sahai: Unisys in December 2019 announced new CloudForte capabilities for the AWS cloud that help customers adopt cloud without worrying about security and compliance.

CloudForte today provides a comprehensive solution to help customers manage their cloud journeys, whether greenfield or brownfield; and there is hybrid cloud support, of course, for the AWS cloud, along with multi-cloud support from a deployment perspective.

The solution combines products and services that enable three primary use cases. The first is cloud migration, as we talked about; the second is apps migration using DevSecOps. We’ve codified that in terms of best practices, reference architectures, and well-architected principles, and we have wrapped that in advisory and deployment services as well.
The third use case is around cloud posture management, which is understanding and optimizing existing deployments, including hybrid cloud deployments, to ensure you’re managing costs, managing security and compliance, and also taking care of any other IT-related issues around governance of resources to make sure that you migrate to the cloud in a smart and secure manner.

Gardner: Ryan, why did AWS get on-board with CloudForte? What was it about it that was attractive to you in helping your customers?

Vanderwerf: We are all about finding solutions that help our customers and enabling our partners to help their customers. With the shared responsibility model, part of the burden is on the customer, and CloudForte has really good risk management and a portfolio of applications and services to help people get ahold of that responsibility themselves.

Instead of customers trying to go it alone -- or just following general best practices -- Unisys also has the tooling in place to help customers. That’s pretty important because, when it comes to DevSecOps, people worry about a lack of business agility and security agility, and they face risks around change to their businesses. People fear that.

These tools have really helped customers manage that journey. Customers get a good feeling about being secure and compliant, and the dashboards inside the platform are very informative, as a matter of fact.

Gardner: Of course, Unisys has been around for quite a while. They have had a very large and consistent installed base over the years. Are the tooling, services, and value in CloudForte bringing in a different class of organization, or different parts of organizations, into AWS?

Vanderwerf: I think so, especially in the enterprise area, where they have a lot of things to wrangle on the journey to the cloud -- and it’s not easy. When you’re migrating as much as you can to a cloud setting -- seeking to keep control over assets and making sure there are no rogue things running -- it’s a lot for an enterprise IT manager to handle. So, the more tools they have in their tool belt to manage that, the better, compared with trying to cook up their own stuff.


Gardner: Anupam, did you have a certain type of organization, or part of an organization, in mind when you crafted CloudForte for AWS?

Sahai: Let’s take a step back and understand the kind of services we offer. Our services are tailored and applicable to both enterprises and the public sector. We offer advisory services to begin with, backed by products. The CloudForte Navigator product allows us to assess the current posture of the customer and understand the application capabilities the customer has, and whether a transformation is needed -- all driven, of course, by the business outcomes the customer desires.

https://securitybrief.eu/story/unisys-delivers-new-cloud-security-features-on-aws

Second, through CloudForte we bring best practices, reference architectures, and blueprints for the various customer journeys I mentioned earlier. For greenfield or brownfield opportunities, whatever the stage of adoption, we have created templates to help with the specific migration and customer journey.

Once customers are ready to be on-boarded, we enable DevSecOps using CI/CD tools and best practices to ensure customers use a well-architected framework. We also have a set of Unisys-provided accelerators that enable customers to get on-boarded with guardrails in place. In short, the security policies, compliance policies, organizational framework, and organizational architectures are all reflected in the deployment.

Then, once it’s up and running, we manage and operate the hybrid cloud security and compliance posture to ensure that any deviations or drifts are monitored and remediated, so that an acceptable posture is continuously maintained.

Finally, we also have AIOps capabilities, which deliver the AI-enabled outcomes the customer is looking for. We use artificial intelligence and machine learning (AI/ML) technologies to optimize resources and drive cost savings through resource optimization. We also have an incident management capability that brings down costs dramatically using some of those analytics and AIOps capabilities.

So our objective is to drive digital transformation for customers using the combination of products and services that CloudForte has, working in close conjunction with what AWS offers, so that together we create complementary offerings that are compelling from a business outcomes perspective.

Gardner: The way you describe them, it sounds like these services would be applicable to almost any organization, regardless of where they are on their journey to the cloud. Tell us about some of the secret sauce under the hood. The Unisys Stealth technology, in particular, is unique in how it maintains cloud security.

Stealth solutions for hybrid security 

Sahai: The Unisys Stealth technology is very compelling, especially for hybrid cloud security. As we discussed earlier, the shared responsibility model requires customers to do their part to make sure that workloads and cloud infrastructure are compliant and secure.

And we have a number of tools in that regard. One is the CloudForte Cloud Compliance Director solution, which allows you to assess and manage your security and compliance posture for the cloud infrastructure. So it’s a cloud security posture management solution.

Then we also have the Stealth solution, essentially a zero trust, micro-segmentation capability that leverages identity, or user roles, in an organization to establish a community that’s trusted and is capable of doing certain actions. It creates communities of interest that it allows and secures through a combination of micro-segmentation and identity management.

https://www.marketscreener.com/UNISYS-CORPORATION-14744/news/Unisys-Achieves-Amazon-Web-Services-Managed-Service-Provider-and-Amazon-Web-Services-Well-Architec-30855799/

Think of that as a policy management and enforcement solution that essentially manipulates the OS native stacks to enforce policies and rules that otherwise are very hard to manage.

If you take Stealth and marry that with CloudForte compliance, some of the accelerators, and Navigator, you have a comprehensive Unisys solution for hybrid cloud security, both on-premises and in the AWS cloud infrastructure and workloads environment.
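
Stealth itself is proprietary, so what follows is a rough, hedged analogy only: the community-of-interest idea resembles what a Kubernetes NetworkPolicy does at the network layer, where only workloads carrying the same community label may talk to one another. The labels and names below are hypothetical, and this is not how Stealth is implemented.

```python
# An analogy to communities of interest, expressed as a Kubernetes
# NetworkPolicy manifest built as a Python dict: only pods labeled as
# members of the "payments" community may reach the payments workloads.
import json

payments_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "payments-community", "namespace": "prod"},
    "spec": {
        # The workloads being protected.
        "podSelector": {"matchLabels": {"community": "payments"}},
        "policyTypes": ["Ingress"],
        # Admit traffic only from members of the same community.
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"community": "payments"}}}]}
        ],
    },
}

# Emit the manifest; it could then be piped to `kubectl apply -f -`.
print(json.dumps(payments_policy, indent=2))
```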

Gardner: Ryan, it sounds like zero trust and micro-segmentation augment the many services that AWS already provides around identity and policy management. Do you agree that the zero trust and micro-segmentation aspects of something like Stealth dovetail very well with AWS services?

Vanderwerf: Oh, yes, absolutely. And in addition to that, we have a lot of other security tools, like AWS WAF, AWS Shield, Security Hub, Macie, IAM Access Analyzer, and Inspector. And I am sure under the hood they are using some of these services directly.

The more power you have, the better. And it’s tough to manage. Some people are just getting into cloud, and they have challenges. It’s not always technical; sometimes it’s communications issues at a company, a lack of sponsorship or resource allocation, or undefined key performance indicators (KPIs). All of these things -- even just timing -- are important to a security situation.

Gardner: All those spinning parts, those services -- that’s where professional services come in, so that organizations don’t have to feel like they are doing it alone. How do professional services and technical support fit into helping organizations on these cloud journeys?

Sahai: Unisys is trusted by our customers to get things right. So we say that we do cloud correctly, and we do cloud right, and that includes a combination of trusted advisory services. That means everything from identifying legacy assets, to billing, and to governance, and then using a combination of products and services to help customers transform as they move to the cloud.

Our cloud-trained people and expertise speed up migrations, give visibility, and provide operational improvements. We are thereby able to do cloud right, and in a secure fashion, by establishing security practices and building trust through a combination of micro-segmentation, security and compliance operations, and AIOps. That is the combination of products and services we offer today.

And our customers tell us we are rated very highly -- 95 percent-plus in terms of customer satisfaction. It’s a testament to the fact that our professional services -- along with our products -- complement the AWS services and products that customers need to deliver their business outcomes.

Gardner: Anupam, do you have any examples of organizations that leveraged both AWS and Unisys CloudForte? What have they been doing and what did they get from it?

Student success supported 

Sahai: I have a number of examples where a combination of CloudForte and AWS deployments are happening. One is right here where I live in the San Francisco Bay Area. The business challenge they faced was to enhance the student learning experience and deliver technology services critical to student success and graduation initiatives. And given the COVID-19 scenario, you can understand why cloud becomes an important factor in that.

Unisys cloud and infrastructure services, using CloudForte, helped them deploy a hybrid cloud model with AWS. We had Ansible for automation, ServiceNow for IT service management (ITSM), and AIOps, and we deployed LogRhythm and a portfolio of tools and services.

They were then able to accelerate their capability to offer critical administrative services, such as student scheduling and registration, to about half-a-million students and 52,000 faculty and staff members across 23 campuses. It delivered 30 percent better performance while realizing about 33 percent cost savings and 40 percent growth in usage of these services. So, great outcomes and great cost savings -- a reduction of about $4.5 million in compute and storage costs and about $3 million in cost avoidance.
This is an example of a customer who leveraged the power of the AWS cloud and the CloudForte products and services to deliver those business outcomes -- a win-win situation for us.

Gardner: Ryan, what do you expect for the next level of cloud adoption benefits? Is the AIOps something that we are going to be doubling-down on? Or are there other services? How do you see the future of cloud adoption improving?

The future is integrated 

Vanderwerf: It’s making sure everything is able to integrate. For example, for hybrid cloud situations we now have AWS Outposts. People can run a rack of servers in their own data center and be connected directly to the cloud.

It doesn’t always make sense for everything to go to the cloud. Machinery running analytics, for example, may have very low latency requirements. You can still write cloud-native applications that work with AWS and run those apps locally.

Also, AIOps is huge because so many people are doing AI/ML in their workloads, from assessing security posture threats to detecting whether machines are breaking down. There are so many options in data analytics, and then you wrangle all of these things together with data lakes. Definitely, the future is about better integrating all of these things.

AI/MLOps is really popular now because there are so many data scientists and people integrating ML into things. They need some sort of organizational structure to keep that organized, just as CI/CD did for DevOps. And all of those areas continue to grow. At AWS, we have 175-plus services, and new ones are coming out all the time. I don’t see that slowing down anytime soon.

Gardner: Anupam, for your future outlook, to this point that Ryan raised about integration, how do you see organizations like Unisys helping to manage the still growing complexity around the adoption and operations in the cloud and hybrid cloud environments?

Sahai: Yes, that is a huge challenge. As Ryan mentioned, hybrid cloud is here to stay. Not everything will move to the cloud. While cloud migration trends will continue, a core set of apps will stay on-premises. So leveraging AWS Outposts, as he said, to help with hybrid cloud journeys will be important. And Unisys has hybrid cloud and multi-cloud offerings that we are certainly committed to.

The other thing is that security and compliance issues are not going away, unfortunately. Cloud breaches are out there, and so there is a need to be proactive about managing your security and compliance posture. That’s another area where our customers are going to work together with AWS and Unisys -- to fortify not just their defenses, but also their offense, being proactive in dealing with threats and breaches and preventing them.

The third area is around AIOps and this whole notion of AI-enabled CloudForte. We see AI and ML permeating every part of the customer journey -- not just in AIOps, the operations and management piece, which is a critical part of what we do, but also in enabling the customer journeys themselves through prediction.

So, let’s say a customer is trying to move to the cloud. We want to be able to predict what their journey will look like and to be proactive about anticipating and remediating issues that might come up.

And, of course, AI is fueled by the data revolution -- the data lakes and the data buses that we have today to transport data seamlessly across applications and hybrid cloud infrastructures -- and it ties all of this together. You have the app migration, CI/CD, and DevSecOps capabilities that are part of the CloudForte advisory and product services.

We are enabling customers to move to the cloud without compromising speed, agility, security, or compliance -- whether they are moving infrastructure to the cloud using infrastructure as code, or moving applications to the cloud by leveraging the microservices and cloud-native infrastructure that AWS provides, Kubernetes included.

We have support for a lot of these capabilities today, and we will continue to evolve them to make sure that no matter where the customer is in their journey to the cloud -- whatever the stage of evolution -- we have a compelling set of products and services that customers can use to get to the cloud and stay there with the help of Unisys and AWS.

Gardner: I’m afraid we will have to leave it there. You have been listening to a sponsored BriefingsDirect discussion on the latest on-ramps to secure and agile cloud adoption.


And we have learned how a partnership between AWS and Unisys allows businesses to increasingly go to cloud models, cut total costs, manage operations better, and gain added agility for scaling up and down.

So please join me now in thanking our guests, Anupam Sahai, Vice President and Cloud Chief Technology Officer at Unisys. Thank you, Anupam.

Sahai: Thank you, Dana. It was great talking to you and Ryan, and I appreciate the opportunity to be here.

Gardner: And we have also been here with Ryan Vanderwerf, Partner Solutions Architect at AWS. Thank you, Ryan.

Vanderwerf: Thank you, Dana and Anupam, it’s been great having a chat with you.

Sahai: Same here, friend.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect cloud computing adoption best practices discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Unisys- and AWS-sponsored BriefingsDirect discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Unisys and Amazon Web Services.

A discussion on cloud adoption best practices that help businesses cut total costs, manage operations remotely, and scale operations up and down. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Friday, March 06, 2020

As Containers Go Mainstream, IT Culture Should Pivot to End-to-End DevSecOps

https://www.hpe.com/us/en/solutions/container-platform.html

A discussion on the escalating benefits from secure and robust container use and how security concerns need to be addressed early and often across the new end-to-end container deployment spectrum.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of Innovation podcast series.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the latest insights into modern IT deployment architecture strategies.

Container-based deployment models have rapidly gained popularity from cloud models to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.

Yet in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum -- and that means security addressed during development and deployment under the rubric of DevSecOps best practices.


Stay with us now as we examine the escalating benefits that come from secure and robust container use with our guest, Simon Leech, Worldwide Security and Risk Management Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. Welcome, Simon.

Simon Leech: Hey, Dana. Good afternoon.

Gardner: Simon, are we at an inflection point where we’re going to see containers take off in the mainstream? Why is this the next level of virtualization?

Mainstream containers coming

Leech: We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.

https://www.linkedin.com/in/simonleech/
One of the things that always held people back a little bit with virtualization was that you were always reliant on an operating system (OS) to manage the applications that sit on top of it and the application code you deploy to that environment.

But what we have seen with containers is that, as everything starts to follow a cloud-native approach, we deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to address development in these small, microservice-based or module-based approaches.

So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best opportunities to adopt containers within their own application development environments.

Gardner: And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment -- not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers -- as not just perhaps a proof of concept (POC) or test environment, but as ready for the production mainstream?

Leech: Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer’s project on his laptop. So the developer goes out, identifies a container-based technology as something interesting to play around with, and, as time goes by, realizes he can actually make a lot of progress by developing his applications using a container-based architecture.

What that means from an organizational perspective is that this is often done under the radar of management. One of the things we discuss with our customers when addressing DevSecOps and DevOps initiatives is to make sure that you get that buy-in from the executive team, so you can start to enable some top-down integration.

Don’t just see containers as a developer’s laptop project; look at them broadly and understand how you can integrate them into the overall IT processes that your organization operates with. And that does require a good level of buy-in from the top.

Gardner: I imagine this requires a lifecycle approach to containers thinking -- not just about the development, but in how they are going to be used over time and in different places.

Now, 451 Research recently predicted that the market for containers will hit $2.7 billion this year. Why do you think that the IT operators -- the people who will be inheriting these applications and microservices -- will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?

Quick-change code artists

Leech: One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. In a traditional application development environment, a developer who needed to change some code would have to request downtime to update the complete application; with a container-based architecture, you only have to update parts of it.

So, it allows you to make many more changes than you previously would have been able to deliver to the organization -- and it allows you to address those changes very rapidly.

Gardner: How does this allow for a more common environment to extend across hybrid IT -- from on-premises to cloud to hybrid cloud and then ultimately to the edge?

Leech: Well, applications developed in containers with a cloud-native approach are typically very portable. You aren’t restricted to a particular OS version, for example; the container itself runs on top of any OS of the same genre. Obviously, you can’t run a Windows container on top of a Linux OS, or vice versa.

But within the general Linux space there is broad compatibility. So it is very easy for containers to be developed in one environment and then released into different environments.

Gardner: And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multi-cloud extensibility benefit?

Leech: Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn’t matter if that’s a Google Cloud Platform instance, a Microsoft Azure instance, or Amazon Web Services (AWS).

Gardner: We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?

What are the challenges to providing security, specifically making sure that the containers are not going to add risk -- and, in fact, improve the deployment productivity of organizations?

Make security a business priority 

Leech: When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers, and the security concerns and constraints that come along with that. And the second is much more around the technology or technical challenges.

If you begin by looking at the business challenges of adopting containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support and move past the idea of the developer doing everything on his laptop. We also need to train our coders on the need for secure coding.

A lot of developers have as their main goal producing high-quality software fast, and they are not trained as security specialists. It makes a lot of sense to put an education program in place that trains those internal coders to think a little bit more about security -- especially in a container environment, where you have fast release cycles and sometimes the security checks get missed or aren’t properly instituted. It’s good to start with a very secure baseline.

And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development team, your DevOps development teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.

Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it’s all going to go into production someone calls security. Security comes along and says, “Hey, have you done risk assessments on this?” And that ends up delaying the project.


If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. It allows us to reduce delays that you might have with security testing.

At the same time, it allows us to shift our testing left. In a traditional waterfall model, testing happens right before the product goes live; in a DevOps or DevSecOps model, it’s much better to embed security best practices and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.

The last point around the business view is that, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach -- or even a security self-service approach -- helps them. A secure registry, for example, provides known good instances of container images, and infrastructure and compliance code lets them follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.

Gardner: Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?

Updates, the container way 

Leech: A lot of the principles are the same. So, there’s obviously still a need for network security tools. There’s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the shared kernel.

An interesting white paper has been released by the National Institute of Standards and Technology (NIST) in the US, SP 800-190, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.

So, when we’re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.

One of the important things to understand when we’re talking about securing containers is that we have a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine (VM). We deploy it to the hypervisor. Then we realize that if there is a missing patch, or a required update, that we roll that update out using whatever patch management tools we use.

In a container environment, we take a completely different approach. We never update running containers. The source of your known good image is your registry. The registry is where we update containers and keep updated versions of them, and we use the container orchestration platform to make sure that the next time somebody calls a new container, it’s launched from the new container image.

It’s important to remember we don’t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that’s really a change in the mindset for a lot of security professionals, because they think, “Okay, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,” or whatever, and, “I’m going to scan the environment. I’m going to find out what’s missing, and then I’m going to deploy patches to plug in the risk.”

So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline, and into the container orchestration tools we use, to address that needed change in behavior.
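
As a minimal, hedged sketch of that registry-driven flow -- using the official Kubernetes Python client, with hypothetical deployment and image names -- an update means pointing the deployment at the newly scanned image in the registry and letting the orchestrator replace the running containers:

```python
# Roll a deployment onto an updated image from the registry, rather than
# patching running containers in place. Assumes kubeconfig credentials
# are available and the "kubernetes" client library is installed.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {"template": {"spec": {"containers": [
        # The orchestrator re-launches containers from this new image.
        {"name": "web", "image": "registry.example.com/web:1.2.4"}
    ]}}}
}
apps.patch_namespaced_deployment(name="web", namespace="prod", body=patch)
```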

Gardner: It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?

Simplify app separation 

Leech: Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.

One of the challenges we’ve seen for a long time in software development, for example, is that developers take shortcuts by hard-coding clear-text passwords into the code, because it’s easier. And, yeah, that’s understandable. You don’t need to worry about managing or remembering passwords.

But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can now request the secret from the secrets management facility that the orchestration platform offers to you.
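
As a small, hedged sketch of the difference -- the path and secret name below are hypothetical, following the way Kubernetes mounts Secret volumes into a container:

```python
# Instead of a hard-coded credential, read the secret the orchestration
# platform injects at runtime; the code base never contains the value.
from pathlib import Path

# Anti-pattern: DB_PASSWORD = "hunter2"  # clear text in source control

def db_password() -> str:
    # The orchestrator mounts the secret as a file inside the container.
    return Path("/var/run/secrets/app/db-password").read_text().strip()
```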

These orchestration tools also give you the capability to separate container workloads of differing sensitivity levels within your organization. For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because even though a container environment can use separate namespaces to isolate individual container architectures from one another, it’s still a good security best practice to run those on completely different physical hosts -- or, in a virtualized container environment, on top of different VMs. This provides physical separation between the applications. Very often the orchestrators will let you provide that functionality without having to think too much about it.
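
One common way an orchestrator expresses that separation -- a hedged sketch with hypothetical labels and image names, not a prescription -- is a node selector that pins sensitive workloads to a dedicated pool of hosts:

```python
# A pod spec, built as a Python dict, that the scheduler may place only
# on nodes labeled for financial workloads, so these containers never
# share a physical host (and its kernel) with web-facing containers.
finance_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ledger", "labels": {"sensitivity": "finance"}},
    "spec": {
        # Restrict scheduling to the dedicated finance node pool.
        "nodeSelector": {"workload-tier": "finance"},
        "containers": [
            {"name": "ledger", "image": "registry.example.com/ledger:2.0"}
        ],
    },
}
```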

Gardner: There is another burgeoning area where containers are being used -- not just in applications and runtime environments, but also for persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.

How should the all-important security around data caches and different data sources enter into our thinking?

Save a slice for security 

Leech: Because containers are temporary instances, it’s important that you’re not actually storing any data within the container itself. Just as importantly, you shouldn’t store any of that data on the host OS either.

It’s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have HPE Nimble Storage and HPE Primera. They have the capability, through plug-ins, to interact with the container environment and provide you with persistent storage that remains even as containers are being provisioned and de-provisioned.

So the container itself, as I said, doesn’t store any of the data, but a well-architected application infrastructure will allow you to store it on a third-party storage array.
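
As a hedged sketch of how an application asks for such array-backed storage in Kubernetes terms -- the claim and storage-class names are hypothetical, and the class would be provided by the array vendor’s plug-in or CSI driver:

```python
# A PersistentVolumeClaim keeps state outside the container: the storage
# plug-in satisfies the claim from the external array, and the data
# survives containers being provisioned and de-provisioned.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "array-backed",  # supplied by the CSI driver
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

# Referenced from a pod spec as a volume, so no data lives in the image.
pod_volume = {"name": "orders",
              "persistentVolumeClaim": {"claimName": "orders-data"}}
```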

Gardner: Simon, I’ve had an opportunity to read some of your blogs and one of your statements jumped out … “The organizational culture still lags behind when it comes to security.” What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?

Leech: It’s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don’t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a two-pizza team.


A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that applies equally to development teams when you’re working on container projects. They don’t need to be big; they don’t need to be massive.

It’s important to make sure there’s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That’s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.

Gardner: We’ve talked about people and process. There is also, of course, that third leg of the stool -- the technology. Are the people building container platforms like HPE thinking along these lines as well? What does the technology, and the way it’s being designed, bring to the table to help organizations be DevSecOps-oriented?

Select specific, secure solutions 

Leech: There are a couple of ways that technology solutions are going to help. The first are the pre-production commercial solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.

A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It’s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like Aqua Security or Twistlock, which was recently acquired by Palo Alto Networks, I believe.

And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.

Very few of these capabilities are actually built into the orchestration platforms. They tend to come from third parties such as Sysdig, Guardicore, and NeuVector. And then there’s another bucket of more open-source solutions. These focus on a single function in a very cost-effective way and are typically open source community-led. These include solutions such as SonarQube, Platform as a Service (PaaS), and Falco, which is the open source project that Sysdig runs. You also have Docker Bench and Calico, a networking security tool.

But no single solution covers all of an enterprise customer’s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who’s going to be the best partner to deliver those technology solutions for you.

Gardner: And how are you designing Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?

Leech: We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of the BlueData acquisition. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.

I’m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we’d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren’t necessarily aware of all of the changes they’re going to have to make to their IT model.

At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.

We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we’ve done to build our own container security solution reference architecture.

Gardner: Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?

Edge elevates container benefits 

Leech: A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagements where a customer wants to understand how to take traditional applications and modernize or rearchitect them into cloud-native, container-based applications.

There’s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.

Gardner: And over the past few years, we’ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.

But one of the concerns, if we move more compute to the edge, is whether security risks will go up. Is there something about doing container security properly that will make that edge more robust and more secure?

Leech: Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you’re talking about edge deployments, that rings very true.

From a resource perspective, when you’re talking about something like autonomous driving, it’s going to be a lot lighter to have a shared kernel rather than lots of VM instances running, for example.

From a strictly security perspective, you have something very repeatable compared with a traditional virtualized approach to application delivery -- if you deal with container lifecycle management effectively, involve the security guys early, have a process around releasing, updating, and retiring container images in your registry, have a process around introducing security controls and code scanning in your software development lifecycle, and make sure that every container that gets released is signed with an appropriate enterprise signing key.
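
As a small, hedged sketch of that signing step -- using Docker Content Trust as one possible mechanism, with a hypothetical registry and tag, and assuming the repository signing keys are already set up:

```python
# Sign the released image tag at push time: with DOCKER_CONTENT_TRUST
# set, `docker push` signs the tag with the repository's signing key.
import os
import subprocess

env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
subprocess.run(
    ["docker", "push", "registry.example.com/web:1.2.4"],
    env=env,
    check=True,  # fail the release step if the signed push fails
)
```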

That’s one of the big benefits of containers. It’s very much a declarative environment -- something you prescribe: this is how it’s going to look, and it’s going to be repeatable every time you deploy it. Whereas in a VM environment you get a lot of VM sprawl, and a lot of changes accumulate across the different platforms as different people connect and change things along the way for their own purposes.

There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.

Gardner: What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?

Begin with risk evaluation 

Leech: The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can’t modernize, and they’re going to remain in your traditional data center for a number of years.

I suggest looking for the projects that will give you the quickest wins, and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you do the appropriate risk assessments, and work with the services organizations that can help you. The advantage of a services organization is that they have probably been there with another customer before, so they can use the best practices and experience they have already gained to help your organization adopt containers.

Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about naming: should we call it DevSecOps, SecDevOps, or DevOpsSec? My personal opinion is to call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations -- and that’s really where it belongs.

In terms of assets, there is plenty of information out there; a Google search finds you a lot. But as I mentioned earlier, the NIST white paper SP 800-190 is a great starting point to understand not only the container security challenges but also what containers can deliver for you.

At the same time, at HPE we are also committed to delivering relevant information to our customers. If you look on our website and also our enterprise.nxt blog site, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.

Gardner: I’m afraid we will have to leave it there. We have been exploring how container-based deployment models have gained popularity -- from cloud models to corporate data centers. And we have learned how, in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum.

So please join me in thanking our guest, Simon Leech, Worldwide Security and Risk Management Practice at HPE Pointnext Services. Thank you so much, Simon.

Leech: Thanks for having me.


Gardner: I learned a lot. And thanks as well to our audience for joining this sponsored BriefingsDirect Voice of Innovation discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-supported discussions.

Thanks again for listening. Please pass this along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

A discussion on the escalating benefits from secure and robust container use and how security concerns need to be addressed early and across the new end-to-end container deployment spectrum. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.
