Tuesday, August 23, 2022

How Deep Observability Powers Strong Cybersecurity and Network Insights Across Complex Cloud Environments

Transcript of a discussion on how new advances in deep observability provide powerful access and knowledge about multi-cloud and mixed-network behaviors. 

Listen to the podcast. Find it on iTunes. Download the transcript. View the video. Sponsor: Gigamon.

 

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. The growing prevalence of complex multi- and hybrid-cloud environments has opened a Pandora’s Box of unseen risks around security and performance.

 


But unlike when IT and network operators had the tools and access to track their own internal systems and data, the mixed-cloud model of today is much harder to know and secure. Pandora’s Box is open but observing what’s going on in and around it is cloaked by inadequate means to gain actionable insights amid all the distributed variables.

 

Enter deep observability and its capabilities, which are designed to provide rich access to multi-cloud and mixed-network behaviors. Such observations and data gathering can be analyzed to rapidly secure end-to-end applications and protect sensitive data.

 

Stay with us now as we explore the latest advances around deep observability and how a neutral deployment approach for observation technology spans more infrastructure and services to best protect and accelerate digital business success.

 

To learn how deep observability puts cloud chaos and hard-to-know risks back under control, please join me now in welcoming our guest, Shane Buckley, President and CEO of Gigamon. Welcome, Shane.

 

Shane Buckley: Thanks, it’s great to be here.

 

Gardner: Shane, what makes knowing and securing today’s complex cloud activities especially challenging?

 

Buckley: That's a great opening question. Over the last half-decade or more, we've seen organizations want the flexibility to choose where they deploy their workloads. Traditionally, workloads were deployed in the data center, behind a hard perimeter with lots of security, and compliance needs were met within the organization.

Then came the desire to create more flexible workloads, to run faster, to scale better, and also to reduce cost. The cloud model offered ways to gain these great advantages. But, as we often say here at Gigamon, the cloud is simple -- until it isn’t.

And so now organizations are looking at deploying more workloads in the public clouds, as well as with colocation providers and within private-cloud environments by leveraging technology such as VMware. We are also seeing the emergence of containers and Kubernetes as a technique to provide better automation, higher scalability, and lower cost.

 

The cloud conundrum

 

The great flexibility that the cloud provides is very positive for companies. It allows them to move faster. And that's essential in the era of digital transformation because more organizations, driven by the COVID-19 pandemic, want their applications to flexibly reach more customers through remote access, handheld devices, mobile phones, and computers.

 

The snag is that the security footprint doesn't transfer as straightforwardly as the workload does when moving from the protected data center to a shared cloud environment. This is the conundrum companies face today. How do they make sure they can run their apps fast, stay secure, and innovate? These requirements are at loggerheads with each other. And that's one of the major challenges that the Gigamon team's solutions address.

 

Gardner: There are always trade-offs when adopting technology, of course, but when we’re forced to move quickly, the trade-offs can become riskier. When businesses could control their network perimeter, they knew what was coming and going. Now, we must take the good with the bad traffic. So, if you can’t control the perimeter, how can you at least moderate the risk?

 

Buckley: For many years, technologies such as observability have been used for application performance monitoring. Observability, of course, is the technique of looking at an application's performance remotely by leveraging metrics, events, logs, and traces -- commonly called MELT data -- and it has been very effective.
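For readers new to the MELT shorthand, here is a minimal, purely illustrative sketch of what the four record types might look like; the field names are hypothetical and not tied to Gigamon or any particular observability product.

```python
# Purely illustrative: minimal examples of the four MELT record types
# (metrics, events, logs, traces).
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class Metric:          # a numeric measurement sampled over time
    name: str          # e.g., "http.server.latency_ms"
    value: float
    timestamp: float = field(default_factory=time.time)

@dataclass
class Event:           # a discrete, significant occurrence
    kind: str          # e.g., "deployment" or "config_change"
    detail: str
    timestamp: float = field(default_factory=time.time)

@dataclass
class LogLine:         # free-form text emitted by the application
    level: str         # "INFO", "WARN", "ERROR"
    message: str
    timestamp: float = field(default_factory=time.time)

@dataclass
class Span:            # one hop of a distributed trace
    trace_id: str
    operation: str
    duration_ms: float
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])

if __name__ == "__main__":
    print(Metric("http.server.latency_ms", 42.0))
    print(Span(trace_id=uuid.uuid4().hex, operation="GET /checkout", duration_ms=8.3))
```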

When you shift an application to the cloud, you don't have the same controls you enjoy when the application sits within your own infrastructure and you have control from the network layer right up to the app layer.

The issue, though, is one of security. If you want to secure your applications -- if you want to take a workload from a protected data center where you have layers and layers of security, because security has always been about defense in depth -- and you want to shift that application into a cloud-based environment, you don't have the same controls that you enjoy when the application sits within your own infrastructure and you have control from the network layer right up to the application layer.

 

So, the big question for chief information security officers (CISOs) and security professionals today is, “How do I secure the applications I have deployed to give the organization the flexibility, but maintain the security posture and compliance?” It’s become the number-one issue that CISOs face today as they try to support the business and the organization’s desire to run fast and innovate. The missing part is how to stay secure. It’s really, really complex.

 

Gardner: Now, ideally you should be able to attain the same level of control, visibility, and security in the cloud deployments that you had on-premises. Is that not ever going to be possible? Isn’t it simply a matter of putting the right technology in place?

 

Buckley: Traditionally, organizations have used layered defense tools such as firewalls, web application monitoring technologies, data leakage-prevention technologies, and the capability to encrypt and decrypt traffic streams. Yet more than 90 percent of the threats in organizations sit inside these encrypted traffic streams, which are largely opaque to the tools.

 

As one moves to the cloud environment, this gets a lot more complicated because you don’t own the network. The network is owned by the cloud provider. And so how, in a public cloud environment specifically, or as one deploys via containers, can you see inside and see what’s happening?

 

Observe deeply to stay out of deep trouble

 

The emerging technology to fix this issue is deep observability. We refer to it as the deep observability pipeline. Deep observability is about taking the technique of observability to the applications by looking deeply inside the flow of the network traffic, because logs and traces are mutable: they can be turned off.

 

And, in fact, in many environments where applications are compromised, the nefarious actor will either turn logging off or, more perniciously, overwrite the log. In that way, the security operations center (SOC) is fooled into thinking the application is performing as usual, because the logs have been muted.

 

Network traffic is immutable. It cannot be changed. If you take a hard copy of traffic going to or from an application or server and you diagnose it, you know exactly what’s happening within that traffic flow. The ability to get to that level of granularity, that level of fidelity in terms of what’s happening inside the application -- and extract key information, which then you can send to the tools -- is really, really powerful.

 

It's a technique that we at Gigamon have used for more than 15 years: the ability to extract network traffic insights and send them to advanced tools in a consistent way, the same as we have done when the workload sits in a data center. Now, one can do the same whether an application or workload sits in a data center, moves inside a container, runs inside a private cloud, or runs in any public cloud. Anywhere across the hybrid-cloud continuum, we have a consistent approach to how we implement your security insights layer.
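To make the pipeline idea concrete, here is a minimal, illustrative Python sketch -- not Gigamon's implementation -- of extracting flow metadata from a raw Ethernet/IPv4 frame and forwarding it as a compact record; the record fields and the print-based forwarder are assumptions for the example.

```python
# Illustrative sketch of the deep observability idea: keep only flow metadata
# from a raw packet and discard the payload before forwarding to an analysis
# tool such as a SIEM. Not a production parser.
import json
import socket
import struct
from typing import Optional

def packet_to_metadata(frame: bytes) -> Optional[dict]:
    """Extract a 5-tuple plus frame size from a raw Ethernet/IPv4 frame."""
    if len(frame) < 34:                          # Ethernet (14) + minimal IPv4 (20)
        return None
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                      # only IPv4 is handled in this sketch
        return None
    ip = frame[14:]
    ihl = (ip[0] & 0x0F) * 4                     # IPv4 header length in bytes
    proto = ip[9]
    src_ip = socket.inet_ntoa(ip[12:16])
    dst_ip = socket.inet_ntoa(ip[16:20])
    src_port = dst_port = None
    if proto in (6, 17) and len(ip) >= ihl + 4:  # TCP or UDP ports follow the IP header
        src_port, dst_port = struct.unpack("!HH", ip[ihl:ihl + 4])
    return {
        "src_ip": src_ip, "dst_ip": dst_ip,
        "src_port": src_port, "dst_port": dst_port,
        "protocol": proto,
        "frame_bytes": len(frame),               # size is kept; the payload itself is dropped
    }

def forward_to_tool(record: dict) -> None:
    # Stand-in for shipping the record to a SIEM or NDR tool; newline-delimited
    # JSON is a common ingestion format.
    print(json.dumps(record))
```

The point of the sketch is that the payload is discarded while the immutable facts about the flow -- who talked to whom, over which protocol, and how much -- are kept for the tools.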
 

Gardner: It's one thing to be able to capture and observe; it's another thing to be able to deal with a fire hose of data. How, during the past 15 years, have we gotten better at handling, in near-real-time, these massive amounts of data in and around networks and cloud environments?

 

Buckley: You raise an interesting point. Networks are operating faster and faster. More and more applications are talking to each other, particularly using technologies like microservices, where one application may make calls to multiple other applications -- often referred to as east-west traffic. That east-west traffic is no longer confined to the physical data center; it could be spread across multiple cloud service providers, or across multiple different domains; who knows?

How do you capture all the east-west traffic? Cyber professionals will tell you that lateral movement is how nefarious actors and hackers get access across the estate. You need to catch them from an east-west perspective.

As more of this traffic exists, how do you capture all the information about it? Traditional firewalls, even cloud-based firewalls, typically capture north-south traffic. How do you capture all this east-west traffic, too? Because, as cyber professionals will tell you, lateral movement is how nefarious actors and hackers get across the estate. You need to catch them from an east-west perspective as well.
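As a rough illustration of the east-west versus north-south distinction, the sketch below labels a flow by checking whether both endpoints fall inside internal address ranges; the RFC 1918 subnets used here are an assumption for the example, not a recommendation.

```python
# Illustrative only: classify flow records as east-west (internal to internal)
# or north-south (crossing the perimeter).
from ipaddress import ip_address, ip_network

INTERNAL_NETS = [ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def flow_direction(src_ip: str, dst_ip: str) -> str:
    """Label a flow so lateral (east-west) movement can be inspected too."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "east-west"
    return "north-south"

if __name__ == "__main__":
    print(flow_direction("10.1.2.3", "10.9.8.7"))      # east-west
    print(flow_direction("10.1.2.3", "203.0.113.5"))   # north-south
```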

 

Second, a lot of the key information happens on that east-west basis. It's where you get the context of use. But trying to take all the traffic from all the applications all the time creates massive bloat. Typically, a customer's security information and event management (SIEM) capability fills with pretty useless information, and so it takes the SOC way too long to delve through the details and find that one needle in multiple haystacks.

 

The ability to instead extract only the relevant information, the metadata, from this traffic flow reduces the data volume by more than 90 percent, meaning you have a lot fewer haystacks to find that one needle in.

 

It also means that you can extract the data from public-cloud networks and reduce the cost of the deployment. You pay less in fees to the cloud providers as they take traffic in and out of the cloud environment to support custom or on-premises tools -- even though the application is sitting inside a public cloud. And that gives tremendous flexibility and tremendous consistency. It means that you can keep your security posture and ensure compliance is maintained across an organization.

 

Gardner: We want deep observability to also be extensible observability. It must observe across an end-to-end continuum of hybrid-cloud services and data flows. How difficult is it to get both deep and pervasive observability?

 

Buckley: It isn’t as difficult as it used to be. The technology now exists. Certainly, at Gigamon we’re providing what we call our deep observability pipeline to customers in addition to the traditional observability they get from many IT vendors. And by deep observability pipeline I mean the ability to look at the application workflows and the traffic that’s going to and from those workflows at the network level and extract the data. Typically, it’s metadata that’s extracted and creates the pipeline of actionable intelligence. That is then sent forward to the relevant tools, to SIEMs and other devices, which can then absorb or extract the information.

 

If you have a network detection and response tool, Gigamon provides high-fidelity traffic that has been optimized, via metadata extraction, to provide the best possible context behind that information. That’s in addition to the other observability infrastructure that you may have. Gigamon also has partnerships with many of the leading observability vendors, whereby we feed directly into their dashboards and systems the high-fidelity pipeline of deep observability information. Customers have the option of doing it multiple ways.

At the end of the day, security is about defense in depth. It's important for organizations to ensure a consistent security posture regardless of where the workload sits. Nobody gets a hall pass if they move a workload from a protected environment, such as a physical data center, to a cloud environment where it's less protected. That doesn't make business sense. We have to make sure we provide the same level of protection as that application workload moves in a flexible way from on-premises to colocation or public-cloud models.

 

Gardner: I suppose all the players in this ecosystem benefit when they have access to the network data and observations. There’s no sense in trying to corner the market, if you will, or building a walled garden around the observations. It should be ecumenical observability data access in order to be the most useful and impactful, right?

 

Observability that’s neutral and scalable

 

Buckley: That's 100 percent correct. You hit the nail on the head. We often describe Gigamon as Switzerland: neutral, no dog in the fight. Our job is to take the most relevant information across all these different platforms and workloads and send it to whatever toolset the customer is looking for -- whether you have one tool or 1,000 tools, it doesn't matter to us.

 

We’ve always been neutral at Gigamon in providing the best contextual information to make the best possible decisions from a network, application performance, and security perspective. And the ability to provide deep observability pipeline information extends that now to all forms of cloud in a way that’s never been done before. We are completely egalitarian. We are completely open. We will send whatever information you want, to whatever tools you want.

 

Gardner: At Gigamon, you said that this deep observability technology has been in the works for 15 years, but this use case, this hybrid-cloud problem set, wasn’t evident 15 years ago. How has the background of Gigamon put you in a position to be able to deliver on these technologies and capabilities?

 

Buckley: If we rewind back a number of years, customers attached a toolset to a SPAN port on a switch to access the traffic. That, of course, becomes very unreliable because switches are not designed to ensure that every single packet of data on the SPAN port is transferred. There's congestion inside the switch, too. When some anomaly happens in the network, oftentimes those packets and that information are lost, and so it's just not fit for purpose.
 

Gigamon pioneered and invented the tap-and-aggregation technology that allows you to take a copy of traffic -- whether it's north-south or east-west -- aggregate it together, and send the traffic to the desired tools. Over the last 12 or so years, we've enhanced that to optimize the traffic, extract the metadata from the network, put application filtering rules in, and decrypt traffic at the center. As a result, we see the information uniquely across the entire infrastructure. You only do decryption once; you don't have to do it multiple times.

 

We have protected and supported the largest, most secure, most complex networks in the world. As these networks evolved to provide cloud, multi-cloud, and hybrid-cloud techniques, we have used the same architectural approach. It’s been tried and tested over the past 15 years. So instead of physical taps inside these physical networks, you have virtual taps or Open vSwitch (OVS) mirroring techniques in the cloud. We then have virtual aggregation versus physical aggregation. We have virtual optimization versus physical optimization.
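For readers curious what the OVS mirroring technique looks like in practice, here is a minimal sketch that shells out to ovs-vsctl to mirror all traffic on a bridge to a tool-facing port; the bridge and port names are hypothetical, this is not a Gigamon configuration, and it assumes a Linux host with Open vSwitch installed and sufficient privileges.

```python
# Rough illustration of an OVS "virtual tap": mirror all traffic on a bridge
# to a tool-facing port, following the mirror syntax documented by Open vSwitch.
import subprocess

def mirror_bridge_to_port(bridge: str = "br0", tool_port: str = "tap-tool") -> None:
    cmd = [
        "ovs-vsctl",
        "--", "--id=@p", "get", "port", tool_port,
        "--", "--id=@m", "create", "mirror",
        "name=mirror0", "select-all=true", "output-port=@p",
        "--", "set", "bridge", bridge, "mirrors=@m",
    ]
    subprocess.run(cmd, check=True)   # raises CalledProcessError if ovs-vsctl fails

if __name__ == "__main__":
    mirror_bridge_to_port()
```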

The technique we use inside the cloud is the same textbook approach that we've provided to CISOs and organizations for many years, and they have relied and depended upon. Now we can scale this within cloud environments.

The technique we use inside the cloud is the same textbook approach that we've provided to CISOs and organizations for many years and that they've relied and depended upon. Now we've been able to transform this technology from an embedded solution inside a very high-performance hardware device into one that provides tremendous scalability -- scale up and scale out -- within cloud environments.

 

As a result, you get low overhead and very light touch. This can be built into the orchestration and automation systems that the customers have. Then it can be scaled up and scaled out, always providing the same level of protection as we used to do with our Gigamon hardware technologies that are famous within the biggest and the fastest data centers on the planet.

 

Gardner: If we have deep observability and it’s pervasive across cloud environments, we extract the metadata, which can be very valuable. We’ve talked about the security use case, but it seems to me that such observability provides intelligence in other areas, too.

 

Particularly nowadays, as the general cost of cloud use is going up, are there ways to extend observability value to help make the best use of your cloud spend? Perhaps to compare and contrast your cloud activities to find the best, minimum viable fit?

 

Make the most of your cloud spend

 

Buckley: Super question, and, of course, the answer is, yes. The concept that one has to send everything to everywhere all the time is not scalable in today’s world. Whether you’re running 400 gigabits per second links to your physical data center or whether you’re running on the fastest cloud platforms in the world, it doesn’t matter.

 

There is a nearly infinite amount of data being sent across these very large networks on a daily basis. So the capability to optimize the data flow and eliminate unnecessary data -- whether it's duplicates, or full payloads that are no longer required because the metadata is sufficient -- and to extract that information without losing fidelity while reducing the quantity of data by over 90 percent saves companies and organizations tremendously.
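A toy sketch of two of those optimizations -- dropping duplicate packets and forwarding a fixed-size metadata record instead of the full payload -- shows how quickly the volume falls; the 64-byte record size and the sample packets are invented for the example.

```python
# Illustrative only: de-duplicate packets and forward metadata instead of payloads.
import hashlib
from typing import List, Tuple

METADATA_BYTES = 64                      # assumed size of one flow-metadata record

def optimize(packets: List[bytes]) -> Tuple[int, int]:
    """Return (bytes_in, bytes_out) after dedup plus metadata-only forwarding."""
    seen = set()
    bytes_in = sum(len(p) for p in packets)
    bytes_out = 0
    for pkt in packets:
        digest = hashlib.sha256(pkt).hexdigest()
        if digest in seen:               # a duplicate copy of a packet already forwarded
            continue
        seen.add(digest)
        bytes_out += METADATA_BYTES      # forward the metadata record, not the payload
    return bytes_in, bytes_out

if __name__ == "__main__":
    sample = [b"A" * 1500] * 3 + [b"B" * 1500]   # three duplicate frames plus one unique
    before, after = optimize(sample)
    print(f"{before} bytes in -> {after} bytes out "
          f"({100 * (1 - after / before):.0f}% reduction)")
```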

 

Take, for example, the speed and the capacity of their firewalls, of their other security devices across the network, and their application performance tools. If you're seeing a tenth of the traffic across the infrastructure, you need a tenth of the performance of the tools. This is beneficial to the customer because you end up saving money, and in a potentially recessionary environment, that is an even more important message.

 

But, in addition to that, because we also see all the east-west traffic, we can send more information to the tool while it actually needs to process less. So instead of just seeing north-south traffic, we can add that east-west dimension as well. We can also ensure that the traffic is decrypted so that all the bad stuff inside the encrypted stream is highlighted. In a very simple way, the blind spots are where the bad guys and gals hang out. We illuminate those blind spots so we know where they hang out. And we do that in a way that sends less traffic to the tools.

 

Gardner: What are some of the top cloud use cases for deep observability in practice? What are the benefits that organizations are getting in real terms? How does this help a CISO sleep better at night?

 

Migrate to cloud with good security

 

Buckley: Typically, a customer comes to us, and they have used Gigamon for a decade or more. We are the visibility and analytics provider for their infrastructure. We have helped protect their infrastructure for a long time.

 

And now they have a cloud-migration project and so a requirement to move workloads. In many cases, financial institutions want to move workloads to a colocation provider or private-cloud environment. They often leverage a solution like VMware's NSX to move an application or workload to a public-cloud provider, such as AWS, Microsoft Azure, or Google Cloud Platform, whatever. And they're saying, "How do I do this in a way that ensures that I can get compliance approval and maintain my security posture?"

 

We'll work with them usually on two types of migrations. One is a lift-and-shift, where you take the application as it is, which is the preponderance of applications within larger organizations. You pick it up, bring it across, and drop it inside the environment -- the container, private cloud, public cloud, or whatever. Then we reattach all the network, application performance, and security tooling in a way that is similar to what they did before. You don't lose anything, and you maintain everything that you had from a security and application performance perspective.

 


The second type of application migration has the customer saying, "Hey, we're modernizing this application to make sure it works more efficiently in a cloud environment, so it can scale up and scale out, and be in line with what the environment needs."
 

That migration approach might require different tooling, but we use the very same technique. We ensure that we can capture all the traffic going to and from that application. We can process it and optimize it, as I just described, and then we work with the customer to determine the tooling for compliance and what the CISO needs to ensure the security posture of those business applications -- and then we put that all in place.

 

Also, now we’re seeing as many workloads move from the public clouds to a hybrid-cloud model as we’ve seen going the other way. Oftentimes customers say, “I tried an application in a public-cloud environment, but it doesn’t give me the performance and the cost savings that I expected -- and so I want to move it back.”

 

We enable that type of customer to have the flexibility to take the application and put it back where it was -- or put it somewhere else. Maybe they want to put it inside a private-cloud environment, or maybe they’re moving from a private-cloud environment to 100 percent public.

 

Whatever the customer wants to do, we will work with them to understand where it was, where it’s going, what the potential needs are. We will ensure they maintain the compliance and the security posture of that application, as well as the performance because that remains a very important component, too.

 

Gardner: We're not just talking about deep observability for security and performance benefits; you're also bringing up an important workload-portability capability. And any way to help move workloads among hybrid-cloud deployment options while maintaining security posture presents a huge digital business and economic benefit. Have people been able to share with you some of the cost benefits that I suspect are there?

 

Money-saving choices, app by app

 

Buckley: When we run the analysis with customers, we see a return on investment (ROI) in less than six months in terms of the cost associated with the Gigamon deployment and the savings that they’ll get on a go-forward basis. And that’s just direct costs. That doesn’t include operational costs and efficiencies that come with modernizing applications or moving them to a cloud framework to begin with. The multiple benefits are quite significant.

 

Incidentally, the latest research shows that the level of deployment to the public clouds is not as great as had been forecast. The forecast was that we should now be close to having 60 to 70 percent of applications moved to the public cloud. But we’re seeing a resurgence of the colocation model as people leverage container-based technologies and private-cloud technologies. As a result, we’re seeing the public-cloud providers themselves offering on-premises and/or colocation capabilities to leverage the flexibility and the ease-of-use of those data center-hosted application stacks.

 

And so, the visibility gained from deep observability to choose whatever is best on a per application basis is becoming very, very important. Regardless of what the enterprise does in terms of deployment options, they will ultimately be able to save money.

 

Gardner: Your heritage places you in the wheelhouse of a network operations executive or leader. But what you just described is something a bit higher, if you will, in the organization, at the architecture decision-making level. That means those making major decisions about deployment strategies. Do you need to make Gigamon’s value then known to a different persona, perhaps at the architect or Chief Technology Officer (CTO) level?

 

Buckley: I would say network operations executives and organizations have always been core to the success of our business. They saw uniquely the advantage of having a single platform or a fabric that gave flexibility of deployment, flexibility of scale, load balancing, and all the great advantages that our technology provides to customers.

From a deep observability perspective, most organizations have handed the responsibility of securing their hybrid cloud environment to the CISO. So now we have the opportunity to work with the app security and SecOps people, as well as the network security people.

For over half a decade now we've been working very closely with security groups as well -- from CISOs to security architects, security operations groups, and so on -- to understand their problems. In many ways, the value of our fabric has been tremendously well-received within security operations over that period.

 

From a persona perspective, whether you're a network operations (NetOps) leader or a security operations (SecOps) leader, obviously we work very closely with both. From a deep observability perspective, most organizations have handed the responsibility of securing their hybrid-cloud environment to their CISO. Now, oftentimes within the CISO's organization, which is clearly becoming larger, there are new sub-personas within that space. And so we have the opportunity to work with application security people as well as the traditional network security or security operations folks, in addition to those we work with today.

 

The good news though is that they’re all super-connected. They have a lot of alignment between them.

 

Gardner: They should be.

 

Buckley: Yes, they should be. And so, we’re well-known. Gigamon is well-known as being inside these environments. It’s been the core platform to ensure that we provide that security footprint.

 

Certainly, we are spending a lot more time talking to business information security officers (BISOs), too, as well as the application security folks to help them understand how this technology can be leveraged within a hybrid-cloud environment.

 

Gardner: How about vertical use cases? Is there low-hanging fruit? You mentioned finance. I imagine the regulatory issues there are pressing. But where does the rubber hit the road first and best for deep observability needs?

 

Zero trust everywhere

 

Buckley: Financial services obviously is a hotspot for organizations trying to secure their infrastructure, for obvious reasons. The other area that’s very important to us is our public sector business, on a global basis. The US federal government particularly has taken a very progressive view on security, with the recent executive order from President Biden for zero trust and the implementation of zero trust across federal organizations and contractors. We’re very close to that issue as well.

 

Security in the hybrid cloud uses many of the techniques that we leverage within zero trust. And within zero trust there are typically seven pillars, one of which is visibility and analytics. It's considered foundational to zero-trust security, in that if you can't see stuff, you can't secure it. All the other pillars depend on the visibility and analytics pillar to operate.

 

Zero trust is not just sought by governments; it is, of course, being adopted and used by organizations around the world. Protecting critical infrastructure, for example, is a really big deal. So sometimes we get involved in conversations about operational technology (OT) and protecting OT devices -- whether in healthcare, nuclear facilities, or other hardened and critical facilities -- and that becomes very important for customers as well.

 


Within the hyperscalers and the software as a service (SaaS) vendors, many of the big SaaS vendors use Gigamon to provide that layer of protection to their applications because the customer often can’t secure it intrinsically themselves. So, you’ll find Gigamon’s approach or connection across many different verticals on a worldwide basis.
 

As we increasingly move to 5G, a core element of 5G is the capability to extract information from these ultrahigh speed networks and to provide correlation between the user plane and the control plane to provide the right traffic to the right tools at the right time.

 

In many of those networks, you see Gigamon is at the center of the ability to deploy the infrastructure as well. So, we’re present in a lot of these different verticals and ecosystems because it’s the same problem, but it’s just used in a slightly different way. And when you’re a fabric, which Gigamon is, you have the benefit of being able to deploy, whether it’s a software footprint or software/hardware footprint or any combination across all these different environments.

 

Gardner: We've used the word ecosystem quite a bit, and that implies partners working together with other companies. Is there a channel and/or partnership benefit here? How do Gigamon and deep observability fit into a whole larger than the sum of the parts?

 

Buckley: As you would imagine, we work with some of the best and leading system integrators and value-added resellers and other partners on a worldwide basis. They have the ability to take all the piece-parts and bring them together. When Gigamon is deployed successfully, we’re a fabric. We provide this pipeline of actionable intelligence to customers and to tools. And then there are other tools to take advantage of that.

We're the heartbeat that makes the networks and applications run. We bring the whole value chain together. We can ensure that one plus one equals three -- or five or six.

The architectural design of the network is somewhat changed because we’re at the center. We’re the heartbeat that makes the networks and applications run. We work with a lot of the vendors to ensure that they bring the whole value chain together. They have the experience dealing with all of the security, application, performance, and networking tools so that they can interconnect it all in the appropriate way to optimize and protect the network traffic.

 

Partnerships within the channel and vendor community are super important. We work with many ecosystem vendors through joint marketing, jointly entering global markets with better capabilities, and joint events. In doing so, we can ensure that one plus one equals three -- or five or six. We do that on a regular basis.

 

Gardner: Shane, in conversations I have in the field, we often talk about the most important imperatives facing organizations. Security, best use of the cloud, understanding and controlling your data, and being better able to understand your customers to provide a better experience are all among the top concerns.

 

And one of the salient common elements among all of these is having better intelligence about what’s going on, both in the business operations and the IT systems, and then how to constantly improve them. It seems to me that deep observability is an essential core constituent in supporting an intelligence drive within any organization.

 

Do you see machine learning (ML) and other analytics capabilities evolving from the benefits of deep observability and the metadata that you’re providing?

 

Eliminate blind spots, increase intelligence

 

Buckley: I agree a billion percent with what you just said. Having the right information at the right time is incredibly important for professionals, whether in security or elsewhere, to make the appropriate decisions to protect the organization. How many times do people say, "If only I knew; if it was only possible for me to see. I had no idea that they were sitting inside this application or this part of my network when I was compromised."

 

The ability to eliminate blind spots so that the security team has the best possible opportunity to protect the infrastructure is of paramount importance. Make no mistake, this is a cat and mouse game. In 2021, 68 percent of US organizations were hacked, and a ransom was demanded. Some 50 percent of those had to pay the ransom. And in the cat and mouse game, the mouse is winning, not the cat.

 

Our job is to make sure that we slow the mouse down and give the cat an opportunity to catch it faster and protect the infrastructure. But it will continue to be that cat and mouse game because as soon as we -- and I mean the whole ecosystem, not just Gigamon -- put our systems together better, the bad folks figure out ways to compromise it. That’s just the way it is.

 

But by streamlining the information, by optimizing the information, and by ensuring that we provide absolutely the right information -- actionable intelligence -- to the right tools at the right time, we minimize the chance that the mouse gets away.

 

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on how new advances in deep observability provide powerful access and knowledge about multi-cloud and mixed-network behaviors. And we’ve learned how a neutral deployment approach to observability that spans more infrastructure and services best protects and accelerates digital business value across nearly any cloud configuration.

 

A big thank you to our guest, Shane Buckley, President and CEO of Gigamon. Thank you, sir.

 

Buckley: Thank you very much, Dana.

 

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect cloud complexity risk reduction discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this Gigamon-sponsored interview.

 

Thanks again for listening. Please pass this along to your IT operations and security communities, and do come back next time.

 

Listen to the podcast. Find it on iTunes. Download the transcript. View the video. Sponsor: Gigamon.

 

Transcript of a discussion on how new advances in deep observability provide powerful access and knowledge about multi-cloud and mixed-network behaviors. Copyright Interarbor Solutions, LLC, 2005-2022. All rights reserved.

 


 

Wednesday, June 22, 2022

HPE Accelerates its Sustainability Goals While Improving the Impact of IT on the Environment and Society

Transcript of a discussion on how Hewlett Packard Enterprise has newly accelerated its many programs and initiatives to reduce its carbon emissions, conserve energy, and reduce waste. 

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on the impact of information technology (IT) on the environment and society.

As businesses worldwide seek to maximize their value to their customers and communities, the total value equation has expanded to now include the impact on sustainability for the environment.

The ways that companies, along with their partners, suppliers, and employees best manage and govern their resources and assets speaks volumes about their place among peers. And it allows them to take a leadership position as stewards and protectors of the future. The sooner the world’s industries develop a commitment to reach a net-zero carbon emissions posture, for example, the better for everyone in gaining environmental sustainability.

Stay with us now as we examine how Hewlett Packard Enterprise (HPE) has newly accelerated its many programs and initiatives to reduce its carbon emissions, conserve energy, and reduce waste -- including far earlier net-zero dates and more impactful emission-reduction milestones. We'll now learn how HPE's newest Living Progress report provides a blueprint for other organizations in and outside of the HPE orbit to also hasten and improve their sustainability efforts.


Here to share the latest on HPE’s plans and goals for broad and lasting sustainability is John Frey, Chief Technologist, Sustainable Transformation at HPE. Welcome to BriefingsDirect, John.

John Frey: Thank you. It’s great to be here.

Gardner: How does HPE define ESG and how long has it been working toward improving its impacts across these goals?

Frey: To make sure everyone knows what we mean when we say ESG, that's an acronym for environmental, social, and corporate governance. This is language that was first used by investors in the financial community, and now it's used much more broadly to emphasize that, when we discuss sustainability, we mean more than just the environmental aspects. We mean the social aspects as well.

From HPE’s perspective, we’ve named our ESG programs Living Progress, and that’s really our business strategy for creating sustainable and equitable technology solutions for a data-first world. These efforts are tied to our corporate strategy and our purpose, which is to advance the way people live and work.

Our programs go back many, many years. In fact, back in 1957 when our program started, the program was called Corporate Citizenship, and it was based around how HPE would grow beyond the borders of the United States. We have a long history of leadership as Hewlett Packard. When Hewlett Packard and Hewlett Packard Enterprise became two separate companies, we adapted the best practices at that point in time and then built our Living Progress programs around that.

Our programs today have three main elements -- driving a low carbon economy, investing in people, and operating with integrity. We have goals across that entire spectrum of sustainability and throughout the lifecycle of our products.

Gardner: It’s very impressive that you have been doing this for going on 65 years. How has the world changed more recently that has prompted you to accelerate, to even dig in deeper on your commitments here? 

Frey: Climate change is one of the greatest threats to our common future. We recognize that we have limited resources and many impacts that pose complex societal and environmental challenges. At HPE, we believe that addressing climate change is not only a moral imperative; it is also a business opportunity as we innovate technology to help our customers thrive in this carbon-constrained world.

Years ago, we set our goal to be net-zero by 2050, and it was backed up by science-based targets throughout our entire value chain. When we set this goal, it was clear leadership. However, the Intergovernmental Panel on Climate Change (IPCC)’s most recent reports indicate that going to net-zero by 2050 is not fast enough. We have to accelerate our goals.

HPE has committed to becoming a net-zero enterprise across our entire value chain by 2040. Our commitment is backed by our roadmap to net-zero, which consists of science-based targets.

Therefore, HPE has committed to becoming a net-zero enterprise across our entire value chain by 2040. Our commitment is backed by our roadmap to net-zero, which consists of a new suite of science-based targets that are consistent with that one-and-a-half-degree pathway and approved by the Science Based Target Initiative.

We set those interim targets and longer-term targets. Our interim targets for 2030 include reducing our scope-one and scope-two emissions by 70 percent and reducing our absolute scope-three emissions by 42 percent, both off of a 2020 baseline. And that scope-three target includes the use of our products by our customers, upstream transportation and distribution, and scopes one and two supplier emissions. Our longer-term target for 2040 is to reduce the absolute scopes one, two and three emissions by 90 percent off of that 2020 baseline as well.

Getting to these targets will require a fundamental transformation in everything we do. Our leaders need to be accountable for driving this and we’ve tied key climate metrics to executive compensation. We will need to ‘walk the talk’ and procure 100 percent renewable energy for our own operations while at the same time helping our customers and suppliers bring new renewables to the grid where they operate.

And most importantly, we’ll enable our customers to meet their own net-zero ambitions. This is important, because about two-thirds of our climate impact on the globe occurs when our customers use our technology solutions. So HPE is putting our innovation engine into action to develop more sustainable IT solutions while working closely with our customers to help optimize their IT infrastructure so that they can meet their own net-zero goals.

Gardner: That’s very interesting when you say nearly two-thirds of the climate impacts happen in your customer base from the use of your solutions. Can you expand on that? What does that mean?

Sustainability demands change

Frey: When we think about our footprint across the company, a small single-digit percentage is because of our own operations, our buildings and our employees and employee travel and those sorts of things. Around a third of it then is our supply chain – when we bring products to the market and when we take those products back from customers at their end of life. But the bulk, nearly two-thirds of our climate impact on the globe is when our customers use our technology products.


For us to help our customers get to net-zero, and for HPE to lower our own carbon emissions across our entire portfolio, means that we really must help our customers use their technology more efficiently. That gets to things such as helping them right-size the amount of technology they have and increasing the performance they get from the technology for each watt. We have to help them continue to optimize their technology in real time so that it uses the lowest amount of energy and does the most work at the same time.

Gardner: It’s no exaggeration to say that it’s technology that’s going to come to our aid, but it’s technology that we need to, in a sense, solve.


Frey: Absolutely. In fact, we think of technology as a force multiplier for solving climate challenges. Technology really enables a lot of these solutions, and it also facilitates a lot of clean energy innovation as well.

Gardner: What are some of the major hurdles that need to be overcome to achieve this? It’s quite a bit to bite off and chew.

Frey: Yes, absolutely. Experts estimate that about half of the carbon reductions that the world needs to achieve net-zero emissions in the coming decade will come from technologies that don’t even exist yet. So that’s challenging. And in fact, if we look at just the companies that have made net-zero commitments already, we don’t have enough capability in terms of renewable energy and carbon offsets and things to even cover those commitments.

Technology can be an enabler here. HPE is spending a tremendous amount of effort innovating with solutions such as HPC technologies that are used by climate scientists and clean energy researchers.

So that is a huge challenge, but technology can be such an enabler there. HPE is spending a tremendous amount of effort innovating with solutions such as our high-performance computing technologies that are used by climate scientists and by clean-energy researchers who are trying to find better ways to bring those solutions to market. For our customers who use our professional services and our technology services instead of buying assets, we help them right-size the technology they need, manage their technology from the edge to the cloud, and optimize all the way through that.

There's lots of opportunity. I prefer to think in terms of the positive rather than the hurdles, which I see as business opportunities. What I can say from my experience working with customers around the globe is that many of them are fixated on trying to help solve these challenges, many see great business opportunities in helping to fix them, and they're all turning to technology as the enabler of that innovation.

Gardner: Yes. I’ve heard it said elsewhere. You can do quite well as a business by doing good for sustainability in the economy.

Frey: Absolutely. We fully agree with that.

Gardner: It seems like HPE has taken quite a lead here, but it involves more than just you the company. It affects your supply chains, as you’ve mentioned, your customers, your partners. So how do you characterize HPE’s role in that larger community?  Are you an example to follow, maybe a facilitator, an educator accelerating growth of potential, or all of the above?

Our example: Fail fast, then innovate again

Frey: We really play all of those roles. In some cases, we are an example that others point to and say, “Hey, we’re not going down this path alone. HPE has gone down this path.” In many cases, we’re an educator and will share with customers this long sustainability journey that we’ve been on, the lessons we learned. Often, it’s better to learn from what someone who has been down the path said they would never do again, or what they learned from their journey. We so often focus as a sustainability community on the things that went well. Yet, there’s a lot of lessons learned, and we really try from an HPE perspective to take a ‘fail fast and then innovate again’ approach. We’re constantly learning, and that education has great value.

In many cases, there's a need for a facilitator. We know that these challenges exist across many industries, but there isn't a central body pulling together multiple stakeholders and multiple customers to help solve that challenge. A couple of examples of that are organizations such as the Responsible Business Alliance (RBA). That's an organization that HPE helped found years ago. We realized when we were auditing factories in our supply chain that these factories were building products for other technology companies as well. So, the factories were following our expectations in the lines building HPE products but may not have been protecting workers adequately in some of the other lines. When we took a step back and asked, "Well, why is that?" we were told, "Well, that other vendor doesn't make us do these things," and we said, "Well, wait a minute. That's actually not the right answer."


If we're really trying to make sure that workers in our supply chain are being treated fairly, paid a living wage, have their health and safety protected, and are protecting the environment, that's a non-competitive issue. So, we took a step back and formed what was then called the Electronic Industry Citizenship Coalition (EICC) and invited companies in our industry to come together to have a common set of expectations for our suppliers and then put assurance programs in place. Well, that was so successful that other industries came to us and said, "Could we adopt that same practice for our industry?" Today, the organization that does all of that is called the Responsible Business Alliance. And so, it's having a huge impact on supply chains around the world, but it all started because there was that need for a facilitator to bring companies together.

Another great example of that is the Clean Energy Buyers Alliance (CEBA). As more and more companies started making renewable energy commitments, we realized that to get the scale we needed in the pricing for renewable energy, we could do so much together as an alliance, have common procurement expectations and get better pricing.


One of the ways I talk about it is catalytic collaboration. How do we bring voices to the table that may not have been heard before? How do we think from an innovation and accelerator perspective much more broadly than just, for example, publicly traded companies coming together? How do we bring in the voices of stakeholders, customers, and governments? In all of these ways, HPE plays a variety of roles trying to accelerate the world's progress in solving these big challenges.

Gardner: John, it seems no matter where you live, you're getting a steady stream of reminders about why this is important. It could be wildfires, hurricanes, permafrost melting, or rising sea levels. But, on the other hand, meeting or reducing many of these rates of increase has remained a challenge. So, what are the risks for businesses if they don't make sustainability a priority?

Ignore environmental impacts at your own risk

Frey: Well, there's a variety of risks, but let's start with the business risk: the missed market opportunities. Business costs go up, and companies can lose customers. One of the things we know about sustainability is that in many cases it's about preventing waste, and waste has a cost associated with it. At the same time, we find customers increasingly saying that they want to do business with companies who have strong reputations, who have strong social and environmental programs, and companies that have a purpose and assist in making the world a better place.

In all those ways from a business perspective, customers are watching what companies do, and they’re making purchase decisions based on the attributes of the companies that they want to do business with. Frankly, if you’re not being a sustainability leader or at least keeping up with your industry, you’re going to start missing many of those market opportunities.

Customers are watching what companies do, and they're making purchase decisions based on the attributes of the companies that they want to do business with. If you're not a sustainability leader, you're going to miss market opportunities.

Another one could be, and we hear this from many of our customers, in this increasingly difficult time that we live in, finding employees is very challenging. Employees want to work for a company where they can see how what they’re doing contributes to the company’s purpose. And so that’s another opportunity that they miss.

I’ll just give you a sense. We had International Data Corporation (IDC) do a survey for us last year. We asked technology executives across several countries why they were investing in and participating in sustainable IT and sustainability programs in their technology operations, and what they told us was really interesting. The digital leaders, those companies that are the innovators and the fast movers said that they were investing in sustainability programs to attract and retain institutional investors.

Now, the companies in the middle, the digital mainstream, said they were doing it to attract and retain customers. And the digital followers, those companies that move a little slower and are not quite as far along in their own digital transformation, said they were doing it to attract and retain employees. So, there's a variety of business reasons to do this. Increasingly, there are regulatory reasons as well, as governmental agencies start asking companies to talk about things that are material from a financial perspective -- such as we're seeing here in the U.S. with the proposed Securities and Exchange Commission (SEC) regulations -- or other places around the world where there are regulatory reporting reasons to make sure that you have strong sustainability programs, because you have to disclose data to a regulatory agency.

Gardner: Do you have any examples or use cases for how sustainability leadership moves beyond reputation to be a driver of business growth, which, as you said is one of the chief reasons to embrace sustainability fully?

Frey: There are a variety of opportunities. We’ve seen it ourselves. For example, in the last year, we’ve had over 1,400 customer inquiries asking HPE about our own sustainability and social and environmental programs whether it relates to our products or whether it relates to our business. That’s just one example of the way customers are paying attention and they’re asking increasingly in-depth questions. It used to be questions such as, “Do you have your own sustainability program, yes or no?” Then it moved into “Are you using some of the various standards that show us that you’re managing this as a process and as a system across your business?” Now, they’re asking us questions all the way down to “Tell us the carbon footprint of this product or solution that HPE is bringing to the market.”

Now, what we know is when we have good answers to that and we share expertise with customers, we tend to do much better from a business perspective as well, and customers want to do business with us. We certainly see on our own that there are lots of opportunities for additional value by having the strong programs.

Gardner: All right. Are there any even more specific examples of how HPE has helped customers to improve their businesses while also accelerating sustainability improvements? Do we have some concrete examples of how this works in practice?

Win-win: Great business and ESG results

Frey: Yes, I'll give you just a few. Wibmo is India's leading digital payment provider, and they use a variety of HPE technologies, but they wanted to consider moving to a much more flexible technology we call HPE Synergy, which is a composable infrastructure. What that really means is that you have compute, storage, and networking in a common chassis that shares power supplies and gives you great scalability. It gives you a pool of resources that the customer can tap from, and what Wibmo really wanted to do was move from a blade infrastructure to that Synergy infrastructure to increase their capability to respond very quickly to changing customer requirements. Doing that for them, while giving them the same capability, reduced their IT capital expenditures by 80 percent, cut the creation and delivery of new accounts from weeks to hours, and lowered their carbon footprint by 50 percent. So, we observed great business outcomes and great environmental outcomes coming from the work with that provider.

Now, another one was Nokia Software, and they're an HPE GreenLake customer, which is our as-a-service offering. Nokia has always been progressive around their environmental objectives, and they wanted to strive for a carbon-negative data-center operation. One of the things they wanted to do to achieve that was to use a renewable energy source. They wanted to take water from a nearby Finnish lake to cool the data center, to move to liquid cooling, and to use renewable energy sources to power that data center. HPE was able to help them do that. One of the great things about HPE GreenLake is that, because it's consumption-based, we help customers tailor the infrastructure to their needs without additional equipment that is sitting there and not doing any work. We enable them to reduce their capital expenses and reduce their environmental footprint at the same time.

Gardner: Let's talk next about one or two examples of how technology accelerates environmental change, not just from the IT perspective, but perhaps from other views that are more data-driven and offer the capability to exercise more efficiency in more ways when you've got a data-driven organization from edge to cloud.


Frey: I'll give you two quick examples. Purdue University is one, and we're really partnering with Purdue on sustainable agriculture. One of the challenges we have as a global population is that we're swelling to about nine billion people by 2050. And so the world is going to have to double its agricultural output or face starvation challenges around the world. Purdue's College of Agriculture partnered with us to do a variety of research around sustainable agriculture, increasing agricultural output and using edge technologies to allow farmers to tailor things such as irrigation and fertilization only to the places in their fields where they are absolutely needed. The ultimate goal of this, of course, is to drive more effective ways to grow nutritious, healthy, and abundant food for this growing planet. So that's one great example, and that research continues.

Another great example is Carnegie Clean Energy, and they’re an Australian wave, solar, and battery energy company. But they’re really focused on making wave power a reality. They’ve developed a wave energy technology called CETO that uses the wave energy off Western Australia’s Garden Island to power the country’s largest naval base.

Now, you may not realize that one of the big advantages of wave power is predictability. The sun stops shining at times and the wind stops blowing, but the ocean’s waves don’t stop rolling in. Wave forecasts can look out about a week into the future to estimate what the wave energy is going to be, and they only have about a 20 percent margin of error, which allows them to predict how much power CETO is going to generate. It even allows them to tune the effectiveness of CETO based on how big or small they predict those waves are going to be. They can generate precise knowledge about the shape and the timing of upcoming waves so they can make sure they extract the maximum amount of energy from each wave that comes in.
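As a rough sketch of why that predictability matters, a roughly 20 percent margin of error turns a week-ahead wave forecast into a usable planning band. The forecast figures below are invented for illustration only and do not come from Carnegie or CETO.

    # Illustrative only: turn a wave-energy forecast with a roughly 20 percent
    # margin of error into a planning band for expected generation.
    def generation_band(forecast_mwh, margin=0.20):
        """Return (low, expected, high) generation estimates in MWh."""
        return (forecast_mwh * (1 - margin), forecast_mwh, forecast_mwh * (1 + margin))

    # Hypothetical week-ahead forecast values, one per day, in MWh.
    week_ahead = [12.0, 14.5, 9.8, 11.2, 15.0, 13.3, 10.6]
    for day, mwh in enumerate(week_ahead, start=1):
        low, expected, high = generation_band(mwh)
        print(f"Day {day}: expect {expected:.1f} MWh (plan for {low:.1f}-{high:.1f} MWh)")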

Those are two examples of the way we’re using technology for social and environmental good.

Gardner: John, you mentioned the long period that HPE has been involved with sustainability, improvement, and the impact on its communities, and you’ve just said, “Okay, we were on track, but we’re going to accelerate that. We’re going to move it forward.” How can other companies that want to accelerate what they’re doing get started? What’s a good way to think about a methodical, comprehensive approach to getting faster, better, and more impactful when it comes to sustainability?

Partner up for possibility

Frey: The first thing we suggest is to do a materiality assessment. That means talking to your customers, your stakeholders, and your employees about the things that are most relevant to your business and the things you have the greatest ability to impact. So, figure out what’s most material and publish plans to solve those challenges. In fact, HPE gives an example every year in our Living Progress Report. We publish our own materiality assessment and then show how the initiatives we’re taking are driven straight from that materiality assessment.

Another thing that we would recommend is to learn from leaders. Don’t reinvent the wheel. Companies like HPE freely share this knowledge with our customers, stakeholders, and others in the broader community because we feel that not everybody needs to go back and develop their programs from scratch. Learn from those that have been doing it, learn those lessons and then use that to accelerate your progress.


And finally, partner for success. You don’t have to go it alone. Leverage the expertise throughout your value chain. In HPE’s case, for example, we freely share our sustainable IT strategy, our white papers, and the workbook that helps customers implement a sustainable IT strategy, and we put them out on the Internet so that anybody can access and tap into those resources. So, look up and down your value chain, see where others already have that expertise, and learn from them.

Gardner: Before we close out, let’s look at how current and new technologies can help solve these problems. What are some of the future opportunities? Even if we don’t know the how, perhaps we have a sense of the what. What is it that we can be doing in the future to bring these carbon net-zero realities right into our backyards?

Frey: We’ve talked a little bit previously about the fact that we don’t have all the low-carbon solutions we need. One of the things HPE did to help with that effort was to co-launch the Low Carbon Patent Pledge together with partners Meta, formerly Facebook, JPMorgan Chase, and Microsoft.

By putting those patents out there and making them freely available, we hope to accelerate innovation. Perhaps they will be used for things we could never have imagined, but some innovator will see a connection and be able to accelerate new low-carbon solutions. I think there are other avenues as well. We’re seeing technology shift from general-purpose compute to workload-specific hardware and software solutions. We’re seeing advances in liquid cooling, which are necessary as densities go up. And I think there’s a huge opportunity around software efficiency as well; it’s a great untapped opportunity. Some studies suggest that using a more efficient programming language, such as Rust, could reduce power consumption by the technology industry by up to 50 percent.

Learn More About

HPE's Living Progress Initiatives.

I think there are opportunities to have common platforms from the edge to the cloud so that we can all see across our technology operations and look at things such as utilization rates, power consumption, and carbon emissions in a common way across the value chain. By being transparent, we highlight opportunities for improvement.
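As a minimal sketch of the kind of roll-up such a common view might provide, assuming only the standard energy-times-grid-intensity calculation, the snippet below converts measured power draw per site into comparable emissions estimates. The site names, power figures, and carbon-intensity factors are all illustrative, not real measurements.

    # Illustrative sketch of an edge-to-cloud roll-up: estimated operational
    # emissions per site from measured power draw and a grid carbon-intensity
    # factor (all values below are made up for illustration).
    sites = [
        # (site name, average power draw in kW, hours in period, grid kg CO2e per kWh)
        ("edge-factory-01",  40.0, 24 * 30, 0.45),
        ("colo-eu-west",    220.0, 24 * 30, 0.18),
        ("public-cloud-a",  310.0, 24 * 30, 0.30),
    ]

    total = 0.0
    for name, kw, hours, intensity in sites:
        kwh = kw * hours            # energy consumed over the period
        kg_co2e = kwh * intensity   # location-based emissions estimate
        total += kg_co2e
        print(f"{name}: {kwh:,.0f} kWh -> {kg_co2e/1000:,.1f} t CO2e")

    print(f"Total: {total/1000:,.1f} t CO2e for the month")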

And finally, I think there’s a lot of opportunity that artificial intelligence (AI) and machine learning (ML) bring to optimization.

But we have to do that while paying attention to ethical AI principles, because these types of technologies can be misused if we’re not paying attention to the ethical implications. I feel we have a strong need not only to use the ethical AI principles that are in place today, but to continue to advance that thinking as more and more AI and ML solutions are brought to market.


Gardner: It’s been a fascinating discussion, but I’m afraid we’ll have to leave it there. We’ve been exploring how companies, along with their partners, suppliers, and employees, can best manage and govern their resources and assets for sustainability. And we’ve learned how HPE has newly accelerated its many programs and initiatives to reduce its carbon emissions, conserve energy, and reduce waste far ahead of its original net-zero target dates. So please join me now in thanking our guest. We’ve been here with John Frey, Chief Technologist for Sustainable Transformation at Hewlett Packard Enterprise. Thanks so much, John.

Frey: My pleasure. It was a delight to be with you today.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect discussion on the impact of information technology on the environment and society. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-supported discussions. Thanks again for listening. Please pass this along to your community and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on how Hewlett Packard Enterprise has newly accelerated its many programs and initiatives to reduce its carbon emissions, conserve energy, and reduce waste. Copyright Interarbor Solutions, LLC, 2005-2022. All rights reserved.
