
Friday, October 01, 2021

Traceable AI Platform Builds Usage Knowledge that Detects and Thwarts API Vulnerabilities

Transcript of a discussion on a new platform designed from the ground up to define, manage, secure, and optimize the API underpinnings for so much of what drives today’s digital business.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable AI.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

The rapidly expanding use of application programming interfaces (APIs) to accelerate application development and advanced business services has created a vast constellation of interrelated services -- now often called the API Economy.

Yet the speed and complexity of this API adoption spree has largely outrun the capability of existing tools and methods to keep tabs on the services topology -- let alone keep these services secure and resilient.

Stay with us as we explore a new platform designed from the ground up specifically to define, manage, secure, and optimize the API underpinnings for so much of what drives today’s digital business.

To learn more about how Traceable AI aims to make APIs reach their enormous potential safely and securely, please welcome Sanjay Nagaraj, Chief Technology Officer (CTO) and Co-Founder at Traceable AI. Welcome, Sanjay.

Sanjay Nagaraj: Thanks, Dana, for having me.

Gardner: Why is addressing API security different from the vulnerabilities of traditional applications and networks? Why do we need a different way to head off API vulnerabilities?

Nagaraj: If you compare this to the analogy of protecting a house, previously there was a single house with a single door. You only had to protect that door to block someone from coming into the house. It was a lot easier.


Now, you have to multiply that because there are many rooms in the house, each with an open window. That means an attacker can come in through any of these windows, rather than only through a single door to the house.

To extend the analogy across the API economy, most businesses today are API-driven businesses. They expose APIs. They also use third-party libraries that connect to even more APIs. All of these APIs are powering the business but are also interacting with both internal and third-party APIs.

APIs and services are everywhere. The microservices are developed to power an entire application, which is then powering a business. That’s why it is getting so complex compared to what used to be a typical network security or a basic application security solution. Before, you would take care of the perimeter for a particular application and secure the business. Now, that extends to all these services and APIs. 

And when you look at network security, that operated at a different layer. It used to be more static. You therefore had a good understanding of how the network was set up and where the different application components were deployed.

Nowadays, with rapidly changing services and APIs coming online all the time, there is no single perimeter. In this complex world, where it is all APIs across the board, you must take more aspects into consideration to understand the security risks for your APIs and, in turn, what your business risks are. The business itself is riskier in today's security landscape.

Because it’s so very complex, the older security solutions can’t keep up. We at Traceable AI chose to approach security by looking at the data that comes in as part of the calls hitting the URLs. We take more context into consideration to detect whether something is an attack, or some anomaly that is not necessarily malicious but may be a reconnaissance-type attack.

All of these issues mean we need more sophisticated solutions that, frankly, the industry hasn’t caught up to, even as development, security, and operations (DevSecOps) practices have moved a lot faster.

Gardner: And, of course, these are business-critical services. We’re talking about mission-critical data moving among and between these APIs, in and out of organizations and across their perimeters. With such critical data at hand, the reputation of your business is at stake because you could end up in a headline tomorrow.

Data is everywhere, exposed

Nagaraj: Exactly. At the end of the day, APIs are exposing data to their business users. That means the data flowing through might be part of the application, or it might be from another business-to-business API. You might be taking the user’s data and pushing it to a third-party service.

We’ve all seen the attacks on very sophisticated technology companies. These are very hard problems. As a developer myself, I can tell you what keeps me up most of the time: Am I doing the right thing when it comes to the functionality of my application? Am I doing the right thing when it comes to the overall quality of it? Am I doing the right thing when it comes to delivering the right kind of performance? Am I meeting the performance expectations of my users?


What do I, as a developer, think about the security of every single API that I’m writing? At the end of the day, it’s about the data that is getting exposed through these APIs. It’s important now to understand how this data is getting used. How is this data getting passed around through internal services and third-party APIs? That’s where the risk associated with your API is.

Gardner: Given that we have a different type of security problem to solve, what was your overarching vision for making APIs both powerful and robust? What is it in your background that helped you get to this vision of how the world should be?

Nagaraj: If you dial back the clock, Jyoti Bansal, my co-founder at Traceable, and I built the company AppDynamics, which was at the forefront of helping developers and DevOps teams understand their applications’ performance. When that product started, there was only a basic understanding of how applications performed and were delivered to customers. Over time, we started to think about this in a different way. One of the goals at AppDynamics was to understand applications from the ground up. You had to understand how these applications, with their modules, sub-modules, and sub-services, were interacting with each other.


A basic understanding was required to learn if the end-user experience was being delivered with the expected performance. That gave rise to application performance management (APM) in terms of a fuller understanding of an application’s underlying performance itself.

From an AppDynamics perspective, it was very important for us to know how the services were impacting each other. That means when a call gets made from service A to service B, you should understand how much time was spent within each service, how much was spent between the services, and how much total time was spent delivering the data back to the user.

This is all in the performance context. But one of the key things we clearly knew as we started Traceable AI was that APIs were exploding. As we talked about with the API Economy, every one of the customers Traceable started to talk to asked us about more than just the performance aspects of APIs. They also wanted to know whether these APIs and applications were secure. That’s where they were having a difficult time. As much as developers like to make sure that APIs are secure, they are unable to do it simply because they don’t understand what goes into securing APIs.

That’s when we started to think about how to bring some of the learning we had in the past around application performance for developers and DevOps teams, and bring that to an understanding of APIs and services. We had to think about application security in a new way.

We started Traceable AI to find the best way to understand applications and the interactions of the applications, as well as understanding the uses. The way to do it was the technology built over the last decade for distributed tracing. By helping us trace the calls from one service to another, we were able to tap the data flowing through the services to understand the context of the data and services.

From the context and the data, you can learn who the users of these APIs are, what type of data is flowing, and which APIs are interacting with each other. You can see which APIs are getting called as part of a single-user session, for example, and from which third-party APIs the data is being pulled from or pushed to.

This overall context is what we wanted to understand. That’s where we started, and we built on the existing tracing technology to deliver an open-source platform, called Hypertrace. Developers can easily use it for all kinds of tracing use cases, including performance. We have quite a few customers that have started to use it as an open-source resource.

But the goal for us was to use that distributed tracing technology to solve application security challenges. It all starts with so many customers saying, “Hey, I don’t even know where my APIs exist. Developers seem to be pushing out a lot of APIs, and we don’t understand where these APIs are. How are they impacting our overall business in terms of security? What if some of these things get exposed, what happens then? If you must do a forensic analysis of these, what happens then?”

See it to secure it with tracing

We said, “Let’s use this technology to understand the applications from the ground up, and detect all these APIs from the ground up.” If customers don’t understand where the APIs exist, and what the purposes of these APIs are, then they won’t be able to secure them. For us, the basic concept was bringing the discovery of these applications and APIs into focus so that customers can understand them. That’s the vision of where we started.

Then, based on that, we said, “Once they discover and understand what APIs they have, let’s go further to understand what the normal behavior of these APIs is.”

Once APIs are published, there are tools to document those APIs in the form of an OpenAPI or Swagger spec. But if you talk to most enterprises, those specs are rarely maintained. What developers do very well is ship code. They ship good functionality; they try to ship bug-free code that performs well.

But, at the same time, the documentation aspects of it are where it gets weak because they’re continuously shipping. Because the code is changing continuously, from a continuous integration/continuous delivery (CI/CD) perspective, the developers are not able to continuously keep the spec documentation up-to-date, especially as it continuously gets deployed and redeployed into production.
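
The spec-drift problem can be made concrete with a small sketch. Assuming a hypothetical inventory where endpoints are represented as (method, path) pairs, comparing the declared spec against observed production traffic surfaces both undocumented “shadow” endpoints and documented-but-unused ones:

```python
# Hypothetical declared spec and observed traffic, each endpoint keyed
# as a (method, path-template) pair. The endpoints are invented examples.
declared_spec = {
    ("GET", "/users/{id}"),
    ("POST", "/orders"),
}

observed_calls = [
    ("GET", "/users/{id}"),
    ("POST", "/orders"),
    ("DELETE", "/orders/{id}"),  # shipped, but never documented
    ("GET", "/internal/debug"),  # shadow endpoint
]


def find_drift(spec, observed):
    """Split observed traffic into undocumented and unused endpoints."""
    seen = set(observed)
    shadow = sorted(seen - spec)   # running in production, missing from spec
    unused = sorted(spec - seen)   # documented, but no traffic observed
    return shadow, unused


shadow, unused = find_drift(declared_spec, observed_calls)
print("undocumented endpoints:", shadow)
print("documented but unused:", unused)
```

Run continuously against live traffic rather than a one-off audit, a comparison like this keeps the documented spec honest as code is deployed and redeployed.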

The whole DevSecOps movement needs to come together so the security practitioners are embedded with the developer and DevOps teams. That means the security folks have to have a continuous understanding of the security practices to ensure the APIs that are coming online are understood.


Our customers now also are expecting our solution to help them automate these things. They want to automatically understand the risks of APIs -- which APIs should be blocked from being deployed into production and which APIs should be monitored more. There needs to be a cycle of observing these APIs on a continuous basis. It’s very, very critical.

From our perspective, once we discover the APIs and build an ongoing understanding of them, we then want to protect those APIs before they get into production.

The inability to properly protect these APIs is not because some small company doesn’t have the technology skills or the proper engineering. It’s not about developers not having the right kind of training. We are talking about capable companies like Facebook, Shopify, and Tesla. These are technology-rich companies that are still having these issues because the APIs are continuously evolving. And there are still siloed pieces of development. That means in some cases they might understand the dependencies of the services, but in a lot of cases they don’t fully understand the dependencies and the security implications because of those dependencies.

This reality exposes a lot of different types of attacks, such as business logic attacks, as you and Jyoti talked about in your previous conversations. We know why those are very, very critical, right?


How do you protect against these business logic vulnerabilities? API discovery and understanding API risk are key. Then, on top of those, the protection aspects are critical. So, that was where we started. This is part of the vision that we have built out.

Because of the way our new platform has been built, we enable all these understandings. We want to expose these understandings to our customers so they can go and hunt for different types of attacks that may be lurking. They can also use and analyze this information not just for heading off prospective attacks but to help influence all the different types of development and security activities.

This was the vision we began with. How do you bring observability into application security? That’s what we built, and it helps our customers evolve their overall application security practices.

Gardner: In now understanding your vision, and to avoid a firehose of data and observations, how did you design the Traceable platform to attain automation around API intelligence? How did you make API observability a value that scales?

Continuous comprehension

Nagaraj: One of the key aspects of building a solution is to not just throw data at your customers. That means you’re curating the data; you’re not just presenting a data lake and asking them to slice, dice, and analyze it through manual processes. The goal from the get-go for us was to understand the APIs and to categorize them in useful ways.

That means we must understand which APIs are external-facing, which are internal-facing, and where the sensitive data is. What amount and type of sensitive data is getting carried through these APIs? Who are the users of these APIs? What roles do they have with an API?
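
One hedged sketch of how such categorization might work: scan observed payloads for sensitive-data patterns and roll the findings up per endpoint. The patterns and endpoints here are illustrative only; a real classifier would be far more sophisticated than a couple of regular expressions:

```python
import re

# Illustrative patterns only; a production classifier would use far
# richer detection than a few regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def classify_payload(payload):
    """Return the set of sensitive-data types observed in a payload."""
    found = set()
    for value in payload.values():
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(str(value)):
                found.add(label)
    return found


# Roll observed traffic up per endpoint to build a risk profile.
endpoint_profile = {}
samples = [
    ("/users", {"email": "jane@example.com", "name": "Jane"}),
    ("/health", {"status": "ok"}),
]
for endpoint, payload in samples:
    endpoint_profile.setdefault(endpoint, set()).update(classify_payload(payload))

print(endpoint_profile)
```

A profile like this is what lets a tool distinguish an endpoint carrying personal data from one that only reports service health, and prioritize attention accordingly.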

We are also building a wealth of insights into how the APIs themselves behave. This helps our customers know what to focus on. It is not just about the data. Data forms a basis for all these other insights. It’s not about presenting the data to the customers and saying, “Hey, go ahead and figure things out yourself.” 

We bring insights that enable the security and operations teams -- along with the developers and DevSecOps teams -- to know what security aspects to focus on. That was a key principle we started to build the product on.

The second principle is that we know the security and operations teams are very swamped. Most of the time they are under-resourced in terms of the right people. It was therefore very important that the data we present to those teams is actionable. The types of protection we provide from detection of anomalies must have very low levels of false positives. That was one of the key aspects of building our solution as well.

A third guiding principle for us, from the DevSecOps team’s perspective, is to give them actionable data to understand the code that is being deployed even when the services are deployed in a cloud-native fashion. How do you understand at the code level, which ones are making a database call and where that data is flowing to? How do you know which cloud-based APIs are making third-party API calls to know if there are vulnerabilities? That is also very important to manage.

We have taken these principles very seriously as we built the solution. We bring our deep understanding of these APIs together with artificial intelligence (AI) and machine learning (ML) on top of the data to extract the right insights -- and make sure those are actionable insights for our users. That is how we built the platform from the ground up. Because continuous delivery (CD) is how applications are deployed today, it’s very important that we are continuously providing these insights.


It’s not enough to just say, “Hey, here are your APIs. Here are the insights on top of those, and here is where you should be focusing from a risk perspective.” We must also continuously adjust and gain new insights as the APIs evolve and change.

There was one last thing we set out to do. We knew our customers are on a journey to microservices. That means we must provide the solution across diverse infrastructures: for customers fully in a cloud-native microservices environment, for customers making the journey from legacy, monolithic applications, and for everything in between. We must provide a bridge for them to reach their destinations regardless of where they are.

Gardner: Yes, Traceable AI recently released your platform’s first freely available offering in August. Now that it’s in the marketplace, you’re providing strong value to developers by helping them iterate, improve, and catch mistakes in their API design and use. Additionally, by being able to identify vulnerabilities in production, you’re also helping security operations teams. They can limit the damage when something goes wrong.

By serving both of those two constituencies, you’re able to bridge the gap between them. Consequently, there’s a cultural assimilation value between the developers and the security teams. Is that cultural bond what you expected?

Reduce risk with secure interactions

Nagaraj: Absolutely. I think you said it right. In a lot of cases, these organizations are rapidly getting bigger and bigger. Typically, today’s microservices-based, API-driven development teams have six to eight members building many pieces of functionality, which eventually form an overall application. That’s the case internally at Traceable AI, too, as we build out our product and platform.

And so, in those cases, it’s very important that there is an understanding around how API requests come into an overall application. How do they translate across all the different services deployed? What are the services -- defined as part of those small teams -- and how are they interacting with each other to deliver a single customer’s request? That has a huge impact on understanding the overall risk to the application itself.

The overall risk in a lot of cases is based on a combination of factors driven by all the APIs being exposed to those applications. But knowing all the APIs interacting with these services -- and the data that’s going through these services -- is very important to get a holistic understanding of the application, and the overall application infrastructure, to make sure you’re delivering security at an application level.


It’s no longer enough just to say, “Yes, we are secure. We’re practicing all the secure-coding practices.” You must also ask, “But what are the interactions with the rest of the organization?” That’s why it was essential for us to build what we call API Intelligence from the ground up based on the actual data. We attain a deeper understanding of the data itself.

That intelligence now helps us say, “Hey, here are all the APIs used across your organization. Here’s how they’re interacting with each other. Here’s how the data goes between them. Here are the third-party APIs being accessed as part of those services.”

We get that holistic understanding. That broad and inclusive view is very important because it’s just not about external APIs being accessed. It includes all the internal APIs being built and used, as well, from the many small teams.

Customers often tell me after using our solution that their developers are shocked there are so many APIs in use. In some cases, they thought they were duplicate APIs. They never expected those APIs to show up as part of any single service. It feels good to hear that we are bringing that level of visibility and realization. 

Next, based on our API Intelligence, comes the understanding of the risks. And that is so very important because once the developers understand the risks associated with a particular API, the way they go about protecting them also becomes very important. It means the vulnerabilities are going to get prioritized and then the fixes are going to be prioritized the right way, too. The ways they protect the APIs and put in the guards against these API vulnerabilities will change.

At the end of the day, the goal for us is to bring together the developers and the DevOps and security teams. Whether you look at them as a single team or separate teams, it doesn’t matter for an organization. They all must work together to make security happen. We wanted to provide a single pane of glass for them to all see the same types of data and insights.

Gardner: I have been impressed that the single pane view can impact so many different roles and cultures. I also was impressed with the interface. It allows those different personas to drill down specific to the context of their roles and objectives.

Tell us how that drill-down capability within the Traceable AI user interface (UI) gives developers an opportunity to compress the time needed to understand what’s going on in API production -- and to bring that knowledge back into pre-production for the next iteration.

Ounce of pre-production prevention

Nagaraj: One of the key things in any development lifecycle is the stages of testing you go through. Typically, applications get tested in the development and quality assurance (QA) stages along the way.

But one of the “testing” opportunities that can get missed in pre-production is to learn from the production data itself. That is what we are addressing here. As a developer, I like to think that all the tests being written in my pre-production environment cover all the use cases. But the reality is that the way customers use the applications in production can be different than expected. And the type of data that flows through can be different too.

This is even more true now because of API-driven applications. With API-driven applications, the developer has an intent for how their APIs are used, and most of their tests mimic that intent. But once you give the APIs to third-party developers -- or hackers -- they might see the same APIs that the developer sees yet use them in unintended ways. Once they gain an understanding of how the API logic has been built internally, those external users might be able to get a lot more information than they should be able to.
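
A toy example of learning “normal” versus unintended use: a legitimate user typically touches a handful of their own records, while an attacker probing a broken-object-level-authorization flaw walks through many object IDs. The threshold below is an assumption for this sketch, not a real detection rule:

```python
from collections import defaultdict

# Toy baseline: a normal user touches a handful of their own records,
# while an attacker enumerating objects walks through many IDs.
# The threshold is an assumed cutoff for this sketch only.
ENUMERATION_THRESHOLD = 10


def detect_enumeration(access_log, threshold=ENUMERATION_THRESHOLD):
    """Flag users who request an unusually wide spread of object IDs."""
    ids_per_user = defaultdict(set)
    for user, object_id in access_log:
        ids_per_user[user].add(object_id)
    return {u for u, ids in ids_per_user.items() if len(ids) >= threshold}


log = [("alice", 7), ("alice", 8)] + [("mallory", i) for i in range(50)]
print(detect_enumeration(log))  # → {'mallory'}
```

A production system would learn per-API baselines from traffic rather than hard-code a cutoff, but the shape of the check -- observed behavior measured against expected intent -- is the same.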

If we understand the true risks associated with these APIs in use -- such as users accessing APIs they aren’t supposed to be accessing -- we can feed that in-production-use knowledge back into pre-production. That means decisions about which APIs need to be protected differently can be made by using the right kinds of controls.

This is where it gets complex. Rather than treating production and pre-production as silos, the thought process is to bring the production learning and knowledge to pre-production to improve the application’s security posture, because we know how certain APIs are actually being used.

The core benefit to customers is that they can understand their API risks earlier so that they can protect their APIs better.

Gardner: The good news is there’s new value in post-production and pre-production. But who oversees bringing the Traceable AI platform into the organization? Who signs the PO? Who are the people who should be most aware of this value?

APIs behavior in a single pane of glass

Nagaraj: Yes, there are typically various types of organizations at work. It’s no longer a case of a central security team making all the decisions. There are engineering-driven DevOps teams that are security-conscious. That means many of our customers are engineering leaders who are making security their top priority. It also means Traceable AI gets deployed across both pre-production and production as part of their total development lifecycle.

One of the things we are exploring as part of our August launch is making the solution increasingly self-service. We’ve provided a low-friction way for developers and DevOps teams to get value from Traceable AI in their pre-production and production systems, to make it part of their full lifecycle. We are heavily focused on enabling our customers to have easy deployment as a self-service experience.

On the other hand, when the security and operations teams need to encourage the developers or DevOps teams to deploy Traceable AI, then, of course, that ease-of-use experience is also very important.

A big value for the developers is that they get a single pane of glass, which means they are seeing the same information that the security teams are seeing. It is no longer the security people saying, “There are these vulnerabilities, which are a problem,” or, “There are these attacks we are seeing,” while the developers don’t have the same data. Now we offer the same types of data, bringing observability from a security perspective to provide the same analysis to both sides of the equation. This makes everyone a more effective team solving the security problems.

Gardner: And, of course, you’re also taking advantage of the ability to enter an organization through the open-source model. You have a free open-source edition, in addition to your commercial edition, that invites people to customize, experiment, and tailor the observability to their particular use cases -- and then share that development back. How does your open-source approach work?

Nagaraj: We built a distributed tracing platform, which was needed to support all the security use cases. That forms a core component for our platform because we wanted to bring in tracing and observability for API security.

That distributed tracing platform, called Hypertrace, is part of the Traceable AI solution and enables developers to adopt the distributed tracing element by itself. As you mentioned, we are making it available for free and as open source.

We’ve also launched a free tier of the Traceable AI security solution, which includes basic versions of API discovery, risk monitoring, and protection for securing your applications. This is available to everybody.

Our idea was to democratize access to good API security tools and to help developers easily get API observability and risk assessment, so that everyone can be a proactive part of the solution. To do this, we launched the Free tier and the Team tier, which include progressively more of the functionality found in our Enterprise tier.


That means, as a DevOps team, you’re able to understand your APIs and the risks associated with them, and to enable basic protections on those APIs. We’re very excited about opening this up to everyone.

But the thing that excites the engineer in me is that we are making our distributed tracing platform source code available for people to go build solutions on top of. They can use it in their own environments. At the end of the day, the developers can solve their own business problems. We are in the business of helping them solve the security problems, and they can solve their other business needs.

For us, it is about how do we secure their APIs. How do we help them understand their APIs? How can they best discover and understand the risks associated with those APIs? And that’s our core. We are putting it out there for developers and DevOps teams to use.

Gardner: Sanjay, going back to your vision and the rather large task you set out for yourselves, as Traceable AI becomes embedded in organizations, is there an opportunity for the API economy to further blossom?

How big of an impact do you expect to have over the next few years, and how important is that for not only the API economy, but the whole economy?

Economy thrives with continuous delivery

Nagaraj: From an API economy perspective, it’s thriving because of the robust use of these APIs and the reuse of services. Any time we hear news about APIs getting hacked or data getting lost, there is an inclination to say, “Hey, let’s stop the code from shipping,” or, “Let's not ship too many features,” or, “Let's make sure it is secure enough before it ships.”

But that means the continuous delivery benefits powering the API economy are not going to work. We, as a community of developers, must come up with ways of ensuring security and privacy so we can continue to maintain the pace of a continuous software development life cycle. Otherwise, this will all stall. And these challenges will only get bigger because APIs are here to stay. The API economy is here to stay. APIs will be continuously evolving, and they will be delivering more and more functionality on a continuous basis.


The only way we can get better at this is by bringing in the technology that enables the continuous delivery of code that is secured in pre-production and not just at runtime. And that’s the goal from our perspective, to build that long-term and viable solution for enterprises.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on how the rapidly expanding use of APIs to advance business services has created a complex constellation of interrelated services.

And we’ve learned how an AI-enabled security capability in a new platform from Traceable AI is designed from the ground up to discover, secure, and optimize the API underpinnings of today’s digital businesses for teams across the full lifecycle of development.

So, a big thank you to our guest, Sanjay Nagaraj, Chief Technology Officer and Co-Founder at Traceable.ai. Thank you so much, Sanjay.

Nagaraj: Thanks a lot.

Gardner: And a big thank you as well to our audience for joining this BriefingsDirect API resiliency discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Traceable AI-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your business community and do come back for our next chapter.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable AI.

Transcript of a discussion on a new platform designed from the ground up specifically to define, manage, secure, and optimize the API underpinnings for so much of what drives today’s digital businesses. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.

Monday, August 30, 2021

How to Migrate Your Organization to a More Security-Minded Culture

Transcript of a discussion on creating broader awareness of security risks and building a security-minded culture across organizations and ecosystems.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable AI.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Bringing broader awareness of security risks and building a security-minded culture within any public or private organization has been a top priority for years. Yet halfway through 2021, IT security remains as challenged as ever -- with multiple major breaches and attacks costing tens of millions of dollars occurring nearly weekly.

Why are the threat vectors not declining? Why, with all the tools and investment, are businesses still regularly being held up for ransom or having their data breached? To what degree are behavior, culture, attitude, and organizational dissonance to blame?

Stay with us now as we probe into these more human elements of IT security with a leading chief information security officer (CISO).


To learn more about adjusting the culture of security to make organizations more resilient, please join me in welcoming Adrian Ludwig, CISO at Atlassian. Welcome, Adrian.

Adrian Ludwig: Hi, Dana. Glad to be here.

Gardner: Adrian, we are constantly bombarded with headlines showing how IT security is failing. Yet, for many people, they continue on their merry way -- business as usual.

Are we now living in a world where such breaches amount to acceptable losses? Are people not concerned because the attacks are perceived as someone else’s problem?

Security on the forefront

Ludwig

Ludwig: A lot of that is probably true, depending on whom you ask and what their state of mind is on a given day. We’re definitely seeing a lot more than we’ve seen in the past. And there are some interesting twists to the language. What we’re seeing does not necessarily imply that there is more exploitation going on or that there are more problems -- but it’s definitely the case that we’re getting a lot more visibility.

I think it’s a little bit of both. There probably are more attacks going on, and we also have better visibility.

Gardner: Isn’t security something we should all be thinking about, not just the CISOs?

Ludwig: It’s interesting how people don’t want to think about it. They appoint somebody, give them a title, and then say that person is now responsible for making security happen.

But the reality is, within any organization, doing the right thing -- whether that be security, keeping track of the money, or making sure that things are going the way you’re expecting -- is a responsibility that’s shared across the entire organization. That’s something that we are now becoming more accustomed to. The security space is realizing it’s not just about the security folks doing a good job. It’s about enabling the entire organization to understand what’s important to be more secure and making that as easy as possible. So, there’s an element of culture change and of improving the entire organization.

Gardner: What’s making these softer approaches -- behavior, culture, management, and attitude – more important now? Is there something about security technology that has changed that makes us now need to look at how people think?

Ludwig: We’re beginning to realize that technology is not going to solve all our problems. When I first went into the security business, the company I worked for, a government agency, still had posters on the wall from World War II: Loose lips sink ships.

The idea of security culture is not new, but the broad-based awareness is: across organizations, any person could be subject to phishing or have their credentials taken -- those mistakes could originate at any place in the organization. It probably helps that we’ve all been locked in our houses for the last year, paying a lot more attention to the media, and hearing about the attacks that have been going on at governments, the hacking, and all those things. That has raised awareness as well.

Gardner: It’s confounding that people authenticate better in their personal lives. They don’t want their credit cards or bank accounts pillaged. Yet they have a double standard when it comes to protecting themselves versus protecting the company they work for.

Data safer at home or work?

Ludwig: Yes, it’s interesting. We used to think enterprise security could be more difficult from the user-experience standpoint -- that people would put up with it because it was work.

But the opposite might be true, that people are more self-motivated in the consumer space and they’re willing to put up with something more challenging than they would in an enterprise. There might be some truth to that, Dana.

Gardner: The passwords I use for my bank account are long and complex, and the passwords I use when I’m in the business environment … maybe not so much. It gets us back to how you think and your attitude for improved security. How do we get people to think differently?

Ludwig: There are a few different things to consider. One is that the security people need to think differently. It’s not necessarily about changing the behavior of every employee in the company. Some of it is about figuring out how to implement critical solutions that provide security without changing behavior.

There is a phrase, the paved path or road; so, making the secure way the easy way to do something. When people started using YubiKey U2F [an open authentication standard that enables internet users to securely access any number of online services with a single security key] as a second-factor authentication, it was actually a lot easier than having to input your password all over the place -- and it’s more secure.

That’s the kind of thing we’re looking for. How do we enable enhanced security while also having a better user experience? What’s true in authentication could be true in any number of other places as well.
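
U2F itself relies on hardware-backed challenge-response and a browser, so it doesn’t reduce to a few lines of code; but a related second factor, the time-based one-time password of RFC 6238, can be sketched with only the Python standard library to show how small the moving parts of a second factor are:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if at_time is None else at_time
    counter = struct.pack(">Q", int(now // step))   # 8-byte big-endian interval count
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server verifying a login would compare the user-entered code against `totp(shared_secret)` for the current interval (and usually one interval on either side, to tolerate clock drift).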

Second, we need to focus on developers. We need to make the developer experience more secure and build more confidence and trustworthiness in the software we’re building, as well as in the tools used to build it.

Developers find strength

Gardner: You brought up another point of interest to me. There’s a mindset that when you hand something off in an organization -- it could be from app development into production, or from product design into manufacturing -- people like to move on. But with security, that type of hand-off can be a risk factor.

Beginning with developers, how would you change that hand-off? Should developers be thinking about security in the same way that the IT production people do?

Ludwig: It’s tricky. Security is about having the whole system work the way that everybody expects it to. If there’s a breakdown anywhere in that system, and it doesn’t work the way you’re expecting, then you say, “Oh, it’s insecure.” But no one has figured out what those hidden expectations are.

A developer expects the code they write isn’t going to have vulnerabilities. Even if they make a mistake, even if there’s a performance bug, that shouldn’t introduce a security problem. And there are improvements being made in programming languages to help with that.

Certain languages are highly prone to security failures. I grew up using C and C++. Security wasn’t something that was even thought of in the design of those languages. With Java, a lot more security was thought of in the design of the language, so it’s intrinsically safer. Does that mean there are no security issues that can happen if you’re using Java? No.

Similar types of expectations exist at other places in the development pipeline as well.

Gardner: I suppose another shift has been from applications developed to reside in a data center, behind firewalls and security perimeters. But now -- with microservices, cloud-native applications, and multiple application programming interfaces (APIs) being brought together interdependently -- we’re no longer aware of where the code is running.

Don’t you have to think differently as a developer because of the way applications in production have shifted?

Ludwig: Yes, it’s definitely made a big difference. We used to describe applications as being monoliths. There were very few parts of the application that were exposed.

At this point, most applications are microservices. And that means across an application, there might be 1,000 different parts of the application that are publicly exposed. They all must have some level of security checks being done on them to make sure that any input -- which might be coming from the other side of the world -- is handled correctly.
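
To make that concrete, here is a minimal sketch of the kind of per-endpoint input check every exposed microservice route needs. The endpoint, field names, and limits are hypothetical, chosen only for illustration:

```python
import json

# Hypothetical schema for one exposed endpoint:
# field name -> (required type, extra validation rule)
TRANSFER_SCHEMA = {
    "account_id": (str, lambda v: v.isalnum() and len(v) <= 32),
    "amount_cents": (int, lambda v: 0 < v <= 1_000_000),
}

def validate_body(raw, schema):
    """Parse a request body and reject anything outside the schema."""
    try:
        data = json.loads(raw)
    except ValueError:
        raise ValueError("body is not valid JSON")
    # Reject extra fields as well as missing ones -- attackers probe
    # with parameters the endpoint never advertised.
    if not isinstance(data, dict) or set(data) != set(schema):
        raise ValueError("missing or unexpected fields")
    for field, (expected_type, is_ok) in schema.items():
        value = data[field]
        if not isinstance(value, expected_type) or not is_ok(value):
            raise ValueError(f"invalid value for {field!r}")
    return data
```

Multiply a check like this by a thousand exposed routes, and it becomes clear why teams want the validation generated or enforced centrally rather than hand-written per endpoint.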

So, yes, the design and the architecture have definitely exposed a lot more of the app’s surface. There’s been a bit of a race to make the tools better, but the architectures are getting more complicated. And I don’t know, it’s neck and neck on whether things are getting more secure or they’re getting less secure as these architectures get bigger and more exposed.

We have to think about that. How do we design processes to deal with that? How do you design technology, and what’s the culture that needs to be in place? I think part of it is having a culture of every single developer being conscious of the fact that the decisions they’re making have security implications. So that’s a lot of work to do.

Gardner: Another attitude adjustment that’s necessary is assuming that breaches are going to happen and stifling them as quickly as possible. It’s a little different mindset, but it makes sense to have more people involved in looking for anomalies -- and willing to have their data and behaviors examined for them.

Is there a needed cultural shift that goes with assuming you’re going to be breached and making sure the damage is limited?

Assume the worst to limit damage

Ludwig: Yes. A big part of the cultural shift is being comfortable taking feedback from anybody that you have a problem and that there’s something that you need to fix. That’s the first step.

Companies should let anybody identify a security problem -- and that could be anybody inside or outside of the company; bug bounties are one example. We’re in a bit of a revolution in terms of enabling better visibility into potential security problems.

But once you have that sort of culture, you start thinking, “Okay. How do I actually monitor what’s going on in each of the different areas?” With that visibility, exposure, and understanding what’s going in and out of specific applications, you can detect when there’s something you’re not expecting. That turns out to be really difficult, if what you’re looking at is very big and very, very complicated.

Decomposing an application down into smaller pieces, being able to trace the behaviors within those pieces, and understanding which APIs each of those different microservices is exposing turns out to be really important.

If you combine decomposing applications into smaller pieces with monitoring what’s going on in them and creating a culture where anybody can find a potential security flaw, surface it, and react to it -- those are good building blocks for having an environment where you have a lot more security than you would have otherwise.
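
As a toy illustration of that combination, the sketch below baselines per-endpoint request volumes and flags the latest interval when it deviates sharply from history. The service and endpoint names are made up, and real tracing systems look at far richer signals than counts:

```python
from collections import defaultdict
from statistics import mean, stdev

def find_anomalies(observations, min_history=5, threshold=3.0):
    """observations: iterable of (service, endpoint, requests_per_interval),
    oldest first. Flags endpoints whose latest count sits more than
    `threshold` standard deviations from that endpoint's own history."""
    history = defaultdict(list)
    for service, endpoint, count in observations:
        history[(service, endpoint)].append(count)
    flagged = []
    for key, counts in history.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < min_history:
            continue  # too little history to judge
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            anomalous = latest != mu
        else:
            anomalous = abs(latest - mu) / sigma > threshold
        if anomalous:
            flagged.append(key)
    return flagged
```

The point of decomposition shows up here directly: the smaller the unit being baselined, the more obvious an out-of-pattern burst becomes.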

Gardner: Another shift we’ve seen in the past several years is the advent of big data. Not only can we manage big data quickly, but we can also do it at a reasonable cost. That has brought about machine learning (ML) and movement to artificial intelligence (AI). So, now there’s an opportunity to put another arrow in our quiver of tools and use big data ML to buttress our security and provide a new culture of awareness as a result.

Ludwig: I think so. There are a bunch of companies trying to do that, to look at the patterns that exist within applications, and understand what those patterns look like. In some instances, they can alert you when there’s something not operating the way that is expected and maybe guide you to rearchitecting and make your applications more efficient and secure.

There are a few different approaches being explored. Ultimately, at this point, most applications are so complicated -- and have been developed in such a chaotic manner -- it’s impossible to understand what’s going on inside of them. That’s the right time that the robots give it a shot and see if we can figure it out by turning the machines on themselves.

Gardner: Yes. Fight fire with fire.

Let’s get back to the culture of security. If you ask the people in the company to think differently about security, they all nod their heads and say they’ll try. But there has to be a leadership shift, too. Who is in charge of such security messaging? Who has the best voice for having the whole company think differently and better about security? Who’s in charge of security?

C-suite must take the lead

Ludwig: Not the security people. It will surprise a lot of people to hear me say that. The reality is, if you’re in security, you’re not normal. And the normal people don’t want to hear from the not-normal person who’s paranoid that they need to be more paranoid.

That’s something it took me several years to realize. If the security person keeps saying, “The sky is falling, the sky is falling,” people aren’t going to listen. They say, “Security is important.” And the others reply, “Yes, of course, security is important to you, you’re the security guy.”

If the head of the business, or the CEO, consistently says, “We need to make this a priority. Security is really important, and these are the people who are going to help us understand what that means and how to execute on it,” then that ends up being a really healthy relationship.

The companies I’ve seen turn themselves around to become good at security are the ones such as Microsoft, Google, or others where the CEO made it personal, and said, “We’re going to fix this, and it’s my number-one priority. We’re going to invest in it, and I’m going to hire a great team of security professionals to help us make that happen. I’m going to work with them and enable them to be successful.”

Alternatively, there are companies where the CEO says, “Oh, the board has asked us to get a good security person, so I’ve hired this person and you should do what he says.” That’s the path to a disgruntled bunch of folks across the entire organization. They will conclude that security is just lip service, it’s not that important. “We’re just doing it because we have to,” they will say. And that is not where you want to end up.

Gardner: You can’t just talk the talk, you have to walk the walk and do it all the time, over and over again, with a loud voice, right?

Ludwig: Yes. And eventually it gets quieter. Eventually, you don’t need to have the top level saying this is the most important thing. It becomes part of the culture. People realize it’s not just the way we do things -- it’s a number-one value for us. It’s the number-one thing for our customers, too, and so the culture shift ends up happening.

Gardner: Security mindfulness becomes the fabric within the organization. But to get there requires change and changing behaviors has always been hard.

Are there carrots? Are there sticks? When the top echelon of the organization, public or private, commits to security, how do you then execute on that? Are there some steps that you’ve learned or seen that help people get incentivized -- or whacked upside the head, so to speak, when necessary?

Talk the security talk and listen up

Ludwig: We definitely haven’t gone for “whacked upside the head.” I’m not sure that works for anybody at this point, but maybe I’m just a progressive when it comes to how to properly train employees.

What we have seen work is just talking about it on a regular basis, asking about the things that we’re doing from a security standpoint. Are they working? Are they getting in your way? Honestly, showing that there’s thoughtfulness and concern going into the development of those security improvements goes a long way toward making people more comfortable with following through on them.

A great example is … You roll out two-factor authentication, and then you ask, “Is it getting in the way? Is there anything that we can do to make this better? This is not the be-all and end-all. We want to improve this over time.”

That type of introspection by the security organization is surprising to some people. The idea that the security team doesn’t want it to be disruptive, that they don’t want to get in the way, can go a long way toward it feeling as though these new protections are less disruptive and less problematic than they might otherwise feel.

Gardner: And when the organization is focused on developers? Developers can be, you know …

Ludwig: Ornery?

Gardner: “Ornery” works. If you can make developers work toward a fabric of security mindedness and culture, you can probably do it to anyone. What have you learned on injecting a better security culture within the developer corps?

Ludwig: A lot of it starts, again, at the top. You know, we have core values that invoke vulgarity both to emphasize how important they are and how simple they are.

One of Atlassian’s values is, “Don’t fuck the customer.” And as a result of that, it’s very easy to remember, and it’s very easy to invoke. “Hey, if we don’t do this correctly, that’s going to hurt the customer.” We can’t let that happen as a top-level value.

We also have “Open company, no-bullshit”. If somebody says, “I see a problem over here,” then we need to follow up on it, right? There’s not a temptation to cover it up, to hide it, to pretend it’s not an issue. It’s about driving change and making sure that we’re implementing solutions that actually fix things.

There are countless examples of a feature that was built, and we really want to ship it, but it turns out it’s got a problem and we can’t do it because that would actually be a problem for the customer. So, we back off and go from there.

How to talk about security

Gardner: Words are powerful. Brands are powerful. Messaging is powerful. What you just said made me think, “Maybe the word security isn’t the right word.” If we use the words “customer experience,” maybe that’s better. Have you found that? Is “security” the wrong word nowadays? Maybe we should be thinking about creating an experience at a larger level that connotes success and progress.

Ludwig: Super interesting. Apple doesn’t use the word “security” very much at all. As a consumer brand, what they focus on is privacy, right? The idea that they’ve built highly secure products is motivated by the users’ right to privacy and the users’ desire to have their information remain private. But they don’t talk about security.

I always thought that was a really interesting decision on their part. When I was at Google, we did some branding analysis, and we also came up with insights about how we talked about security. It’s a negative from a customer’s standpoint. And so, most of the references that you’ll see coming out of Google are security and privacy. They always attach those two things together. It’s not a coincidence. I think you’re right that the branding is problematic.

Microsoft uses trustworthy, as in trustworthy computing. So, I guess the rest of us are a little bit slow to pick up on that, but ultimately, it’s a combination of security and a bunch of other things that we’re trying to enable to make sure that the products do what we’re expecting them to do.

Gardner: I like resilience. I think that cuts across these terms because it’s not just the security, it’s how well the product is architected, how well it performs. Is it hardened, in a sense, so that it performs in trying circumstances -- even when there are issues of scale or outside threats, and so forth. How do you like “resilience,” and how does that notion of business continuity come into play when we are trying to improve the culture?

Ludwig: Yes, “resilience” is a pretty good term. It comes up in the pop psychology space as well. You can try to make your children more resilient. Those are the ones that end up being the most successful, right? It certainly is an element of what you’re trying to build.

A “resilient” system is one in which there’s an understanding that it’s not going to be perfect. It’s going to have some setbacks, and you need to have it recoverable when there are setbacks. You need to design with an expectation that there are going to be problems. I still remember the first time I heard about a squirrel shorting out a data center and taking down the whole data center. It can happen, right? It does happen. Or, you know, you get a solar event and that takes down computers.

There are lots of different things that you need to build to recover from accidental threats, and there are ones that are more intentional -- like when somebody deploys ransomware and tries to take your pipeline offline.

Gardner: To be more resilient in our organizations, one of the things that we’ve seen with developers and IT operations is DevOps. Has DevOps been a good lesson for broader resilience? Is there something we can do with other silos in organization to make them more resilient?

DevOps derives from experience

Ludwig: I think so. Ultimately, there are lots of different ways people describe DevOps, but I think about it as taking what used to be a very big thing, acknowledging that you can’t comprehend the complexity of that big thing, and choosing instead to embrace the idea that you should do lots of little things that, in aggregate, end up being a big thing.

And that is a core ethos of DevOps: each individual developer writes a little bit of code and ships it, over and over, very quickly. And they’re responsible for running their own thing -- that’s the operations part of the development. The result is that, over time, you get closer to a good product, because you gain feedback from customers, you see how it’s working in reality, and you get testing that takes place with real data. There are lots of advantages to that. But the critical part, from a security standpoint, is that it makes it possible to respond to security flaws in near real-time.

Often, organizations just aren’t pushing code frequently enough to be able to know how to fix a security problem. They are like, “Oh, our next release window is 90 days from now. I can’t possibly do anything between now and then.” Getting to a point where you have an improvement process that’s really flexible and that’s being exercised every single day is what you get by having DevOps.

And so, if you think about that same mentality for other parts of your organization, it definitely makes them able to react when something unexpected happens.

Gardner: Perhaps we should be looking to our software development organizations for lessons on cultural methods that we can apply elsewhere. They’re on the bleeding edge of being more secure, more productive, and they’re doing it through better communications and culture.

Ludwig: It’s interesting to phrase it that way because that sounds highfalutin, and that they achieved it out of expertise and brilliance. What it really is, is the humbleness of realizing that the compiler tells you your code is wrong every single day. There’s a new user bug every single day. And eventually you get beaten down by all those, and you decide you’re just going to react every single day instead of having this big thing build up.

So, yes, I think DevOps is a good example but it’s a result of realizing how many flaws there are more than anything highfalutin, that’s for sure.

Gardner: The software doesn’t just eat the world; the software can show the world the new, better way.

Ludwig: Yes, hopefully so.

Future best security practices

Gardner: Adrian, any thoughts about the future of better security, privacy, and resilience? How will ML and AI provide more analysis and improvements to come?

Ludwig: Probably the most important thing going on right now in the context of security is the realization by the senior executives and boards that security is something they need to be proponents for. They are pushing to make it possible for organizations to be more secure. That has fascinating ramifications all the way down the line.

If you look at the best security organizations, they know the best way to enable security within their companies and for their customers is to make security as easy as possible. You get a combination of the non-security executive saying, “Security is the number-one thing,” and at the same time, the security executive realizes the number-one thing to implement security is to make it as easy as possible to embrace and to not be disruptive.

And so, we are seeing faster investment in security that works because it’s easier. And I think that’s going to make a huge difference.

There are also several foundational technology shifts that have turned out to be very pro-security, which wasn’t why they were built -- but it’s turning out to be the case. For example, in the consumer space the move toward the web rather than desktop applications has enabled greater security. We saw a movement toward mobile operating systems as a primary mechanism for interacting with the web versus desktop operating systems. It turns out that those had a fundamentally more secure design, and so the risks there have gone down.

The enterprise has been a little slow, but I see the shift away from behind-the-firewall software toward cloud-based and software as a service (SaaS) software as enabling a lot better security for most organizations. Eventually, I think it will be for all organizations.

Those shifts are happening at the same time as we have cultural shifts. I’m really optimistic that over the next decade or two we’re going to get to a point where security is not something we talk about. It’s just something built-in and expected in much the same way as we don’t spend too much time now talking about having access to the Internet. That used to be a critical stumbling block. It’s hard to find a place now that doesn’t or won’t soon have access.

Gardner: These security practices and capabilities become part-and-parcel of good business conduct. We’ll just think of it as doing a good job, and those companies that don’t do a good job will suffer the consequences and the Darwinian nature of capitalism will take over.

Ludwig: I think it will.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on building security-minded cultures within public and private organizations.

And we’ve learned how behavior, culture, attitude, and organizational shifts create both hurdles and solutions for making businesses more intrinsically resilient by nature.


So, join me in thanking our guest, Adrian Ludwig, CISO at Atlassian. Thank you so much, Adrian, I really enjoyed it.

Ludwig: Thanks, Dana. I had a good time as well.

Gardner: And a big thank you to our audience for joining this BriefingsDirect IT security culture discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Traceable AI-sponsored BriefingsDirect interviews.

Stay tuned for our next podcast in this series, with a deep-dive look at new security tools and methods with Sanjay Nagaraj, Chief Technology Officer and Co-Founder at Traceable AI.

Look for other security podcasts and content at www.briefingsdirect.com.

Thanks again for listening. Please pass this along to your business community and do come back for our next chapter.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.

Transcript of a discussion on creating broader awareness of security risks and building a security-minded culture across organizations and ecosystems. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.

You may also be interested in:

      How API security provides a killer use case for ML and AI

      Securing APIs demands tracing and machine learning that analyze behaviors to head off attacks

      Rise of APIs brings new security threat vector -- and need for novel defenses

      Learn More About the Technologies and Solutions Behind Traceable.ai.

      Three Threat Vectors Addressed by Zero Trust App Sec

      Web Application Security is Not API Security

      Does SAST Deliver? The Challenges of Code Scanning.

      Everything You Need to Know About Authentication and Authorization in Web APIs

      Top 5 Ways to Protect Against Data Exposure

      TraceAI: Machine Learning Driven Application and API Security