Monday, August 30, 2021

How to Migrate Your Organization to a More Security-Minded Culture

Transcript of a discussion on creating broader awareness of security risks and building a security-minded culture across organizations and ecosystems.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Bringing broader awareness of security risks and building a security-minded culture within any public or private organization has been a top priority for years. Yet halfway through 2021, IT security remains as much a threat as ever -- with multiple major breaches and attacks costing tens of millions of dollars occurring nearly weekly.

Why are the threat vectors not declining? Why, with all the tools and investment, are businesses still regularly being held up for ransom or having their data breached? To what degree are behavior, culture, attitude, and organizational dissonance to blame?

Stay with us now as we probe into these more human elements of IT security with a leading chief information security officer (CISO).


To learn more about adjusting the culture of security to make organizations more resilient, please join me in welcoming Adrian Ludwig, CISO at Atlassian. Welcome, Adrian.

Adrian Ludwig: Hi, Dana. Glad to be here.

Gardner: Adrian, we are constantly bombarded with headlines showing how IT security is failing. Yet, for many people, they continue on their merry way -- business as usual.

Are we now living in a world where such breaches amount to acceptable losses? Are people not concerned because the attacks are perceived as someone else’s problem?

Security at the forefront

Ludwig: A lot of that is probably true, depending on whom you ask and what their state of mind is on a given day. We’re definitely seeing a lot more than we’ve seen in the past. And there are some interesting twists to the language. What we’re seeing does not necessarily imply that there is more exploitation going on or that there are more problems -- but it’s definitely the case that we’re getting a lot more visibility.

I think it’s a little bit of both. There probably are more attacks going on, and we also have better visibility.

Gardner: Isn’t security something we should all be thinking about, not just the CISOs?

Ludwig: It’s interesting how people don’t want to think about it. They appoint somebody, give them a title, and then say that person is now responsible for making security happen.

But the reality is, within any organization, doing the right thing -- whether that be security, keeping track of the money, or making sure that things are going the way you’re expecting -- is a responsibility that’s shared across the entire organization. That’s something that we are now becoming more accustomed to. The security space is realizing it’s not just about the security folks doing a good job. It’s about enabling the entire organization to understand what’s important to be more secure and making that as easy as possible. So, there’s an element of culture change and of improving the entire organization.

Gardner: What’s making these softer approaches -- behavior, culture, management, and attitude -- more important now? Is there something about security technology that has changed that makes us now need to look at how people think?

Ludwig: We’re beginning to realize that technology is not going to solve all our problems. When I first went into the security business, the company I worked for, a government agency, still had posters on the wall from World War II: Loose lips sink ships.

The idea of security culture is not new, but the awareness is: any person across the organization could be subject to phishing, or could have their credentials taken -- those mistakes could originate at any place in the organization. That broad-based awareness is relatively new. It probably helps that we’ve all been locked in our houses for the last year, paying a lot more attention to the media, and hearing about the attacks that have been going on at governments, the hacking, and all those things. That has raised awareness as well.

Gardner: It’s confounding that people authenticate better in their personal lives. They don’t want their credit cards or bank accounts pillaged. They have a double standard when it comes to what they think about protecting themselves versus protecting the company they work for.

Data safer at home or work?

Ludwig: Yes, it’s interesting. We used to think enterprise security could be more difficult from the user-experience standpoint, because people would put up with it since it was work.

But the opposite might be true, that people are more self-motivated in the consumer space and they’re willing to put up with something more challenging than they would in an enterprise. There might be some truth to that, Dana.

Gardner: The passwords I use for my bank account are long and complex, and the passwords I use when I’m in the business environment … maybe not so much. It gets us back to how you think and your attitude for improved security. How do we get people to think differently?

Ludwig: There’s a few different things to consider. One is that the security people need to think differently. It’s not necessarily about changing the behavior of every employee in the company. Some of it is about figuring out how to implement critical solutions that provide security without changing behavior.

There is a phrase, the “paved path” or “paved road”: making the secure way the easy way to do something. When people started using YubiKey U2F [an open authentication standard that enables internet users to securely access any number of online services with a single security key] as a second-factor authentication, it was actually a lot easier than having to input your password all over the place -- and it’s more secure.

That’s the kind of thing we’re looking for. How do we enable enhanced security while also having a better user experience? What’s true in authentication could be true in any number of other places as well.
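
For a concrete flavor of that trade-off, here is a minimal sketch of verifying a low-friction second factor. U2F itself involves a hardware key and a browser challenge-response exchange, so as a stand-in this shows a TOTP verifier (RFC 6238) in plain Python; the secret, drift window, and function names are illustrative assumptions, not any particular product’s implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: float | None = None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str, drift: int = 1, step: int = 30) -> bool:
    """Accept codes from adjacent time steps to tolerate clock skew."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * step, step), submitted)
        for i in range(-drift, drift + 1)
    )

secret = base64.b32encode(b"supersecretkey12").decode()  # illustrative secret
assert verify(secret, totp(secret))
```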

Second, we need to focus on developers. We need to make the developer experience more secure and build more confidence and trustworthiness in the software we’re building, as well as in the types of tools used to build it.

Developers find strength

Gardner: You brought up another point of interest to me. There’s a mindset that when you hand something off in an organization -- it could be from app development into production, or from product design into manufacturing -- people like to move on. But with security, that type of hand-off can be a risk factor.

Beginning with developers, how would you change that hand-off? Should developers be thinking about security in the same way that the IT production people do?

Ludwig: It’s tricky. Security is about having the whole system work the way that everybody expects it to. If there’s a breakdown anywhere in that system, and it doesn’t work the way you’re expecting, then you say, “Oh, it’s insecure.” But no one has figured out what those hidden expectations are.

A developer expects the code they write isn’t going to have vulnerabilities. Even if they make a mistake, even if there’s a performance bug, that shouldn’t introduce a security problem. And there are improvements being made in programming languages to help with that.

Certain languages are highly prone to common security failures. I grew up using C and C++; security wasn’t something that was even thought of in the design of those languages. With Java, a lot more security was thought of in the design of the language, so it’s intrinsically safer. Does that mean there are no security issues that can happen if you’re using Java? No.

Similar types of expectations exist at other places in the development pipeline as well.

Gardner: I suppose another shift has been from applications developed to reside in a data center, behind firewalls and security perimeters. But now -- with microservices, cloud-native applications, and multiple application programming interfaces (APIs) being brought together interdependently -- we’re no longer aware of where the code is running.

Don’t you have to think differently as a developer because of the way applications in production have shifted?

Ludwig: Yes, it’s definitely made a big difference. We used to describe applications as being monoliths. There were very few parts of the application that were exposed.

At this point, most applications are microservices. And that means across an application, there might be 1,000 different parts of the application that are publicly exposed. They all must have some level of security checks done on them, to make sure that an input that might be coming from the other side of the world is being handled correctly.
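
As a toy illustration of what “some level of security checks” means at each exposed piece, here is a hand-rolled JSON input validator in Python. The schema and field names are hypothetical; a real service would more likely lean on a schema library, but the shape of the check is the same.

```python
import json

# Illustrative schema: field name -> (expected type, validator predicate)
ORDER_SCHEMA = {
    "sku":      (str, lambda v: 1 <= len(v) <= 64),
    "quantity": (int, lambda v: 1 <= v <= 1000),
    "note":     (str, lambda v: len(v) <= 500),
}

def validate_order(raw: bytes) -> dict:
    """Reject anything that doesn't match the schema exactly."""
    payload = json.loads(raw)
    if not isinstance(payload, dict):
        raise ValueError("payload must be a JSON object")
    unknown = set(payload) - set(ORDER_SCHEMA)
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    for field, (typ, ok) in ORDER_SCHEMA.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        value = payload[field]
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(value, typ) or isinstance(value, bool) or not ok(value):
            raise ValueError(f"invalid value for {field!r}")
    return payload

order = validate_order(b'{"sku": "A-100", "quantity": 2, "note": ""}')
```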

So, yes, the design and the architecture have definitely exposed a lot more of the app’s surface. There’s been a bit of a race to make the tools better, but the architectures are getting more complicated. And I don’t know, it’s neck and neck on whether things are getting more secure or they’re getting less secure as these architectures get bigger and more exposed.

We have to think about that. How do we design processes to deal with that? How do you design technology, and what’s the culture that needs to be in place? I think part of it is having a culture of every single developer being conscious of the fact that the decisions they’re making have security implications. So that’s a lot of work to do.

Gardner: Another attitude adjustment that’s necessary is assuming that breaches are going to happen and stifling them as quickly as possible. It’s a little different mindset, but it makes sense to have more people involved in looking for anomalies -- and willing to have their data and behaviors examined for them.

Is there a needed cultural shift that goes with assuming you’re going to be breached and making sure the damage is limited?

Assume the worst to limit damage

Ludwig: Yes. A big part of the cultural shift is being comfortable taking feedback from anybody that you have a problem and that there’s something that you need to fix. That’s the first step.

Companies should let anybody identify a security problem -- and that could be anybody inside or outside of the company; bug bounties are one example. We’re in a bit of a revolution in terms of enabling better visibility into potential security problems.

But once you have that sort of culture, you start thinking, “Okay. How do I actually monitor what’s going on in each of the different areas?” With that visibility, exposure, and understanding what’s going in and out of specific applications, you can detect when there’s something you’re not expecting. That turns out to be really difficult, if what you’re looking at is very big and very, very complicated.

Decomposing an application down into smaller pieces, being able to trace the behaviors within those pieces, and understanding which APIs each of those different microservices is exposing turns out to be really important.

If you combine decomposing applications into smaller pieces with monitoring what’s going on in them and creating a culture where anybody can find a potential security flaw, surface it, and react to it -- those are good building blocks for having an environment where you have a lot more security than you would have otherwise.
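
A minimal sketch of the monitoring building block he describes: keep a rolling per-endpoint baseline and flag observations that deviate sharply from it. The signal (payload size), window, and thresholds are assumptions for illustration.

```python
from collections import defaultdict, deque
import statistics

class EndpointBaseline:
    """Rolling per-endpoint baseline of one numeric signal (here: payload size)."""
    def __init__(self, window: int = 1000, z_threshold: float = 4.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.z_threshold = z_threshold

    def observe(self, endpoint: str, payload_bytes: int) -> bool:
        """Record an observation; return True if it looks anomalous."""
        seen = self.history[endpoint]
        anomalous = False
        if len(seen) >= 30:  # need a minimal baseline before judging
            mean = statistics.fmean(seen)
            stdev = statistics.pstdev(seen) or 1.0
            anomalous = abs(payload_bytes - mean) / stdev > self.z_threshold
        seen.append(payload_bytes)
        return anomalous

monitor = EndpointBaseline()
for size in [512, 498, 530, 505, 520] * 50:
    monitor.observe("POST /orders", size)
print(monitor.observe("POST /orders", 250_000))  # True: wildly off-baseline
```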

Gardner: Another shift we’ve seen in the past several years is the advent of big data. Not only can we manage big data quickly, but we can also do it at a reasonable cost. That has brought about machine learning (ML) and movement to artificial intelligence (AI). So, now there’s an opportunity to put another arrow in our quiver of tools and use big data ML to buttress our security and provide a new culture of awareness as a result.

Ludwig: I think so. There are a bunch of companies trying to do that, to look at the patterns that exist within applications, and understand what those patterns look like. In some instances, they can alert you when there’s something not operating the way that is expected and maybe guide you to rearchitecting and make your applications more efficient and secure.

There are a few different approaches being explored. Ultimately, at this point, most applications are so complicated -- and have been developed in such a chaotic manner -- it’s impossible to understand what’s going on inside of them. That’s the right time to give the robots a shot and see if we can figure it out by turning the machines on themselves.

Gardner: Yes. Fight fire with fire.

Let’s get back to the culture of security. If you ask the people in the company to think differently about security, they all nod their heads and say they’ll try. But there has to be a leadership shift, too. Who is in charge of such security messaging? Who has the best voice for having the whole company think differently and better about security? Who’s in charge of security?

C-suite must take the lead

Ludwig: Not the security people. That will surprise a lot of people to hear me say. The reality is, if you’re in security, you’re not normal. And the normal people don’t want to hear from the not-normal person who’s paranoid that they need to be more paranoid.

That’s something it took me several years to realize. If the security person keeps saying, “The sky is falling, the sky is falling,” people aren’t going to listen. The security person says, “Security is important.” And the others reply, “Yes, of course, security is important to you, you’re the security guy.”

If the head of the business, or the CEO, consistently says, “We need to make this a priority. Security is really important, and these are the people who are going to help us understand what that means and how to execute on it,” then that ends up being a really healthy relationship.

The companies I’ve seen turn themselves around to become good at security are the ones such as Microsoft, Google, or others where the CEO made it personal, and said, “We’re going to fix this, and it’s my number-one priority. We’re going to invest in it, and I’m going to hire a great team of security professionals to help us make that happen. I’m going to work with them and enable them to be successful.”

Alternatively, there are companies where the CEO says, “Oh, the board has asked us to get a good security person, so I’ve hired this person and you should do what he says.” That’s the path to a disgruntled bunch of folks across the entire organization. They will conclude that security is just lip service, it’s not that important. “We’re just doing it because we have to,” they will say. And that is not where you want to end up.

Gardner: You can’t just talk the talk, you have to walk the walk and do it all the time, over and over again, with a loud voice, right?

Ludwig: Yes. And eventually it gets quieter. Eventually, you don’t need to have the top level saying this is the most important thing. It becomes part of the culture. People realize it’s not just the way we do things -- it is a number-one value for us. It’s the number-one thing for our customers, too, and so the culture shift ends up happening.

Gardner: Security mindfulness becomes the fabric within the organization. But to get there requires change and changing behaviors has always been hard.

Are there carrots? Are there sticks? When the top echelon of the organization, public or private, commits to security, how do you then execute on that? Are there some steps that you’ve learned or seen that help people get incentivized -- or whacked upside the head, so to speak, when necessary?

Talk the security talk and listen up

Ludwig: We definitely haven’t gone for “whacked upside the head.” I’m not sure that works for anybody at this point, but maybe I’m just a progressive when it comes to how to properly train employees.

What we have seen work is just talking about it on a regular basis, asking about the things that we’re doing from a security standpoint. Are they working? Are they getting in your way? Honestly, showing that there’s thoughtfulness and concern going into the development of those security improvements goes a long way toward making people more comfortable with following through on them.

A great example is … You roll out two-factor authentication, and then you ask, “Is it getting in the way? Is there anything that we can do to make this better? This is not the be-all and end-all. We want to improve this over time.”

That type of introspection by the security organization is surprising to some people. The idea that the security team doesn’t want it to be disruptive, that they don’t want to get in the way, can go a long way toward it feeling as though these new protections are less disruptive and less problematic than they might otherwise feel.

Gardner: And when the organization is focused on developers? Developers can be, you know …

Ludwig: Ornery?

Gardner: “Ornery” works. If you can make developers work toward a fabric of security mindedness and culture, you can probably do it to anyone. What have you learned on injecting a better security culture within the developer corps?

Ludwig: A lot of it starts, again, at the top. You know, we have core values that invoke vulgarity to emphasize both how important they are and how simple they are.

One of Atlassian’s values is, “Don’t fuck the customer.” And as a result of that, it’s very easy to remember, and it’s very easy to invoke. “Hey, if we don’t do this correctly, that’s going to hurt the customer.” We can’t let that happen as a top-level value.

We also have “Open company, no-bullshit”. If somebody says, “I see a problem over here,” then we need to follow up on it, right? There’s not a temptation to cover it up, to hide it, to pretend it’s not an issue. It’s about driving change and making sure that we’re implementing solutions that actually fix things.

There are countless examples of a feature that was built and that we really wanted to ship, but it turned out to have a problem, and we couldn’t do it because that would actually hurt the customer. So, we back off and go from there.

How to talk about security

Gardner: Words are powerful. Brands are powerful. Messaging is powerful. What you just said made me think, “Maybe the word security isn’t the right word.” If we use the words “customer experience,” maybe that’s better. Have you found that? Is “security” the wrong word nowadays? Maybe we should be thinking about creating an experience at a larger level that connotes success and progress.

Ludwig: Super interesting. Apple doesn’t use the word “security” very much at all. As a consumer brand, what they focus on is privacy, right? The idea that they’ve built highly secure products is motivated by the users’ right to privacy and the users’ desire to have their information remain private. But they don’t talk about security.

I always thought that was a really interesting decision on their part. When I was at Google, we did some branding analysis, and we also came up with insights about how we talked about security. It’s a negative from a customer’s standpoint. And so, most of the references that you’ll see coming out of Google are to security and privacy. They always attach those two things together. It’s not a coincidence. I think you’re right that the branding is problematic.

Microsoft uses trustworthy, as in trustworthy computing. So, I guess the rest of us are a little bit slow to pick up on that, but ultimately, it’s a combination of security and a bunch of other things that we’re trying to enable to make sure that the products do what we’re expecting them to do.

Gardner: I like resilience. I think that cuts across these terms because it’s not just the security; it’s how well the product is architected and how well it performs. Is it hardened, in a sense, so that it performs in trying circumstances -- even when there are issues of scale or outside threats, and so forth? How do you like “resilience,” and how does that notion of business continuity come into play when we are trying to improve the culture?

Ludwig: Yes, “resilience” is a pretty good term. It comes up in the pop psychology space as well. You can try to make your children more resilient. Those are the ones that end up being the most successful, right? It certainly is an element of what you’re trying to build.

A “resilient” system is one in which there’s an understanding that it’s not going to be perfect. It’s going to have some setbacks, and you need to have it recoverable when there are setbacks. You need to design with an expectation that there are going to be problems. I still remember the first time I heard about a squirrel shorting out a data center and taking down the whole data center. It can happen, right? It does happen. Or, you know, you get a solar event and that takes down computers.

There are lots of different things that you need to build to recover from accidental threats, and there are ones that are more intentional -- like when somebody deploys ransomware and tries to take your pipeline offline.

Gardner: To be more resilient in our organizations, one of the things that we’ve seen with developers and IT operations is DevOps. Has DevOps been a good lesson for broader resilience? Is there something we can do with other silos in organization to make them more resilient?

DevOps derives from experience

Ludwig: I think so. Ultimately, there are lots of different ways people describe DevOps, but I think of it as taking what used to be a very big thing, acknowledging that you can’t comprehend the complexity of that big thing, and choosing instead to embrace the idea that you should do lots of little things that, in aggregate, end up being the big thing.

And that is a core ethos of DevOps, that each individual developer is going to write a little bit of code and then they’re going to ship it. You’re going to do that over and over and over. You are going to do that very, very, very quickly. And they’re going to be responsible for running their own thing. That’s the operations part of the development. But the result is, over time, you get closer to a good product because you can gain feedback from customers, you’re able to see how it’s working in reality, and you’ll be able to get testing that takes place with real data. There are lots of advantages to that. But the critical part of it, from a security standpoint, is it makes it possible to respond to security flaws in near real-time.

Often, organizations just aren’t pushing code frequently enough to be able to know how to fix a security problem. They are like, “Oh, our next release window is 90 days from now. I can’t possibly do anything between now and then.” Getting to a point where you have an improvement process that’s really flexible and that’s being exercised every single day is what you get by having DevOps.

And so, if you think about that same mentality for other parts of your organization, it definitely makes them able to react when something unexpected happens.

Gardner: Perhaps we should be looking to our software development organizations for lessons on cultural methods that we can apply elsewhere. They’re on the bleeding edge of being more secure, more productive, and they’re doing it through better communications and culture.

Ludwig: It’s interesting to phrase it that way because that sounds highfalutin, and that they achieved it out of expertise and brilliance. What it really is, is the humbleness of realizing that the compiler tells you your code is wrong every single day. There’s a new user bug every single day. And eventually you get beaten down by all those, and you decide you’re just going to react every single day instead of having this big thing build up.

So, yes, I think DevOps is a good example but it’s a result of realizing how many flaws there are more than anything highfalutin, that’s for sure.

Gardner: The software doesn’t just eat the world; the software can show the world the new, better way.

Ludwig: Yes, hopefully so.

Future best security practices

Gardner: Adrian, any thoughts about the future of better security, privacy, and resilience? How will ML and AI provide more analysis and improvements to come?

Ludwig: Probably the most important thing going on right now in the context of security is the realization by the senior executives and boards that security is something they need to be proponents for. They are pushing to make it possible for organizations to be more secure. That has fascinating ramifications all the way down the line.

If you look at the best security organizations, they know the best way to enable security within their companies and for their customers is to make security as easy as possible. You get a combination of the non-security executive saying, “Security is the number-one thing,” and at the same time, the security executive realizes the number-one thing to implement security is to make it as easy as possible to embrace and to not be disruptive.

And so, we are seeing faster investment in security that works because it’s easier. And I think that’s going to make a huge difference.

There are also several foundational technology shifts that have turned out to be very pro-security, which wasn’t why they were built -- but it’s turning out to be the case. For example, in the consumer space the move toward the web rather than desktop applications has enabled greater security. We saw a movement toward mobile operating systems as a primary mechanism for interacting with the web versus desktop operating systems. It turns out that those had a fundamentally more secure design, and so the risks there have gone down.

The enterprise has been a little slow, but I see the shift away from behind-the-firewall software toward cloud-based and software as a service (SaaS) software as enabling a lot better security for most organizations. Eventually, I think it will be for all organizations.

Those shifts are happening at the same time as we have cultural shifts. I’m really optimistic that over the next decade or two we’re going to get to a point where security is not something we talk about. It’s just something built-in and expected in much the same way as we don’t spend too much time now talking about having access to the Internet. That used to be a critical stumbling block. It’s hard to find a place now that doesn’t or won’t soon have access.

Gardner: These security practices and capabilities become part-and-parcel of good business conduct. We’ll just think of it as doing a good job, and those companies that don’t do a good job will suffer the consequences and the Darwinian nature of capitalism will take over.

Ludwig: I think it will.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on building security-minded cultures within public and private organizations.

And we’ve learned how behavior, culture, attitude, and organizational shifts create both hurdles and solutions for making businesses more intrinsically resilient by nature.


So, join me in thanking our guest, Adrian Ludwig, CISO at Atlassian. Thank you so much, Adrian, I really enjoyed it.

Ludwig: Thanks, Dana. I had a good time as well.

Gardner: And a big thank you to our audience for joining this BriefingsDirect IT security culture discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Traceable AI-sponsored BriefingsDirect interviews.

Stay tuned for our next podcast in this series, with a deep-dive look at new security tools and methods with Sanjay Nagaraj, Chief Technology Officer and Co-Founder at Traceable AI.

Look for other security podcasts and content at www.briefingsdirect.com.

Thanks again for listening. Please pass this along to your business community and do come back for our next chapter.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.

Transcript of a discussion on creating broader awareness of security risks and building a security-minded culture across organizations and ecosystems. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.

Friday, June 04, 2021

API Security Depends on the Novel Use of Advanced Machine Learning and Actionable Artificial Intelligence


Transcript of a discussion on the best security solutions for APIs across their dynamic and often uncharted use in myriad apps and business services.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

While the use of machine learning (ML) and artificial intelligence (AI) for IT security may not be new, the extent to which data-driven analytics can detect and thwart nefarious activities is still in its infancy.

As we’ve recently discussed here on BriefingsDirect, an expanding universe of interdependent application programming interfaces (APIs) forms a new and complex threat vector that strikes at the heart of digital business.

How will ML and AI form the next best security solution for APIs across their dynamic and often uncharted use in myriad apps and services? Stay with us now as we answer that question by exploring how advanced big data analytics forms a powerful and comprehensive means to track, understand, and model safe API use.

To learn how AI makes APIs secure and more resilient across their life cycles and use ecosystems, please join me in welcoming Ravi Guntur, Head of Machine Learning and Artificial Intelligence at Traceable.ai. Welcome, Ravi.

Ravi Guntur: Thanks, Dana. Happy to be here.


Gardner: Why does API security provide such a perfect use case for the strengths of ML and AI? Why do these all come together so well?

Guntur: When you look at the strengths of ML, the biggest strength is to process data at scale. And newer applications have taken a turn in the form of API-driven applications.

Large pieces of applications have been broken down into smaller pieces, and these smaller pieces are being exposed as even smaller applications in themselves. To process the information going between all these applications, and to monitor what activity is going on, the scale at which you need to deal with them has gone up manyfold. That’s the reason why ML algorithms form the best-suited class of algorithms to deal with the challenges we face with API-driven applications.

Gardner: Given the scale and complexity of the app security problem, what makes the older approaches to security wanting? Why don’t we just scale up what we already do with security?

More than rules needed to secure apps

Guntur: I’ll give an analogy as to why older approaches don’t work very well. Think of the older approaches as a big box with, let’s say, a single door. For attackers to get into that big box, all they must do is crack through that single door. 

Now, with the newer applications, we have broken that big box into multiple small boxes, and we have given a door to each one of those small boxes. If attackers want to get into the application, they only have to get into one of these smaller boxes. And once they get into one of the smaller boxes, they need to take a key out of it and use that key to open another box.

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made it more challenging for the attackers, but the attack surface, being so much bigger now, needs to be dealt with in a completely different way.

The older class of applications took a rules-based system as the common approach to solve security use cases. Because they just had a single application and the application would not change that much in terms of the interfaces it exposed, you could build in rules to analyze how traffic goes in and out of that application.

Now, when we break the application into multiple pieces, and we bring in other paradigms of software development, such as DevOps and Agile development methodologies, this creates a scenario where the applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do. 

Gardner: We shouldn’t think of AI here as replacing old security or even humans. It’s doing something that just couldn’t be done any other way.

Guntur: Yes, absolutely. There’s no substitute for human intelligence, and there’s no substitute for the thinking capability of humans. If you go deeper into the AI-based algorithms, you realize that these algorithms are very simple in terms of how the AI is powered. They’re all based on optimization algorithms. Optimization algorithms don’t have thinking capability. They don’t have creativity, which humans have. So, there’s no way these algorithms are going to replace human intelligence.

They are going to work alongside humans to make all the mundane activities easier for humans and help humans look at the more creative and the difficult aspects of security, which these algorithms can’t do out of the box.

Gardner: And, of course, we’re also starting to see that the bad guys, the attackers, the hackers, are starting to rely on AI and ML themselves. You have to fight fire with fire. And so that’s another reason, in my thinking, to use the best combination of AI tools that you can.

Guntur: Absolutely.

Gardner: Another significant and growing security threat is bots, and the scale that threat vector takes. It seems like only automation and the best combination of humans and machines can ferret out these bots.

Machines, humans combine to combat attacks

Guntur: You are right. Most of the best detection cases we see in security are a combination of humans and machines. The attackers are also starting to use automation to get into systems. We have seen such cases where the same bot comes in from geographically different locations and is trying to do the same thing in some of the customer locations.

The reason they’re coming from so many different locations is to challenge AI-based algorithms. One of the oldest schools of algorithms looks at rate anomaly, to see how quickly somebody is coming from a particular IP address. The moment you spread the IP addresses across the globe, you don’t know whether it’s different attackers or the same attacker coming from different locations. This kind of challenge has been brought by attackers using AI. The only way to challenge that is by building algorithms to counter them.
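
A sketch of the countermeasure implied here: instead of keying a sliding-window rate counter on the source IP, key it on a behavioral fingerprint, so the same bot spread across many IP addresses still trips the limit. The fingerprint, window, and limit are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class RateAnomaly:
    """Sliding-window request counter keyed by an arbitrary fingerprint."""
    def __init__(self, window_s: float = 60.0, limit: int = 100):
        self.window_s = window_s
        self.limit = limit
        self.events = defaultdict(deque)

    def hit(self, fingerprint: str, now: float | None = None) -> bool:
        """Record one request; return True if the window limit is exceeded."""
        now = time.monotonic() if now is None else now
        q = self.events[fingerprint]
        q.append(now)
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.limit

detector = RateAnomaly(window_s=60, limit=100)
# The same behavioral fingerprint arriving from 500 different IPs still trips it:
fp = "GET /login -> POST /login -> GET /account"
alerts = sum(detector.hit(fp, now=float(i) * 0.1) for i in range(200))
print(alerts)  # 100 of the 200 hits exceed the limit in this synthetic burst
```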

One thing is for sure: algorithms are not perfect. Algorithms can generate errors. Algorithms can create false positives. That’s where the human analyst comes in, to determine whether what the algorithm discovered is a true positive or a false positive. Digging into the output of an algorithm shows exactly how it figured out that an attack was being launched. But some insights can’t be discovered by algorithms; only humans, when they correlate different pieces of information, can find them. So, it requires a team. Algorithms and humans work well as a team.

Gardner: What makes the way in which Traceable.ai is doing ML and AI different? How are you unique in your vision and execution for using AI for API security?

Guntur: When you look at any AI-based implementation, you will see that there are three basic components. The first is about the data itself. It’s not enough if you capture a large amount of data; it’s still not enough if you capture quality data. In most cases, you cannot guarantee data of high quality. There will always be some noise in the data. 

But more than volume and quality of data, what is more important is whether the data that you’re capturing is relevant for the particular use-case you’re trying to solve. We want to use the data that is helpful in solving security use-cases.

Traceable.ai built a platform from the ground up to cater to those security use cases. Right from the foundation, we began looking at the specific type of data required to solve modern API-based application security use cases. That’s the first challenge we address; it’s very important and brings strength to the product.

Seek differences in APIs

Once you address the proper data issue, the next is about how you learn from it. What are the challenges around learning? What kind of algorithms do we use? What is the scenario when we deploy that in a customer location?

We realized that every customer is completely different and has a completely different set of APIs, too, and those APIs behave differently. The data that goes in and out is different. Even if you take two e-commerce customers, they’re doing the same thing. They’re allowing you to look at products, and they’re selling you products. But the way the applications have been built, and the API architecture -- everything is different.

We realized it’s no use to build supervised approaches. We needed to come up with an architecture where, the day we deploy at the customer location, the algorithm self-learns. The whole concept of being able to learn on its own, just by looking at data, is core to the way we build security using the AI algorithms we have.

Finally, the last step is to look at how we deliver security use cases. What is the philosophy behind building a security product? We knew that rules-based systems were not going to work. The alternate approach is modeled around anomaly detection. Now, anomaly detection is a very old subject, and we have used it for various things. We have used it to understand whether machinery is going to go down, we have used it to understand whether traffic patterns on the road are going to change, and we have used it for anomaly detection in security.

But within anomaly detection, we focused on behavioral anomalies. We realized that APIs and the people who use APIs are the two key entities in the system. We needed to model the behavior of these two groups -- and when we see any deviation from this behavior, that’s when we’re able to capture the notion of an attack.

Behavioral anomalies are important because if you look at the attacks, they’re so subtle. You just can’t easily find the difference between the normal usage of an API and abnormal usage. But very deep inside the data and very deep into how the APIs are interacting, there is a deviation in the behavior. It’s very hard for humans to figure this out. Only algorithms can tease this out and determine that the behavior is different from a known behavior.
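
Traceable’s own models aren’t spelled out in the discussion, so purely as a generic illustration of behavior-based detection, here is an outlier check over per-session behavior vectors using scikit-learn’s IsolationForest; the features and numbers are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-session features:
# [requests/min, distinct endpoints, error rate, avg payload KB]
normal_sessions = np.array([
    [12, 5, 0.01, 2.2],
    [10, 4, 0.00, 1.9],
    [15, 6, 0.02, 2.5],
] * 50)  # repeated rows stand in for real traffic

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

suspect = np.array([[240, 61, 0.35, 0.1]])  # scraping-like behavior
print(model.predict(suspect))  # [-1] means outlier
```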

We have addressed this at all levels of our stack: The data-capture level, and the choice of how we want to execute our AI, and the choice of how we want to deliver our security use cases. And I think that’s what makes Traceable unique and holistic. We didn’t just bolt things on, we built it from the ground up. That’s why these three pieces gel well and work well together.

Gardner: I’d like to revisit the concept you brought up about the contextual use of the algorithms and the types of algorithms being deployed. This is a moving target, with so many different use cases and company by company.

How do you keep up with that rate of change? How do you remain contextual?

Function over form delivers context

Guntur: That’s a very good question. The notion of context is abstract. But when you dig deeper into what context is and how you build context, it boils down to basically finding all factors influencing the execution of a particular API.

Let’s take an example. We have an API, and we’re looking at how this API functions. It’s just not enough to look at the input and output of the API. We need to look at something around it. We need to see who triggered that input. Where did the user come from? Was it a residential IP address that the user came in from? Was it a hosted IP address? Which geolocation is the user coming from? Did this user have past anomalies within the system?

You need to bring all these factors into the notion of context when you’re dealing with API security. Now, the context is a moving target because the data is constantly changing. At some point you fix this context -- you say that you know where the users are coming from and you know what the users have done in the past -- and then there is some amount of determinism to whatever detection you’re performing on these APIs.

Let’s say an API takes in five inputs, and it gives out 10 outputs. The inputs and outputs are constant for every user, but the values that go into the inputs vary from user to user. Your bank account is different from my bank account; the account number is different for you, and it’s different for me. If you build a naive algorithm that looks for an anomaly, it will say, “Hey, you know what? For this field, I’m seeing many different bank account numbers. There must be some problem here.”

But that’s not true. That field is meant to have many variations in the account number, and that determination comes from context. Building a context engine is unique in our AI-based system. It helps us tease out false positives and helps us learn that some variations are genuine.
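
A toy version of the per-field reasoning he describes: learn how much legitimate variation each field exhibits, so a field like an account number (expected to differ per user) isn’t flagged, while new values in a near-constant field would be. Field names and thresholds are assumptions.

```python
from collections import Counter, defaultdict

class FieldProfile:
    """Learn per-field value diversity, then judge whether variation is normal."""
    def __init__(self):
        self.values = defaultdict(Counter)
        self.count = Counter()

    def learn(self, field: str, value: str) -> None:
        self.values[field][value] += 1
        self.count[field] += 1

    def variation_ratio(self, field: str) -> float:
        """Distinct values seen / total observations (1.0 = always unique)."""
        n = self.count[field]
        return len(self.values[field]) / n if n else 0.0

profile = FieldProfile()
for i in range(1000):
    profile.learn("account_number", f"ACCT-{i:06d}")        # unique per user
    profile.learn("currency", "USD" if i % 20 else "EUR")   # near-constant

print(profile.variation_ratio("account_number"))  # ~1.0: variation is normal here
print(profile.variation_ratio("currency"))        # 0.002: a new value is suspicious
```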


That’s how we keep up with this constant changing environment, where the environment is changing not just because new APIs are coming in. It’s also because new data is coming into the APIs.

Gardner: Is there a way for the algorithms to learn more about what makes the context powerful to avoid false positives? Is there certain data and certain ways people use APIs that allow your model to work better?

Guntur: Yes. When we initially started, we thought of APIs as rigidly designed. We thought of an API as a small unit of execution. When developers use these APIs, they’ll all be focused on very precise execution between the APIs.

But we soon realized that developers bundle various additional features within the same API. We started seeing that they just provide a few more input options, and by triggering those extra input options you get completely different functionality from the same API.

We had to come up with algorithms that discover that a particular API can behave in multiple ways -- depending on the inputs being transmitted. It’s difficult to know in advance whether and how an API is going to change. But when we built our algorithms, we assumed that an API is going to have multiple manifestations, and we need to figure out which manifestation is currently being triggered by looking at the data.

We solved it differently by creating multiple personas for the same API. Although it looks like a single API, we have an internal representation of an API with multiple personas.
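
One way to make “multiple personas” concrete, as a hypothetical illustration: treat the set of parameters supplied in a request as the persona signature, count how often each signature occurs, and surface the rare ones for review.

```python
from collections import Counter

def persona(params: dict) -> frozenset:
    """A request's persona: which inputs were supplied."""
    return frozenset(params)

seen = Counter()
for r in [{"q": "shoes"}, {"q": "shoes", "sort": "price"}] * 100:
    seen[persona(r)] += 1
seen[persona({"q": "shoes", "export": "csv", "all": "1"})] += 1  # one-off request

total = sum(seen.values())
rare = [sorted(p) for p, n in seen.items() if n / total < 0.01]
print(rare)  # [['all', 'export', 'q']] -- a persona worth a closer look
```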

Gardner: Interesting. Another thing that’s fascinating to me about the API security problem is that hackers often don’t overtly abuse the API. Instead, they mount subtle logic abuse attacks where they’re basically doing what the API is designed to do -- but using it as a tool for their nefarious activities.

How does your model help fight against these subtle logic abuse attacks?

Logic abuse detection

Guntur: When you look at the way hackers are getting into distributed applications and APIs using these attacks -- it is very subtle. We classify these attacks as business logic abuse. They are using the existing business logic, but they are abusing it. Now, figuring out abuse of business logic is a very difficult task. It involves a lot of combinatorial issues that we need to solve. When I say combinatorial issues, it’s a problem of scale in terms of the number of APIs, the number of parameters that can be passed, and the types of values that can be passed.

When we built the Traceable.ai platform, it was not enough to just look at the front-facing APIs -- we call them the external APIs. It was also important for us to go deeper into the API ecosystem.

We have two classes of APIs. One, the external facing APIs, and the other is the internal APIs. The internal APIs are not called by users sitting outside of the ecosystem. They’re called by other APIs within the system. The only way for us to identify the subtle logic attacks is to be able to follow the paths taken by those internal APIs.

If an internal API is reaching a resource like a database -- and within the database it reaches a particular row and column and returns the value -- only by following that path will you be able to figure out that there was a subtle attack. We’re able to figure this out only because of the capability to trace the data deep into the ecosystem.

If we had done everything at the API gateway, if we had done everything at external facing APIs, we would not have figured out that there was an attack launched that went deep into the system and touched a resource it should never have touched.

It’s all about how well you capture the data and how rich your data representation is to capture this kind of attack. Once you capture this, using tons of data, and especially graph-like data, you have no option but to use algorithms to process it. That’s why we started using graph-based algorithms to discover variations in behavior, discover outliers, and uncover patterns of outliers, and so on.
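A simplified sketch of that tracing idea: model a request trace as caller-callee edges (loosely shaped like distributed-tracing spans), then flag any resource reachable from an external API that falls outside its learned baseline. The service and resource names are made up for the example.

```python
from collections import defaultdict

# One trace: (caller, callee) edges, e.g. derived from distributed-tracing spans.
trace = [
    ("edge:/search", "svc:catalog"),
    ("svc:catalog", "db:products"),
    ("svc:catalog", "db:users.credit_cards"),  # should never happen from /search
]

# Learned baseline: resources each external API legitimately reaches.
baseline = {"edge:/search": {"svc:catalog", "db:products"}}

def reachable(edges, start):
    """Depth-first walk of the call graph from a starting node."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return seen

violations = reachable(trace, "edge:/search") - baseline["edge:/search"]
print(violations)  # {'db:users.credit_cards'}
```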

Gardner: To fully tackle this problem, you need to know a lot about data integration, a lot about security and the vulnerabilities, as well as a lot about algorithms, AI, and data science. Tell me about your background. How are you able to keep these big, multiple balls in the air at once when it comes to solving this problem? There are so many different disciplines involved.

Multiple skills in data scientist toolbox

Guntur: Yes, it’s been a journey for me. When I initially started in 2005, I had just graduated from university. I had used a lot of mathematical techniques to solve key problems in natural language processing (NLP) as part of my thesis. I realized that even security use cases can be modeled as a language. If you take any operating system (OS), it typically has a few hundred system calls, right? About 200 system calls, or maybe 400. All the programs running in the operating system are using those system calls in different ways to build the different applications.

It’s similar to natural languages. In natural language, you have words, and you compose the words according to a grammar to get a meaningful sentence. Something similar happens in the security world. We realized we could apply techniques from statistical NLP to security use cases. We discovered, for example, way back then, certain buffer-overflow vulnerabilities in the Solaris login program.
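
The analogy maps naturally onto an n-gram model: learn which short system-call sequences occur in normal runs, then score a new run by how many of its n-grams were never seen in training. A toy sketch, with invented syscall traces:

```python
def ngrams(seq, n=3):
    """All length-n windows of a sequence, as a set of tuples."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

# Training: syscall sequences from normal program runs (illustrative names).
normal_runs = [
    ["open", "read", "read", "close"],
    ["open", "read", "write", "close"],
    ["stat", "open", "read", "close"],
]
known = set().union(*(ngrams(run) for run in normal_runs))

def anomaly_score(run, n=3):
    """Fraction of the run's n-grams never seen in training."""
    grams = ngrams(run, n)
    return len(grams - known) / len(grams) if grams else 0.0

print(anomaly_score(["open", "read", "read", "close"]))       # 0.0 -- normal
print(anomaly_score(["open", "mmap", "mprotect", "execve"]))  # 1.0 -- novel behavior
```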

That’s how the journey began. I then went through multiple jobs and worked on different use cases. I learned if you want to be a good data scientist -- or if you want to use ML effectively -- you should think of yourself as a carpenter, as somebody with a toolbox with lots of tools in it, and who knows how to use those tools very well.

But to best use those tools, you also need the experience from building various things. You need to build a chair, a table, and a house. You need to build various things using the same set of tools, and that took me further along that journey.

While I began with NLP, I soon ventured into image processing and video processing, and I applied that to security, too. It furthered the journey. And through that whole process, I realized that almost all problems can be mapped to canonical forms. You can take any complex problem and break it down into simpler problems. Almost all fields can be broken down into simple mathematical problems. And if you know how to use various mathematical concepts, you can solve a lot of different problems.

We are applying these same principles at Traceable.ai as well. Yes, it’s been a journey, and every time you look at data you come up with different challenges. The only way to overcome that is to get your hands dirty and solve it. That’s the only way to learn, and the only way we could build this new class of algorithms -- by taking a piece from here, a piece from there, putting it together, and building something different.

Gardner: To your point that complex things in nature, business, and technology can be brought down to elemental mathematical understandings -- you’ve attained that with APIs, applying it first to security, and rightfully so; it’s the obvious low-hanging fruit.

But over time, you also gain mathematical insights and understanding of more about how microservices are used and how they could be optimized. Or even how the relationship between developers and the IT production crews might be optimized.

Is that what you’re setting the stage for here? Will that mathematical foundation be brought to a much greater and potentially productive set of a problem-solving?

Something for everybody

Guntur: Yes, you’re right. If you think about it, we have embarked on that journey already. If you look at what we have achieved as of today, and at the foundations on which we have built it, you see that we have something for everybody.

For example, we have something for the security folks as well as for the developer folks. The Traceable.ai system gives insights to developers as to what happens to their APIs when they’re in production. They need to know that. How is it all behaving? How many users are using the APIs? How are they using them? Mostly, they have no clue.

And on the other side, the security team doesn’t know exactly what the application is. They can see lots of APIs, but how are the APIs glued together to form this big application? Now, the mathematical foundation under which all these implementations are being done is based on relationships, relationships between APIs. You can call them graphs, you can call them sequences, but it’s all about relationships.

One aspect we are looking at is how to expose these relationships. Today we have these relationships buried deep inside our implementations, inside our platform. But how do you take them out and make them visual so that you can better understand what’s happening? What is this application? What happens to the APIs?

By looking at these visualizations, you can easily figure out if there are bottlenecks within the application, for example. Is one API constantly being hit on? If I always go through this API, but the same API is also leading me to a search engine or a products catalog page, why does this API need to go through all these various functions? Can I simplify the API? Can I break it down and make it into multiple pieces? These kinds of insights are now being made available to the developer community.

Gardner: For those listening or reading this interview, how should they prepare themselves for being better able to leverage and take advantage of what Traceable.ai is providing? How can developers, security teams, as well as the IT operators get ready?

Rapid insights result in better APIs

Guntur: The moment you deploy Traceable in your environment, the algorithms kick in and start learning the patterns of traffic in your environment. Within a few hours -- or if your traffic has high volume, within 48 hours -- you will receive insights into the API landscape of your environment. These insights start with how many APIs are in your environment. That’s a fundamental problem a lot of companies are facing today: they just don’t know how many APIs exist in their environment at any given point in time. Once you know how many APIs there are, you can figure out how many services there are. What are the different services, and which APIs belong to which services?

Traceable gives you the entire landscape within a few hours of deployment. Once you understand your landscape, the next interesting thing to see is your interfaces. You can learn how risky your APIs are. Are you exposing sensitive data? How many of the APIs are external facing? Which APIs gate access with authentication, and which don’t? And why do some APIs not have authentication -- how are you exposing APIs without it?

All these questions are answered right there in the user interface. After that, you can look at whether your development team is in compliance. Do the APIs comply with the specifications in the requirements? Because development teams are rapidly churning out code, they almost never maintain the API spec. They will have a draft spec and build against it, but when you finally deploy, the spec looks very different. But who knows it’s different? How do you know it’s different?

Traceable’s insights tell you whether your spec is compliant. You get to see that within a few hours of deployment. In addition to knowing what happened to your APIs and whether they are compliant with the spec, you start seeing various behaviors.
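
A minimal sketch of that spec-drift check, assuming the OpenAPI spec has already been parsed into a path-to-methods table and observed traffic has been normalized into path templates; the endpoints here are hypothetical:

```python
# OpenAPI-style spec, already parsed (e.g. from YAML): path -> allowed methods.
spec = {
    "/orders":      {"GET", "POST"},
    "/orders/{id}": {"GET"},
}

# (method, path-template) pairs observed in live traffic after normalizing IDs.
observed = {
    ("GET", "/orders"),
    ("POST", "/orders"),
    ("DELETE", "/orders/{id}"),  # undocumented method
    ("GET", "/admin/export"),    # shadow API: not in the spec at all
}

undocumented = {
    (method, path) for method, path in observed
    if path not in spec or method not in spec[path]
}
print(sorted(undocumented))
# [('DELETE', '/orders/{id}'), ('GET', '/admin/export')]
```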

People think that when you have 100 APIs deployed, all users use those APIs the same way. But you’d be surprised to learn that users use apps in many different ways. Sometimes the APIs are accessed through computational means; sometimes they are accessed via user interfaces. There is now insight for the development team into how users are actually using the APIs, which in itself is a great insight to help build better APIs, which helps build better applications and simplifies application deployments.

All of these insights are available within a few hours of the Traceable.ai deployment. And I think that’s very exciting. You just deploy it and open the screen to look at all the information. It’s just fascinating to see how different companies have built their API ecosystems.

And, of course, you have the security use cases. You start seeing what’s at work. We have seen, for example, what Bingbot from Microsoft looks like. But how active is it? Is it coming from 100 different IP addresses, or is it always coming from one part of a geolocation?

You can see, for example, what search spiders’ activity looks like. What are they doing with our APIs? Why is a search engine crawling APIs that are internal and expose no public information? All this information is available to you within a few hours. It’s really fascinating when you just deploy and observe.

Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored BriefingsDirect discussion on how data-driven behavioral analytics best detect and thwart nefarious activities across the burgeoning ecosystem of API use.


And we’ve learned how advanced ML-powered modeling and algorithms form a powerful and inclusive means to track, understand, and model APIs in action.

So, a big thank you to Ravi Guntur, Head of Machine Learning and Artificial Intelligence at Traceable.ai. Thank you so much.

Guntur: Thanks, Dana.

Gardner: And a big thank you as well for our audience for joining this BriefingsDirect API resiliency discussion. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout this series of Traceable.ai-sponsored BriefingsDirect interviews.

Thanks again for listening. Please pass this along to your business community and do come back for our next chapter.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Traceable.ai.

Transcript of a discussion on the best security solutions for APIs across their dynamic and often uncharted use in myriad apps and business services. Copyright Interarbor Solutions, LLC, 2005-2021. All rights reserved.
