Runtime Security Observability for Containerized Workloads in AWS

April 21, 2023

Recently we asked thought leaders from cutting-edge security teams to share their best practices for detecting and responding to threats in complex cloud environments. Here are some highlights from our most recent conversation with Matt Lehman, Head of Security at Amazon Payments. Matt joined us last week for a fireside chat, “Runtime Security Observability for Containerized Workloads in AWS,” where we asked him ten key questions and took a deep dive into container security challenges, covering:

  • Advantages and disadvantages of various forms of cloud-native infrastructure in AWS (containers, serverless, PaaS, etc.);
  • The security implications of utilizing containers and the nuances of specific infrastructure modalities;
  • The impact of containers and serverless on the dynamics of the kill chain;
  • And the availability of cloud-native application protection platforms (CNAPPs) to help bring deeper observability to the attack surface and implement runtime protection for these next-generation, cloud-native environments.

In our talk with Matt, we discovered how Amazon harnesses the power of AWS Fargate, Lambda, and EKS to streamline deployment and tackle complex security challenges. Amazon's workloads range from bare-metal servers to containerized solutions to serverless functions such as Lambda.

Matt describes the complexity of their cloud-native architecture, and how they think about managing that complexity as they work through digital transformation initiatives:

“Fargate ends up being a pretty attractive container orchestration layer within our service teams that we support. The individual workloads in terms of size and shape can look a little bit different from team to team because they’re building different types of things. On one side of the business we could be looking at alternate payment methods, and on another side of the business we’re building the front-end screens for taking a credit card transaction inside Amazon.com and another team could be looking at trying to build vault solutions for how we house card information. So those are very different sorts of use case setups. We’ve tried really hard as we’ve made the move to micro containers over the last five to six years to not bring our monolithic-style architecture into monolithic-style containers.”

Their preferred container environment is AWS Fargate, as it simplifies the deployment process for developers and offers a pay-as-you-go model. This approach has reduced the complexity of their infrastructure deployments and hosting, but it has also introduced new challenges around security observability in these environments.
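
A rough illustration of how lightweight that deployment path can be is sketched below with boto3; the cluster, role, image, and subnet identifiers are hypothetical placeholders, not anything from Amazon's actual environment.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition sized for Fargate; CPU and memory are billed per use.
task_def = ecs.register_task_definition(
    family="payments-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU
    memory="512",   # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/payments-api:1.0.0",
        "essential": True,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)

# Launch the task on Fargate; there are no EC2 hosts to provision, patch, or scale.
ecs.run_task(
    cluster="payments-cluster",
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```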

In the webinar, we explored the trade-offs between managed Kubernetes and self-managed clusters, along with the security implications of using containers and the nuances specific to serverless modalities like Fargate. Containers and microservices rely heavily on third-party dependencies and libraries, which makes supply chain security a foundational control for container security: vetting dependencies, pulling from trusted sources, and pinning exactly what has been reviewed all help mitigate these risks.
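
To make the supply-chain point concrete, here is a minimal sketch of the kind of gate a CI pipeline could apply before promoting a container image: resolve the tag to an immutable digest and check the ECR vulnerability scan findings for that digest. The repository name, tag, and severity thresholds are hypothetical placeholders, not a description of Amazon's pipeline.

```python
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

REPO = "payments-api"   # hypothetical repository
TAG = "1.0.0"

# Resolve the mutable tag to an immutable digest so deployments pin exactly what was vetted.
image = ecr.describe_images(
    repositoryName=REPO, imageIds=[{"imageTag": TAG}]
)["imageDetails"][0]
digest = image["imageDigest"]

# Pull the vulnerability scan findings recorded for that digest.
scan = ecr.describe_image_scan_findings(
    repositoryName=REPO,
    imageId={"imageDigest": digest},
)
severity_counts = scan["imageScanFindings"].get("findingSeverityCounts", {})

# Simple gate: refuse to promote images with critical or high findings.
if severity_counts.get("CRITICAL", 0) or severity_counts.get("HIGH", 0):
    raise SystemExit(f"{REPO}@{digest} failed the vulnerability gate: {severity_counts}")

print(f"OK to deploy {REPO}@{digest} (pinned by digest, scan clean)")
```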

Matt covered the challenges of building a security program for AWS Fargate workloads. Because there is no access to the underlying host, collecting telemetry is difficult, which makes it important to understand the shared responsibility model and to think through the division of responsibilities in advance rather than during an incident.
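
For teams that still need some telemetry from inside Fargate tasks, one starting point is the ECS task metadata endpoint, which is exposed to the task itself even though the underlying host is not accessible. The sketch below simply shows what is reachable from inside a task via the ECS_CONTAINER_METADATA_URI_V4 environment variable; it is an illustration, not the approach Matt described.

```python
import json
import os
import urllib.request

# ECS injects this endpoint into every Fargate task; a sidecar or the
# application itself can poll it for basic runtime telemetry.
metadata_uri = os.environ["ECS_CONTAINER_METADATA_URI_V4"]

def fetch(path: str) -> dict:
    with urllib.request.urlopen(f"{metadata_uri}{path}") as resp:
        return json.load(resp)

task = fetch("/task")          # task ARN, family, containers, known status
stats = fetch("/task/stats")   # per-container CPU, memory, and network counters

print("Task ARN:", task["TaskARN"])
for container_id, container_stats in stats.items():
    cpu = (container_stats or {}).get("cpu_stats", {}).get("cpu_usage", {}).get("total_usage")
    print(container_id, "total CPU usage:", cpu)
```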

We uncovered the intricacies of AWS Lambda and its unique security considerations. Lambda sits on the continuum between long-lived and run-once workloads, and authorization and privilege become the critical factors.
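
As one illustration of keeping Lambda privilege tight, the sketch below attaches a narrowly scoped inline policy to a function's execution role so it can only consume one queue and write to one table. The role, queue, and table names are hypothetical placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege inline policy: the function may read one SQS queue and
# write to one DynamoDB table, and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
            ],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:payments-events",
        },
        {
            "Effect": "Allow",
            "Action": ["dynamodb:PutItem"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/payments-ledger",
        },
    ],
}

iam.put_role_policy(
    RoleName="payments-lambda-role",                  # hypothetical execution role
    PolicyName="payments-lambda-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```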

The use of containers and serverless technology changes the dynamics of the kill chain, creating new pathways that are harder to detect and map for both attackers and defenders. While this can complicate things for attackers, it also presents challenges for defenders. Matt emphasizes the importance of considering the new attack vectors specific to the cloud when building a security program: understand your baseline capabilities, identify gaps, and decide how much coverage is needed, particularly in relation to the shared responsibility model that the CSPs have with their customers. Built-in cloud tools can be useful, but pure-play security vendors add the necessary depth, provide features that might not be available in the native cloud offerings, and help you cover your side of the shared responsibility model.

In this clip, Matt homes in on the ways in which the Deepfence platform provides extra context and visibility beyond what’s built into the native cloud tools, and highlights how that extra context drives their alert-reduction efforts:

“This is the thing that I like about Deepfence. We’re really sensitive to noise in our alerting just because of how big we are. So a 1% false positive rate equates to whole numbers of engineers that are spending their time on inefficiencies.”

Overall, the talk highlighted the need for a different way of thinking about security and application building in the container world. It’s important to focus on depth and coverage, supported by automation, platforms, and tools such as Deepfence that facilitate this. Companies also need to embrace the basic blocking and tackling that is key to building a good security program.

You can watch the entire webinar and interview “Runtime Security Observability for Containerized Workloads in AWS” now. And don’t miss our next panel of industry experts from Google, Snap, and Deepfence where they discuss Kubernetes security risks and attack vectors and share their best practices for detecting and responding to threats in these complex environments.