Cloud Native Security is a holistic approach to protecting applications, infrastructure, and data in distributed, containerized environments. It spans multiple layers from code to runtime to ensure comprehensive protection. This approach recognizes that traditional security models don't adequately address the unique challenges presented by cloud-native architectures, such as ephemeral infrastructure, immutable deployments, microservices, and API-driven automation.
In cloud-native environments, security must be:

- **Integrated throughout the entire lifecycle** - from development to runtime
- **API-driven and automated** - manual processes cannot scale
- **Defense-in-depth** - multiple layers of protection
- **Declarative** - defined as code and configuration
- **Dynamic** - adaptable to changing environments
- **Observable** - with comprehensive logging and monitoring
The 4C's model (Code, Container, Cluster, Cloud) provides a structured approach to understanding and implementing security controls across all layers of cloud-native applications. Each layer builds upon the security of the next outermost layer, creating a comprehensive security strategy.
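The layering can be sketched as follows, with each inner layer depending on the security of the layer outside it:

```mermaid
graph LR
    A[Cloud] -->|secures| B[Cluster]
    B -->|secures| C[Container]
    C -->|secures| D[Code]
```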
Supply chain security has become increasingly critical as attackers focus on exploiting vulnerabilities in the software development and delivery process. A comprehensive approach addresses all stages from source code to production deployment.
Container image supply chain security is critical for protecting your applications from their inception:

**Use trusted base images**

- Prefer minimal, official images (Alpine, distroless)
- Validate publisher identity and authenticity
- Maintain a private registry of approved base images
- Consider the SLSA (Supply-chain Levels for Software Artifacts) framework
- Regularly update base images for security patches
**Scan images for vulnerabilities**

- Implement scanning in CI/CD pipelines
- Use multiple scanners for better coverage
- Establish severity thresholds for blocking builds
- Maintain continuous scanning in registries
- Have clear remediation workflows for findings
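As a sketch of the pipeline step, a CI job that blocks the build on serious findings might look like the following (this assumes GitHub Actions and the Trivy scanner; the image name and job layout are illustrative, so adapt them to your pipeline):

```yaml
# Hypothetical CI job: fail the build when critical/high vulnerabilities are found
jobs:
  image-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: "1" # A non-zero exit code blocks the pipeline
```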
**Sign and verify images**

- Implement cryptographic signing of all images
- Use tools like Cosign, Notary, or Sigstore
- Verify signatures before deployment
- Maintain secure key management practices
- Ensure signature verification in air-gapped environments
**Implement software bills of materials (SBOMs)**

- Generate SBOMs for all container images
- Use formats like SPDX or CycloneDX
- Track all dependencies and their versions
- Enable vulnerability correlation with SBOMs
- Make SBOMs available to security teams
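For illustration, a heavily truncated CycloneDX SBOM might record an image's components like this (component names and versions are hypothetical):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:apk/alpine/openssl@3.0.13"
    }
  ]
}
```

Scanners can correlate the `purl` (package URL) entries against vulnerability databases without re-analyzing the image itself.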
**Enforce image admission policies**

- Use admission controllers for runtime enforcement
- Verify signatures and provenance at deploy time
- Block images with critical vulnerabilities
- Ensure images come from approved registries
- Implement policy as code for consistency
```bash
# Example Cosign image signing workflow

# Generate a key pair for signing
cosign generate-key-pair

# Sign an image with a private key
cosign sign --key cosign.key $IMAGE_REPO/myapp:latest

# Sign an image with keyless signing (Sigstore)
cosign sign --identity-token=$(gcloud auth print-identity-token) $IMAGE_REPO/myapp:latest

# Verify a signed image
cosign verify --key cosign.pub $IMAGE_REPO/myapp:latest

# Attach attestations (provenance, SBOM, scan results)
cosign attest --predicate sbom.json --key cosign.key $IMAGE_REPO/myapp:latest

# Verify attestations
cosign verify-attestation --key cosign.pub $IMAGE_REPO/myapp:latest
```
```yaml
# Example admission policy for signed images using the Sigstore policy controller
apiVersion: policy.sigstore.dev/v1alpha1
kind: ClusterImagePolicy
metadata:
  name: require-signatures
spec:
  match:
    - resource:
        kinds:
          - Pod
        namespaces:
          - "production"
          - "staging"
    - resource:
        kinds:
          - Deployment
          - StatefulSet
          - DaemonSet
        apiGroups:
          - "apps"
        namespaces:
          - "production"
          - "staging"
  authorities:
    - name: keyless
      keyless:
        url: https://fulcio.example.com
        identities:
          - issuer: https://accounts.google.com
            subject: user@example.com
    - name: company-key
      key:
        data: |
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAESLwLGUg66zDoCgA0rKgbkkexLzR/
          JRWFO3XV6eVJwjKLLYXrLX5HlUSEQFx6FCkqxN1VKVqgGtxC2hRGZ/rzLg==
          -----END PUBLIC KEY-----
        hashAlgorithm: sha256
    - name: policy-controller-conformance
      static:
        action: pass
```
Runtime security provides protection for containers during execution. This involves multiple complementary techniques to restrict container capabilities, monitor behavior, and prevent exploitation of vulnerabilities.
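One of those techniques is a restrictive `securityContext`, which limits what a compromised container can do at runtime. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001
    seccompProfile:
      type: RuntimeDefault # Apply the container runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/myapp:1.0.0 # illustrative image
      securityContext:
        allowPrivilegeEscalation: false # Block setuid-style privilege gains
        readOnlyRootFilesystem: true    # Prevent tampering with the image contents
        capabilities:
          drop: ["ALL"]                 # Remove all Linux capabilities
```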
Network security in Kubernetes involves controlling traffic flow between pods, isolating namespaces, encrypting communications, and protecting cluster ingress/egress points. Network Policies are the primary mechanism for implementing microsegmentation within a cluster.
```yaml
# Network Policy to restrict pod communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow
  namespace: production
  labels:
    app: api
    tier: backend
    environment: production
spec:
  # Select pods this policy applies to
  podSelector:
    matchLabels:
      app: api
  # Types of rules included in this policy
  policyTypes:
    - Ingress # Incoming traffic rules
    - Egress  # Outgoing traffic rules
  # Incoming traffic rules
  ingress:
    # Allow traffic from frontend pods on port 8080
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      # Allow frontend to access the API on specific ports
      ports:
        - protocol: TCP
          port: 8080
    # Allow traffic from the monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              purpose: monitoring
      ports:
        - protocol: TCP
          port: 9090 # Prometheus metrics port
  # Outgoing traffic rules
  egress:
    # Allow traffic to database pods on port 5432
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS resolution (kube-dns pods in the kube-system namespace)
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
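Targeted allow policies like this are usually paired with a namespace-wide default deny, so that any traffic not explicitly permitted is dropped. A common minimal form:

```yaml
# Deny all ingress and egress traffic for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # An empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```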
Pod Security Standards provide predefined security configurations to help secure pod deployments. In Kubernetes 1.25, Pod Security Policies (PSP) were removed in favor of the Pod Security Admission (PSA) controller, which enforces these standards.
Kubernetes provides built-in Pod Security Standards at three levels:

**Privileged**: unrestricted policy, providing the widest possible level of permissions

- No restrictions on pod specifications
- Equivalent to running with full root access on the node
- Suitable for system and infrastructure workloads managed by privileged users
- Examples: CNI plugins, storage drivers, log collectors with host access

**Baseline**: minimally restrictive policy that prevents known privilege escalations

- Disallows privileged containers, host namespaces, and hostPath volumes
- Suitable for most non-critical application workloads

**Restricted**: heavily restricted policy that follows current pod hardening best practices

- Requires containers to run as non-root with privilege escalation disabled
- Drops all capabilities (only NET_BIND_SERVICE may be added back)
- Requires a RuntimeDefault or Localhost seccomp profile
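With PSA, a level is applied per namespace through labels; for example (namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production # illustrative
  labels:
    # Reject pods that violate the restricted standard
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also warn users and record audit events on violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

The `warn` and `audit` modes are useful for trialing a stricter level before switching `enforce` over to it.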
Role-Based Access Control (RBAC) is a crucial security mechanism in Kubernetes that controls what actions users and service accounts can perform on which resources. Properly configured RBAC is essential for maintaining the principle of least privilege.
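As a minimal sketch, a namespace-scoped Role granting read-only access to pods (the `pod-reader` name is illustrative) looks like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader # illustrative; referenced by a RoleBinding's roleRef
  namespace: default
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"] # Read-only verbs, per least privilege
```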
RoleBindings grant the permissions defined in a Role to users, groups, or service accounts within a specific namespace.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
  labels:
    app: monitoring
subjects:
  # Individual user binding
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
  # Group binding
  - kind: Group
    name: monitoring-team
    apiGroup: rbac.authorization.k8s.io
  # Service account binding
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring # Subjects may live in a different namespace
roleRef:
  # The Role whose permissions are granted (must exist in this namespace);
  # "pod-reader" is an illustrative name
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
## Secret Management
::alert{type="warning"}
Kubernetes Secrets are not encrypted by default. Consider these approaches for better security:

1. **Enable encryption at rest for etcd**
   - Configure the API server with an encryption provider
   - Use strong encryption algorithms (AES-GCM, AES-CBC)
   - Securely manage encryption keys
   - Rotate encryption keys periodically
   - Monitor encryption configuration for changes
2. **Use external secret stores like HashiCorp Vault**
   - Maintain centralized secret management
   - Leverage dynamic secret generation
   - Implement fine-grained access controls
   - Utilize audit logging capabilities
   - Take advantage of secret rotation features
3. **Use Sealed Secrets or SOPS for GitOps workflows**
   - Encrypt secrets before storing them in Git
   - Use asymmetric encryption for better security
   - Implement proper key management
   - Enable safe storage of encrypted secrets in repositories
   - Automate decryption in the cluster
4. **Implement proper RBAC for Secret access**
   - Restrict secret access to specific users and service accounts
   - Separate secret access by namespace
   - Use minimally privileged roles
   - Audit secret access regularly
   - Consider using OPA Gatekeeper for additional policy controls
5. **Rotate secrets regularly**
   - Implement automated rotation mechanisms
   - Set up monitoring for secret age
   - Use short-lived credentials where possible
   - Plan for application handling of rotated secrets
   - Test rotation procedures before rolling them out
::
```yaml
# Example of using external secrets with ESO (External Secrets Operator)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: application
  labels:
    app: backend
    type: database
    environment: production
spec:
  refreshInterval: "15m" # How often to refresh the secret
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore # Reference to a central secret store
  target:
    name: database-credentials # Name of the created K8s Secret
    creationPolicy: Owner # Secret will be owned by this ExternalSecret
    template: # Optional template for the Secret
      type: kubernetes.io/basic-auth # Use a specific Secret type
      metadata:
        labels:
          app: backend
  data:
    - secretKey: username # Key in the Kubernetes Secret
      remoteRef:
        key: database/credentials # Path in Vault
        property: username # Property to extract
    - secretKey: password
      remoteRef:
        key: database/credentials
        property: password
---
# Define the ClusterSecretStore connecting to Vault
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.example.com:8200"
      path: "secret" # Base path for secrets
      version: "v2" # KV version
      auth:
        # Kubernetes auth method
        kubernetes:
          mountPath: "kubernetes"
          role: "external-secrets"
          serviceAccountRef:
            name: "external-secrets"
            namespace: "external-secrets"
      # Optional: Vault Enterprise namespace
      namespace: "my-vault-namespace"
      # Optional: custom CA bundle for the Vault server
      caBundle: "<base64-encoded-ca-bundle>"
---
# Encryption-at-rest configuration for the Kubernetes API server
# (referenced via the kube-apiserver --encryption-provider-config flag;
# note that EncryptionConfiguration has no metadata or spec fields)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc: # AES-CBC with PKCS#7 padding
          keys:
            - name: key1
              secret: <base64-encoded-key> # 32-byte key
      - identity: {} # Used for reading unencrypted secrets
```
Securing the Kubernetes API server is critical as it's the primary interface for managing the cluster. A properly hardened API server helps prevent unauthorized access and reduces the attack surface.
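In practice, much of this hardening shows up as API server flags. The excerpt below, from a hypothetical kube-apiserver static Pod manifest with illustrative file paths, sketches common settings:

```yaml
# Excerpt from a kube-apiserver static Pod manifest (paths are illustrative)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --anonymous-auth=false # Reject unauthenticated requests
        - --authorization-mode=Node,RBAC # Enforce Node and RBAC authorization
        - --enable-admission-plugins=NodeRestriction # Limit kubelet write access
        - --audit-log-path=/var/log/kubernetes/audit.log # Enable audit logging
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
        - --tls-min-version=VersionTLS12 # Disallow legacy TLS versions
        - --profiling=false # Disable the profiling endpoint
```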
Runtime threat detection provides real-time monitoring and protection for containerized workloads, allowing you to detect and respond to suspicious activities as they occur.
```yaml
# Example Falco rules for detecting suspicious activities
# (macros such as allowed_shell_containers and allowed_outbound_destinations_*
# are site-specific lists you define)
- rule: Terminal Shell in Container
  desc: A shell was spawned in a container
  condition: >
    container and
    shell_procs and
    container.image.repository != "alpine" and
    not falco_sensitive_mount_containers and
    not allowed_shell_containers
  output: >
    Shell spawned in a container (user=%user.name container=%container.name
    image=%container.image.repository:%container.image.tag shell=%proc.name parent=%proc.pname
    cmdline=%proc.cmdline pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: WARNING
  tags: [container, shell, mitre_execution]

# Detect privilege escalation attempts
- rule: Privilege Escalation Detected
  desc: Detects attempts to escalate privileges in containers
  condition: >
    spawned_process and container and
    (proc.name in (setuid_binaries) or
     proc.name = "sudo" or
     proc.name = "su")
  output: >
    Privilege escalation attempt in container (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository:%container.image.tag
    pod=%k8s.pod.name ns=%k8s.ns.name)
  priority: CRITICAL
  tags: [container, privilege-escalation, mitre_privilege_escalation]

# Detect unusual network connections
- rule: Unexpected Outbound Connection
  desc: Container making an outbound connection not matching the allowlist
  condition: >
    outbound and container and
    not (container.image.repository in (allowed_outbound_destinations_images)) and
    not (fd.sip in (allowed_outbound_destinations_ips)) and
    not (fd.sip in (rfc_1918_addresses))
  output: >
    Unexpected outbound connection from container (source=%container.name:%container.id
    image=%container.image.repository:%container.image.tag
    destination=%fd.sip:%fd.sport process=%proc.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, network, exfiltration, mitre_exfiltration]

# Detect sensitive file access
- rule: Sensitive File Access in Container
  desc: Detect access to sensitive files in containers
  condition: >
    open_read and container and
    (fd.name startswith "/etc/shadow" or
     fd.name startswith "/etc/passwd" or
     fd.name startswith "/etc/kubernetes/pki" or
     fd.name startswith "/var/run/secrets")
  output: >
    Sensitive file accessed in container (user=%user.name file=%fd.name
    container=%container.name image=%container.image.repository:%container.image.tag
    command=%proc.cmdline pod=%k8s.pod.name)
  priority: WARNING
  tags: [container, file-access, mitre_credential_access]
```
Policy enforcement in Kubernetes ensures that all resources comply with organizational requirements and security best practices. Open Policy Agent (OPA) Gatekeeper is a powerful tool for implementing policy as code.
```yaml
# OPA Gatekeeper constraint template
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
  annotations:
    description: "Requires resources to have a specific set of labels"
    control: "NIST SP 800-53: CM-2, CM-6"
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              description: "List of required labels"
              items:
                type: string
            message:
              type: string
              description: "Optional custom violation message"
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": details}] {
          # Get all provided labels
          provided := {label | input.review.object.metadata.labels[label]}

          # Get all required labels from parameters
          required := {label | label := input.parameters.labels[_]}

          # Find missing labels
          missing := required - provided

          # Trigger a violation only if labels are missing
          count(missing) > 0

          # Use the custom message from parameters when set, otherwise a default
          # (Rego has no ternary operator; object.get supplies the fallback)
          msg := sprintf(object.get(input.parameters, "message", "Missing required labels: %v"), [missing])

          # Provide violation details
          details := {
            "missing_labels": missing,
            "provided_labels": provided,
            "required_labels": required,
            "resource": input.review.object.kind
          }
        }
---
# Constraint using the template
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: deployment-must-have-labels
  annotations:
    description: "Requires all deployments to have standard labels"
    security-severity: "medium"
    owner: "security-team"
spec:
  enforcementAction: deny # Can be deny, dryrun, or warn
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces:
      - "default"
      - "production"
    excludedNamespaces:
      - "kube-system"
      - "monitoring"
    labelSelector:
      matchExpressions:
        - key: exemption
          operator: NotIn
          values: ["policy-exempt"]
  parameters:
    labels:
      - "app"
      - "environment"
      - "owner"
      - "cost-center"
    message: "Resources must have the following labels: %v"
```
Effective security monitoring and incident response capabilities are critical for detecting, investigating, and remediating security incidents in Kubernetes environments.
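Audit logs are a key input for detection and investigation. A minimal audit Policy sketch that records who touched Secrets without logging their contents might look like this (rule order matters, since the first matching rule wins):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record access to Secrets and ConfigMaps at the Metadata level,
  # so the request is logged but the secret values are not
  - level: Metadata
    resources:
      - group: "" # core API group
        resources: ["secrets", "configmaps"]
  # Log full request bodies for mutating operations
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
  # Skip high-volume read-only traffic
  - level: None
    verbs: ["get", "list", "watch"]
```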
Zero Trust in Kubernetes follows the principle of "never trust, always verify" by implementing continuous authentication, authorization, and validation at every layer of the stack.
```mermaid
graph TD
    A[Identity Provider] -->|Authentication| B[API Server]
    B -->|Authorization| C[RBAC Check]
    C -->|Admission| D[Policy Engine]
    D -->|Runtime| E[Workload]
    F[Network Policy] -->|Enforce| E
    G[mTLS] -->|Secure| E
    H[Monitoring] -->|Observe| E
    I[Threat Detection] -->|Protect| E
    J[Secret Store] -->|Provide| E
    %% Additional relationships
    K[Certificate Authority] -->|Issues Certs| G
    L[Policy as Code] -->|Defines| D
    M[Behavioral Baseline] -->|Informs| I
    N[Audit Logging] -->|Records| B
    O[Vulnerability Scanner] -->|Checks| E
    P[Binary Authorization] -->|Validates| E
    Q[Service Mesh] -->|Controls| F
```
A comprehensive Zero Trust implementation for Kubernetes includes:

**Identity-based access:**

- Strong authentication with multi-factor authentication (MFA)
Implementing a cloud native security maturity model helps organizations methodically improve their security posture over time, with clear progression paths and measurable objectives.