Systemic Kubernetes hardening gaps (377 Checkov failures)

- Severity: Medium (systemic, large attack-surface amplifier; individual issues are low-to-medium but compounded)
- CVSS 3.1: N/A (this is a hardening/defense-in-depth finding, not a single exploitable bug)
- CWE: CWE-1008 (Weak Security Feature), CWE-732 (Incorrect Permission Assignment), CWE-269 (Improper Privilege Management)
- Asset: corezoid/helm repository (deployed to production EKS clusters)
- Discovered: 2026-04-26
- Status: Open — recommend phased hardening over 1 quarter
- Reporter: Claude (CVP pentest)
Summary
Checkov IaC scan of the corezoid/helm umbrella chart reports 377 failed checks across 110 resources (1387 passing). The failures cluster into systemic Kubernetes hardening gaps — none is an immediate exploit, but together they eliminate multiple layers of defense-in-depth and mean that a single compromised container has much broader options for lateral movement, privilege escalation, and data exfiltration than it should.
Findings by check type (Checkov)
| Count | Check ID | Description |
|---|---|---|
| 75 | CKV_K8S_21 | default namespace used — resources should go in purpose-named namespaces |
| 19 | CKV_K8S_40 | Containers run with low UID — should run as high UID (≥10000) to avoid host-user conflicts |
| 19 | CKV_K8S_38 | Service-account tokens mounted where not needed — pods can hit the k8s API |
| 19 | CKV_K8S_43 | Images pulled by tag, not digest — supply-chain risk (tag can be re-pushed) |
| 18 | CKV_K8S_37 | Containers admitted with capabilities assigned — principle of least privilege violated |
| 18 | CKV_K8S_31 | Seccomp profile not set to docker/default or runtime/default — container-escape CVEs become reachable |
| 18 | CKV_K8S_20 | allowPrivilegeEscalation: true (default) — setuid binaries work |
| 18 | CKV_K8S_22 | Root filesystem writable — malware persistence possible |
| 18 | CKV_K8S_28 | NET_RAW capability admitted — raw packet capture / ARP spoofing inside cluster |
| 17 | CKV_K8S_29 | No pod-level securityContext |
| 15 | CKV_K8S_13 | Memory limits missing — one bad pod can OOM-kill a node |
| 15 | CKV_K8S_11 | CPU limits missing — noisy-neighbor / cryptomining container |
| 15 | CKV2_K8S_6 | No NetworkPolicy — any pod can talk to any pod (flat east-west) |
| 14 | CKV_K8S_30 | No container-level securityContext |
| 14 | CKV_K8S_12 | Memory requests missing — scheduler makes poor placement decisions |
| 14 | CKV_K8S_10 | CPU requests missing |
| 14 | CKV_K8S_35 | Secrets mounted as env vars, not files — visible in /proc/*/environ, crash dumps, logs |
| 12 | CKV_K8S_23 | Root containers admitted |
| 12 | CKV_K8S_15 | imagePullPolicy: Always not set — cached images used, rotation ineffective |
| 4 | CKV_K8S_25 | Containers with added Linux capabilities |
Full details: tools-out/checkov-helm.json
Impact scenarios
Scenario A — compromised container + privilege escalation: Attacker lands RCE in any pod (via an application vuln, malicious image layer, or stolen deploy key). With allowPrivilegeEscalation: true + root containers + writable root FS, they can:
- Run setuid binaries
- Persist malware across pod restarts (write to PVC, scheduled cronjobs)
- Install kernel modules (if capabilities include SYS_MODULE)
- Read the mounted service-account token → hit the k8s API → enumerate/modify other pods
Scenario B — east-west lateral movement: Without NetworkPolicy, a compromised pod in namespace A can directly connect to:
- Pods in namespace B (including database pods)
- Kubernetes API server (if reachable from pod network)
- Other cluster services via coredns (internal service discovery works unchecked)
Scenario C — supply chain via mutable tags: Images are pulled by myregistry/app:latest instead of myregistry/app@sha256:.... If an attacker pushes a malicious layer under the same tag, the next pod restart runs the malicious image — without any code or config change detected by GitOps tooling.
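The fix is a one-line change per workload spec; the digest value below is a placeholder:

```yaml
# Mutable: anyone with write access to the registry can re-push this tag
image: myregistry/app:latest

# Immutable: content-addressed; re-pushing the tag cannot change what runs
image: myregistry/app@sha256:...   # pin the full 64-hex digest
```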
Scenario D — secrets exposure via env vars: Secrets mounted as env vars appear in:
- /proc/<pid>/environ (readable by any co-located sidecar or debug tool)
- Application crash dumps
- Accidental logging (process.env dumps)
- OOMKilled events (some k8s versions log pod env on kill)
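The file-based alternative for CKV_K8S_35 can be sketched as follows (the Secret name, mount path, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myregistry/app:1.0.0
      volumeMounts:
        - name: credentials
          mountPath: /etc/secrets   # app reads e.g. /etc/secrets/db-password
          readOnly: true
  volumes:
    - name: credentials
      secret:
        secretName: app-credentials
        defaultMode: 0400           # owner read-only
```

File-mounted secrets never appear in /proc/*/environ and are not captured by env dumps in logs or crash reports.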
Remediation
Priority 1 — chart-level defaults (1 PR, one umbrella-chart change):

```yaml
# values.yaml — apply to all subcharts
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 10001
  runAsGroup: 10001
  fsGroup: 10001
  seccompProfile:
    type: RuntimeDefault

securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  runAsNonRoot: true

automountServiceAccountToken: false  # enable only where the pod genuinely needs the API

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
```
Override only where a workload legitimately needs exceptions (e.g., postgres init container needs runAsUser:0 to chown PVC).
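A per-subchart exception would then look something like this in the umbrella values.yaml (the postgresql key and value layout are assumptions about the subchart's structure; adjust to the real chart):

```yaml
# values.yaml — exception scoped to the one workload that needs it
postgresql:
  podSecurityContext:
    runAsNonRoot: false
  securityContext:
    runAsUser: 0                   # init container must chown the PVC
    readOnlyRootFilesystem: false
```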
Priority 2 — network segmentation (1-2 sprints):
- Install Cilium or Calico if not already (EKS supports both).
- Default-deny NetworkPolicy per namespace.
- Allow-list specific service-to-service traffic.
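The per-namespace default-deny plus one allow-list rule can be sketched as follows (namespace and label names are illustrative):

```yaml
# Default-deny: blocks all ingress and egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: workers          # repeat per namespace
spec:
  podSelector: {}             # selects all pods
  policyTypes:
    - Ingress
    - Egress
---
# Allow-list: only pods in the workers namespace may reach postgres on 5432
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-workers-to-db
  namespace: db
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: workers
      ports:
        - protocol: TCP
          port: 5432
```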
Priority 3 — supply chain (ongoing):
- Pin all images by SHA digest (image: nginx@sha256:...), not tag.
- Enable cosign verification on pod admission (you already have init-verify-image-signature in account — extend it cluster-wide via Kyverno or Gatekeeper).
- Automate digest updates via Renovate or digestabot.
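Cluster-wide cosign verification via Kyverno might look like the following sketch (registry pattern and public key are placeholders, not values from this environment):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce   # reject unsigned images at admission
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "myregistry/*"         # placeholder registry pattern
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```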
Priority 4 — namespaces (1 sprint):
Create purpose-named namespaces (account,
apigw, workers, simulator, etc.)
and migrate resources out of default. This also enables
per-namespace ResourceQuotas.
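A per-namespace quota can then be attached; the numbers below are illustrative starting points, not sized to the actual workloads:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: workers-quota
  namespace: workers          # one quota per purpose-named namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```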
Verification after remediation
Re-run checkov: expected to drop to <50 failed checks (from 377) once umbrella defaults are set. Add a CI gate (GitHub Actions step shown; a .gitlab-ci.yml script job is equivalent):

```yaml
# .github/workflows/ — checkov exits non-zero on any failed check by default
- name: Checkov gate
  run: checkov -d . --framework helm --compact
```